Using HomeSeer's parsing engine for the Echo


  • #31
    I don't know then. My myhs account is set up and working. Up until last night, I had been using the old HomeSeer skill (set up manually like this) with no problems. I've disabled it from the Alexa app and created a brand new skill for this. I just tried disabling the new skill and re-enabling the old skill, and I could link my account just fine. I'm just not able to link using the new skill.

    I'm thinking maybe I should instead try modifying the old skill since it seems to be able to link. I just didn't want to lose all the interaction model and utterances in case I couldn't get this new one working.

    Here are my new skill screenshots






    • #32
      So I got it to work by editing the old skill and replacing the Intent Schema and Sample Utterances with the new stuff. That seemed to be the only difference. No issues whatsoever. I just could not get it to work at all when trying to create a new skill.

      I did notice, however, in the old skill, I had a Client Id with a value of:
      amzn1.application-oa2-client.d839fbb21603478dbd332ab26fc1f1f1

      I have no idea where that came from... I assume I got that from somewhere and put it in when I originally created that skill. So I left it in there and it appears to work fine.



      • #33
        Originally posted by Sgt. Shultz View Post
        We have been working on a skill that has been stripped down to the bare minimum, and using HomeSeer's parsing engine to handle the Echo requests.

        The new skill schema has one function, and the sample utterances has one line:

        {
            "intents": [
                {
                    "intent": "ParseText",
                    "slots": [
                        {
                            "name": "TextToParse",
                            "type": "LITERAL"
                        }
                    ]
                }
            ]
        }

        Endpoint: https://myhs.homeseer.com/process_echo/
        Hi,
        This is interesting. I have written a custom skill that incorporates the ability to register an Echo device to a room, and then pass that to HomeSeer.
        So, for example, when a user in the Living Room says,
        Code:
        "Alexa, tell Jarvis to turn on the lamp"
        I transform this to
        Code:
        "Alexa, tell Jarvis to turn on the Living Room Lamp"
        But it's smart enough that if a user says "Alexa, tell Jarvis to turn on the Kitchen Lights", the utterance trumps the Echo room registration.
        Right now, I simply pass this to HomeSeer with
        Code:
        /JSON?request=voicecommand&phrase=
        I am wondering, though, whether it would be better to link my skill to HomeSeer as suggested above -- but then I need the JSON syntax to package and send my transformed speech to the process_echo endpoint.
        In this way, I can enjoy the security and processing engine of myhs and easy linking to a user account, while still lighting up room registration functionality.

        s.
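        For anyone who wants to tinker with this locally, here is a minimal Python sketch of the room-rewrite-then-forward idea, assuming a hypothetical host name and room list and a deliberately naive rewrite rule (the real skill's logic is not shown in this thread):

```python
from urllib.parse import urlencode

# Hypothetical room list for illustration; the real skill presumably
# keeps its own registry of Echo-to-room assignments.
KNOWN_ROOMS = ["living room", "kitchen", "bedroom"]

def xform_phrase(phrase, echo_room):
    """Insert the Echo's registered room unless the user already named one."""
    lower = phrase.lower()
    if any(room in lower for room in KNOWN_ROOMS):
        return phrase  # an explicit room in the utterance trumps registration
    # naive rewrite: "turn on the lamp" -> "turn on the living room lamp"
    return lower.replace("the ", "the %s " % echo_room, 1)

def build_voicecommand_url(phrase, host="homeseer.local", port=80):
    """Build the HS JSON voicecommand URL; host/port are assumptions."""
    return "http://%s:%d/JSON?%s" % (
        host, port, urlencode({"request": "voicecommand", "phrase": phrase}))

cmd = xform_phrase("turn on the lamp", "living room")
print(build_voicecommand_url(cmd))
```

        Actually sending the request is then just a `urllib.request.urlopen()` on the resulting URL.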



        • #34
          Not to resurrect a 3-year-old thread or anything... but I'm just now finding out about passing VR text directly through HS's JSON interface. What a GREAT feature: use HS for its complex processing capabilities and any other device for the actual VR. I finally connected up some mics & speakers using the ClearOne XAP800s and absolutely hate the VR of Windows 10. I'm not using the Alexa skill yet, as I may look into other apps for VR that could spit out the recognized text like this (I don't have an Android, so no Automate for me). Possibly work on some AI that could sit between me & HS. I use Siri via Homebridge and love the recognition reliability, but I really miss the multi-command possibilities we can bake into our vb scripts.

          Using the JSON interface, I can trigger lights/devices on/off by manually entering voice commands in the URL, but what do you do when HS asks for confirmation? I tried sending another phrase of just yes or no, and HS never accepts it and never runs the command/event. I can remove the confirmation on the Voice page of settings, but what if I wanted to script question responses from HS, or just leave HS open to receive another command as an extension of the original one? E.g. "turn on the lights" could run a script that knows where I am and turns on the lights in that room. I might then want the script to stay in control and keep "listening" for, say, another 15 seconds in case I send a follow-up command in the same context: after the lights turn on, say "brighter" and have the running script catch that and brighten the lights. Or when I come home, have the TTS speak a greeting and then ask me if I'd like to "turn on the media center?", to which I would answer just yes or no.

          Is there a list of apps/devices that will perform VR and spit out the recognized text to a user-entered URL? I see the Kinect is praised for its VR capabilities but does it also pass along the recognized command as text or does it directly interface with HS devices, like Homebridge does? And if there is a VR package/device that performs as well as Kinect/Alexa/Siri, can it be adapted to hit the HS JSON URL without any "tell homeseer to..." rules?



          • #35
            Make sure your yes/no response is lower case as that response code is case sensitive.

            Wade


            I know nothing...., nothing!!!



            • #36
              RandyInLA, I think you can accomplish what you're asking for in regards to the conversation aspect with Jon00's Echo Skill Helper.

              https://forums.homeseer.com/forum/3r...for-homeseer-3

              You do need to use the "tell/ask homeseer" prefix, however.

              I got Jon00's helper to work with Google Home devices:

              https://forums.homeseer.com/forum/ho...stom-responses

              I too attempted to use a Kinect microphone with the Speaker app but that didn't work well enough for me and did not appear to utilize the JSON api allowing for custom responses. So, I wrote my own app which would record audio, send to Google for transcription, then send to HS. My app worked well, but I abandoned using it and developing it further because the 3rd party app I used for the always-listening portion, which triggered my app, did not work well for me as it relied on Windows 10 speech API:

              https://forums.homeseer.com/forum/ho...olume-and-more

              Echo Dots and Google Home Minis are cheap and easy to place everywhere. I'm happy with my Google Home devices and custom Action that I created. I need to say "Ok Google, tell Homeseer _______". For commands I say a lot I can create a Google Routine so that I can skip the "tell Homeseer" part.

              I do not believe the Kinect plugin utilizes HS's text parsing engine or Jon00's helper, which is why I never tried it. Other than that, I've read the plugin works well.

              Good luck!
              HomeSeer 3, Insteon, Z-wave, USB-UIRT, Google Hub/Chromecasts/Smart Speakers, Foscam cameras, Amcrest camera, RCA HSDB2a doorbell
              Plugins: BLLAN, BLOccupied, BLUSBUIRT, Chromecast, Insteon, Jon00 Homeseer/Echo Skill Helper, Jon00 DB Charting, MediaController, NetCAM, PHLocation2, Pushover 3P, weatherXML, Z-wave



              • #37
                Originally posted by Sgt. Shultz View Post
                Make sure your yes/no response is lower case as that response code is case sensitive.
                Thanks, Wade, but for some reason my replies to confirmation questions are working today. I tried both "yes" & "YES" and both are recognized and the events run.

                JSON?request=voicecommand&phrase=run%20the%20speak%20evening%20commute%20event
                { "Response":"You want me to run the speak evening commute event?" }

                JSON?request=voicecommand&phrase=YES
                { "Response":"OK, running the event Speak Evening Commute" }
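                The exchange above can be sketched end to end. The HTTP call here is stubbed out with a fake response shaped like the ones quoted (swap in a real `urllib.request.urlopen()` against your HS box), and the phrase is lower-cased before sending, per Wade's case-sensitivity tip:

```python
import json
from urllib.parse import urlencode

def voicecommand_url(phrase, base="http://homeseer.local/JSON"):
    """Build the voicecommand URL; the base URL is an assumption.
    Lower-casing guards against case sensitivity in the yes/no handler."""
    return base + "?" + urlencode({"request": "voicecommand",
                                   "phrase": phrase.lower()})

def fake_hs_response(url):
    """Stand-in for a live HTTP GET; shapes mimic the quoted responses."""
    if "phrase=yes" in url:
        return json.dumps({"Response": "OK, running the event"})
    return json.dumps({"Response": "You want me to run that event?"})

first = json.loads(fake_hs_response(
    voicecommand_url("run the speak evening commute event")))
confirm = json.loads(fake_hs_response(voicecommand_url("YES")))
print(first["Response"])
print(confirm["Response"])
```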



                • #38
                  Originally posted by mrceolla View Post
                  RandyInLA, I think you can accomplish what you're asking for in regards to the conversation aspect with Jon00's Echo Skill Helper.

                  https://forums.homeseer.com/forum/3r...for-homeseer-3

                  You do need to use the "tell/ask homeseer" prefix, however.

                  I got Jon00's helper to work with Google Home devices:

                  https://forums.homeseer.com/forum/ho...stom-responses

                  I too attempted to use a Kinect microphone with the Speaker app but that didn't work well enough for me and did not appear to utilize the JSON api allowing for custom responses. So, I wrote my own app which would record audio, send to Google for transcription, then send to HS. My app worked well, but I abandoned using it and developing it further because the 3rd party app I used for the always-listening portion, which triggered my app, did not work well for me as it relied on Windows 10 speech API:

                  https://forums.homeseer.com/forum/ho...olume-and-more

                  Echo Dots and Google Home Minis are cheap and easy to place everywhere. I'm happy with my Google Home devices and custom Action that I created. I need to say "Ok Google, tell Homeseer _______". For commands I say a lot I can create a Google Routine so that I can skip the "tell Homeseer" part.

                  I do not believe the Kinect plugin utilizes HS's text parsing engine or Jon00's helper, which is why I never tried it. Other than that, I've read the plugin works well.

                  Good luck!
                  It's funny you mention creating your own VR w/Google, because I did the exact same thing last night! I created a Python script utilizing the speech_recognition package and was very happy with the accuracy. I was going to look into coding "listen for the attention phrase" functionality tonight and then add passing the recognized text to HS. But first, I want to get Jon's Alexa Helper set up. I kept ignoring it when people mentioned it because I thought it was directly tied to Alexa, but now I see it lets one create voice commands and some other sweetness for anything that can hit the JSON URL and pass in VR text.

                  I'll send it VR text and have it speak replies via the speaker client. The output of the Windows 10 machine is connected to a few ClearOne XAP800 audio processing units, so the TTS can play out various speakers in the apartment, or even directly to my iPhone using the Zello push-to-talk walkie-talkie app! The overall goal is to be able to give voice commands to HS at home or away, and hear responses in the same TTS voice anywhere I happen to be (Amazon bought Ivona, but they recently started selling the voices for SAPI, so I bought Brian for $35). First step in a Jarvis-like framework.

                  The Alexa "tell hs..." is way too verbose and cumbersome for me. I can't stand saying "Hey Siri" 100 times a day, but I prefer it and Homebridge to Alexa or Google Home. Since I found out they added the ability to pass VR text via JSON, rolling my own VR seemed like the logical next step to keep voice control and lose the "Hey Siri" or any other prefix. Looking forward to diving into some scripted hooks so the system stays in "listen for another command" mode for a few seconds after each command.

                  Thanks for the links to other discussions!



                  • #39
                    I have a tear in my eye...

                    With Siri & Homebridge, I could never use just the word "lights", as in "Hey Siri, lights", to trigger an event, because HomeKit hijacked that word and toggles every single device listed as a light! I tried it again just now: lights that were off came on, lights that were on turned off, and any other devices listed in Homebridge as a light toggled. SOO frustrating!! Why Apple decided to pick what "lights" should do is stupefying!

                    Using the Jon00 Echo Skill Helper, within 2 minutes, I created a trigger for just the word, "lights" (match exactly) and have it run my event that checks my current location and then toggles lights in that room. Works perfectly! I had been using the word, "Hey Siri, Lighting" for a few weeks and "Hey Siri, Illumination" for a few weeks before that. "Lights" is SO much better, imho.



                    • #40
                      Originally posted by mrceolla View Post
                      My app worked well, but I abandoned using it and developing it further because the 3rd party app I used for the always-listening portion, which triggered my app, did not work well for me as it relied on Windows 10 speech API
                      Did you look at the PocketSphinx Python package for attention phrase detection? You can create a file with a list of attention words and it will limit recognition to what's in that file. Then, upon detection, you can continue recording to capture the command. With multiple words, I assume that means we can call attention to the VR script in multiple ways, i.e. you could say "Computer", your wife could say "Hey Sally", etc. Or maybe set a list of ever-increasing importance if it doesn't catch your word the first time: "Computer", "Hey, Computer", "YO! COMPUTER", "F'ing answer me, computer", etc.

                      https://pypi.org/project/pocketsphinx/
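                      For reference, a CMU Sphinx keyword list is just a text file of "phrase /threshold/" lines, where a looser (larger) threshold fires more easily. A rough sketch of generating one (the phrases and thresholds are made up, and the commented-out LiveSpeech wiring is untested and needs a microphone):

```python
# Escalating attention phrases: later entries get progressively more
# sensitive (larger) thresholds so a shout is easier to catch.
ATTENTION_PHRASES = [
    ("computer", "1e-40"),
    ("hey computer", "1e-30"),
    ("yo computer", "1e-20"),
]

with open("keyphrases.list", "w") as f:
    for phrase, threshold in ATTENTION_PHRASES:
        f.write("%s /%s/\n" % (phrase, threshold))

# Wiring it up would look roughly like this (untested sketch):
# from pocketsphinx import LiveSpeech
# for hit in LiveSpeech(lm=False, kws="keyphrases.list"):
#     print("attention phrase heard:", hit)

print(open("keyphrases.list").read())
```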



                      • #41
                        I did not. Sounds cool, but I've also abandoned the idea of a PC based voice recognition system. Once I got my first Google Home mini connected to HS3 via the aforementioned Action, and also the Chromecast plugin, I went all out and placed them everywhere. I don't know of an easier, cheaper way to place mics and speakers connected to HS, everywhere.

                        I also wanted to mention that both Alexa and Google Home allow direct control of lights and other supported devices without saying "tell homeseer". It's just if you want to go outside of those bounds that you need to say it. Saying it uses the HS JSON speech API instead of whatever else is normally used.

                        Also, not sure about Alexa, but for Google Home, if you add your lights to a certain room and assign your Google Home device to that room, you can control things generically, like "turn on the lights", and it will know which ones you're talking about. In my case I wanted "turn on the lights" to turn on a lighting scene instead of all room lights at 100%. So, when Google Home imported my scene device, I renamed it to "The Lights" and assigned it to the room. Now it turns on just that device, which turns on my scene.



                        • #42
                          Originally posted by mrceolla View Post
                          I did not. Sounds cool, but I've also abandoned the idea of a PC based voice recognition system. Once I got my first Google Home mini connected to HS3 via the aforementioned Action, and also the Chromecast plugin, I went all out and placed them everywhere. I don't know of an easier, cheaper way to place mics and speakers connected to HS, everywhere.

                          I also wanted to mention that both Alexa and Google Home allow direct control of lights and other supported devices without saying "tell homeseer". It's just if you want to go outside of those bounds that you need to say it. Saying it uses the HS JSON speech API instead of whatever else is normally used.

                          Also, not sure about Alexa, but for Google Home, if you add your lights to a certain room and assign your Google Home device to that room, you can control things generically, like "turn on the lights", and it will know which ones you're talking about. In my case I wanted "turn on the lights" to turn on a lighting scene instead of all room lights at 100%. So, when Google Home imported my scene device, I renamed it to "The Lights" and assigned it to the room. Now it turns on just that device, which turns on my scene.
                          The localization of GH commands for knowing device location/context is nice.

                          I went through and verified that all devices I want to control with original HS voice commands had their Voice Command checkboxes checked (or they can't be found when sending the phrase via JSON), and read up on the proper voice command syntax for launching an event or creating delayed actions via voice ("in 5 minutes do this" vs. "do this in 5 minutes"; I see Jon00's plugin allows for saying either). Re: ever having to say "tell hs to...", I want to say as little as possible. lol. I thought about setting up a Kinect to learn certain movements so I could snap a finger and nod my head towards a light to have it turn on (kidding). My Siri/Homebridge will still be connected, so I can always use that until I fix a voice command via Jon00's plugin (love that plugin!).

                          Not sure many have noticed, but the Siri/HB integration (i.e. HomeKit) has a cool feature that will toggle any HS device's on/off status by simply saying the name of it. "Hey Siri, closet light" will turn it off if on, and on if off. When sitting on the couch, I can push the Siri button on my Apple TV remote and just say "desk lamp". You can't do that with HS and voice commands only: if I type "...&phrase=closet light" I get "Response":"No command was found in the phrase.". Might be a nice thing for Jon00 to add to his plugin? You can also adjust the behavior for devices that have multiple values. For example, my AC fan doesn't have an off value; if the AC is on, the fan can be low/med/high only (status values: 1/2/3). So in Homebridge I set offValue=1 and onValue=3... then "Hey Siri, AC Fan" toggles between low/high and "Hey Siri, AC" toggles the AC on/off.
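                          A rough sketch of what a toggle-by-name helper on top of HS3's JSON interface might look like. The device refs, on/off values, and base URL are all assumptions, and the calls are left as URL builders rather than live requests; `getstatus` and `controldevicebyvalue` are, I believe, the stock HS3 JSON requests for reading and setting a device:

```python
from urllib.parse import urlencode

BASE = "http://homeseer.local/JSON"          # assumed HS address
DEVICES = {"closet light": 101, "desk lamp": 102}  # name -> device ref (made up)

def status_url(ref):
    """URL to read a device's current value via request=getstatus."""
    return BASE + "?" + urlencode({"request": "getstatus", "ref": ref})

def control_url(ref, value):
    """URL to set a device via request=controldevicebyvalue."""
    return BASE + "?" + urlencode({"request": "controldevicebyvalue",
                                   "ref": ref, "value": value})

def toggle_url(name, current_value, on_value=100, off_value=0):
    """Given a spoken device name and its current value, build the URL
    that flips it to the opposite state."""
    ref = DEVICES[name.lower()]
    target = off_value if current_value == on_value else on_value
    return control_url(ref, target)

print(toggle_url("closet light", 0))  # device is off, so build the "on" URL
```

                          The same two-value trick as the Homebridge offValue/onValue setting falls out of the `on_value`/`off_value` parameters, e.g. `toggle_url("desk lamp", 1, on_value=3, off_value=1)` for a low/high fan.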

                          Still haven't started on the Python attention phrase solution. I've been Googling & watching YouTube clips of how other people approach it, and I'm stunned at how many people made their videos all about installing the Python packages and typing the exact. same. script. word. for. word. while acting like they came up with the script themselves!

                          I'll be adding some sort of wav file effect in the Python routines to indicate the attention phrase was recognized and another after the voice command was understood and executed. Well, in hopes that it was understood correctly, anyway. Either something similar to Siri or Star Trek TNG. With Siri, if you just say, "Hey Siri" and nothing else, you get the sound that she's ready. If you say, "Hey Siri, <command>" close enough together, Siri skips the attention blip sound and goes straight to the "I understood and executed" blip sound.
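                          Generating the acknowledgement blip itself doesn't need any extra packages; here is a sketch using only the Python standard library (tone, length, and filename are arbitrary choices), which can then be played with winsound, aplay, or afplay:

```python
import math
import struct
import wave

RATE, FREQ, SECONDS = 22050, 880.0, 0.15  # sample rate, tone, duration

# Synthesize a short sine-wave beep as 16-bit little-endian PCM frames.
frames = bytearray()
for n in range(int(RATE * SECONDS)):
    sample = int(20000 * math.sin(2 * math.pi * FREQ * n / RATE))
    frames += struct.pack("<h", sample)

with wave.open("ack.wav", "wb") as w:
    w.setnchannels(1)   # mono
    w.setsampwidth(2)   # 2 bytes per sample = 16-bit
    w.setframerate(RATE)
    w.writeframes(bytes(frames))

print("wrote ack.wav with", int(RATE * SECONDS), "frames")
```

                          A second file at a different pitch can serve as the "understood and executed" blip, Siri-style.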

