
    Using HomeSeer's parsing engine for the Echo

    We have been working on a skill that has been stripped down to the bare minimum, and using HomeSeer's parsing engine to handle the Echo requests.

    The new skill's intent schema has a single intent, and the sample utterances file has one line:

    {
      "intents": [
        {
          "intent": "ParseText",
          "slots": [
            {
              "name": "TextToParse",
              "type": "LITERAL"
            }
          ]
        }
      ]
    }


    ParseText {The Text|TextToParse}


    This allows the phrase, in its entirety, to be passed to HS3 and processed there.

    Following standard practices with the Echo, devices work better with simple, unique names.

    With this new skill, complex names are fine, and having multiple devices with the same name is not an issue.

    Commands will now be possible as well:

    "Tell HomeSeer to, in 10 minutes, dim the living room lights to 10 percent for two hours"
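
    To make that concrete, everything after "tell HomeSeer to" lands in the single TextToParse slot. Here is a rough sketch (as a Python dict) of the relevant part of the Alexa intent request for the phrase above; this is Amazon's generic custom-skill format shown purely for illustration, not HS-specific code:

    Code:
    # Illustrative only: the portion of the Alexa IntentRequest that matters here.
    intent = {
        "name": "ParseText",
        "slots": {
            "TextToParse": {
                "name": "TextToParse",
                "value": "in 10 minutes dim the living room lights to 10 percent for two hours",
            }
        },
    }

    # HS3's parsing engine receives intent["slots"]["TextToParse"]["value"] in its entirety.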


    You must have at least version 230 of HomeSeer to use this type of skill.
    This version is currently in the beta section of the updater.

    For those wishing to set up their own skill:

    Endpoint: https://myhs.homeseer.com/process_echo/

    Do you allow users to create an account or link to an existing account with you? Select 'Yes'
    Authorization URL: https://myhs.homeseer.com/echo/user_app/
    Privacy Policy URL: https://www.homeseer.com/privacy

    You will need the vendorID from the 'Redirect URL' field when you log in to link your skill to myHS.
    Last edited by Sgt. Shultz; February 3, 2016, 03:07 PM.
    Wade

    "I know nothing... nothing!"

    #2
    Is there or will there be the possibility of not having to say "tell homeseer to" or needing to address the device as Alexa? These things are deal breakers for me.
    Originally posted by rprade
    There is no rhyme or reason to the anarchy a defective Z-Wave device can cause



      #3
      Originally posted by S-F View Post
      Is there or will there be the possibility of not having to say "tell homeseer to" or needing to address the device as Alexa? These things are deal breakers for me.
      Changing how those things work is a deal breaker for Amazon.

      (In other words, no: you always have to have a keyword that tells it which skill should handle the command, and your only options are addressing it as Alexa or Amazon (or Echo, thanks waynehead99). These are Amazon requirements, not anything HS can influence.)



        #4
        Originally posted by S-F View Post
        Is there or will there be the possibility of not having to say "tell homeseer to" or needing to address the device as Alexa? These things are deal breakers for me.
        I can't answer the first part, but I do know that as for changing the name from Alexa... well, you only have one option and that is Echo. Amazon controls that part and it hasn't changed since they created it, though there has been talk of adding more names. I suspect they picked Alexa after lots of research and figured it's the least common word used daily, and the easiest to have the device wake up on.

        Personally I don't mind that part as much as "tell HomeSeer"; I think they could tie into the lighting API... maybe?



          #5
          Originally posted by waynehead99 View Post
          I can't answer the first part, but I do know that as for changing the name from Alexa... well, you only have one option and that is Echo. Amazon controls that part and it hasn't changed since they created it, though there has been talk of adding more names. I suspect they picked Alexa after lots of research and figured it's the least common word used daily, and the easiest to have the device wake up on.

          Personally I don't mind that part as much as "tell HomeSeer"; I think they could tie into the lighting API... maybe?
          That's a path that HS has already started pursuing for the simple stuff. The "tell HomeSeer" skill is for more robust interaction.



            #6
            It looks like we may have the API one out by the end of the week.

            Between the two, it should be a fairly robust solution.

            Now if they would only add more commands to the API...
            Wade

            "I know nothing... nothing!"



              #7
              Originally posted by waynehead99 View Post
              I suspect they picked Alexa after lots of research and figured it's the least common word used daily, and the easiest to have the device wake up on.

              Personally I don't mind that part as much as "tell HomeSeer"; I think they could tie into the lighting API... maybe?
              I heard a funny story that people were watching the debates on TV and their Echos kept going off when they heard the word "election." I don't know, maybe it activates during Viagra commercials too.

              I have to agree with your comment on "tell {name}". I watched a YouTube video of another HS user demonstrating this (sorry, I can't remember his name), and I couldn't get over how strange it sounded to say, "Alexa, tell Charlotte to turn the living room lights on."

              I think the only other alternative I've seen is that someone made a library that mimicked a Wink (I think) hub. I believe it was a Vera user that built it. Since that is a first-class integration, you can get away without the "tell {name}" part. However, it only gives you very limited functionality for turning things on/off (as mentioned).
              - Tom

              HSPro/Insteon
              Web Site
              YouTube Channel



                #8
                Originally posted by tpchristian View Post
                I think the only other alternative I've seen is that someone made a library that mimicked a Wink (I think) hub. I believe it was a Vera user that built it. Since that is a first-class integration, you can get away without the "tell {name}" part. However, it only gives you very limited functionality for turning things on/off (as mentioned).
                But that's the part that we're already getting with the API integration that's coming, so no point in going to those lengths!



                  #9
                  Originally posted by tpchristian View Post
                  I think the only other alternative I've seen is that someone made a library that mimicked a Wink (I think) hub. I believe it was a Vera user that built it. Since that is a first-class integration, you can get away without the "tell {name}" part. However, it only gives you very limited functionality for turning things on/off (as mentioned).
                  HS has said they are also working on a lighting API integration, so two millennia from now, when Amazon gets around to approving it, we should be able to just say "Alexa, turn off den lights" without the "tell HomeSeer to" part.

                  (Shameless self-promotion)
                  If you want that functionality now, there's the HA Bridge software you mentioned. I wrote a script that will automatically configure it for you so you don't have to spend an hour entering on and off URLs for every one of your lights. My script can also serve as an Alexa skill; you can use either or both parts. The script is here: http://board.homeseer.com/showthread.php?p=1213711



                    #10
                    Is it going to be possible to run the new skill locally without having to go through myHS? Can one just set up a development skill and point it at the external address of their local HS system?

                    Not that I have anything against myHS, and it's not even a security concern; I just don't want another link in the chain that might go down.



                      #11
                      Originally posted by Thrag View Post
                      Is it going to be possible to run the new skill locally without having to go through myHS? Can one just set up a development skill and point it at the external address of their local HS system?

                      Not that I have anything against myHS, and it's not even a security concern; I just don't want another link in the chain that might go down.

                      Why, yes you can!

                      the URL syntax is:

                      /JSON?request=voicecommand&phrase=turn on the living room light

                      Place this after the address of your HS3 system and you're in business.

                      (be sure to replace 'turn on the living room light' with whatever phrase you need)
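
                      If you want to script against it, here's a minimal sketch (Python, standard library only; the host and port are placeholders for your own HS3 address, and the phrase has to be URL-encoded before it goes into the query string):

                      Code:
                      import urllib.parse
                      import urllib.request

                      # Placeholder address of a local HS3 system; adjust for your own install.
                      HS3_BASE = "http://192.168.1.50:80"

                      phrase = "turn on the living room light"

                      # URL-encode the phrase before adding it to the query string.
                      url = HS3_BASE + "/JSON?request=voicecommand&phrase=" + urllib.parse.quote(phrase)

                      with urllib.request.urlopen(url) as resp:
                          print(resp.read().decode())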
                      Wade

                      "I know nothing... nothing!"



                        #12
                        I mean, can we just set our own system as the endpoint in the skill setup? For example, right now I have "https://my.server.com/HS3EchoPlugin.aspx" as the endpoint; what would be the endpoint to get Alexa to talk directly to the new skill on my server?



                          #13
                          I doubt Amazon will/would allow anything like direct association outside of their cloud. Plus, the device is set up to rely on the cloud to process its requests and understand what you are saying.



                            #14
                            Originally posted by waynehead99 View Post
                            I doubt Amazon will/would allow anything like direct association outside of their cloud. Plus, the device is set up to rely on the cloud to process its requests and understand what you are saying.
                            What I'm talking about is entirely possible right now; it's how I run my system. It still goes to the Amazon cloud, of course, but for any skill, once the Amazon cloud has figured out what you said and which skill it needs to go to, the request is sent to whatever is configured as the skill's endpoint URL.

                            Anyone can log in to Amazon's developer console and create their own skill. There's a screen where you set things like the name you want to use when speaking and the URL of the web service that the request is sent to. I set up a development skill and it's pointed at an aspx page on my own server that handles the Alexa request.

                            For officially published Alexa skills that you get from Amazon itself, like the official HS skill, the URL set as the endpoint is the same for all users, not different for each individual user. For that reason it has to go through myHS (or some other central server on the HS side). However, if all their server does is find the right user's local HS server to forward the request to, and all the real processing is done on our individual local HS instances, it should be possible to set up a dev skill for yourself and point it directly at your system.

                            The path the regular skill will take for a request and response is:

                            My Echo --> Amazon cloud --> myHS --> the user's HS server --> myHS --> Amazon cloud --> my Echo

                            What I do with my own skill, and what I'd like to do with the official HS skill, is this:

                            My Echo --> Amazon cloud --> user's HS server --> Amazon cloud --> my Echo


                            Setting things up this way eliminates the middleman, and thus removes another potential point of failure. Plus, if you set up your own skill you can make the activation word whatever you want, so rather than "tell HomeSeer to" you can make it "tell the house to", or go all Star Trek and say "tell computer to", or whatever you want.

                            Separately from skills, there's the lighting API/Connected Home stuff, which I've barely looked into, but based on how the HA Bridge seems to work, it does local communication (after the audio goes to the Amazon cloud for speech-to-text, of course). The speech still goes to the cloud, but once it figures out what to do, it's the Echo itself that sends a request over the local network. My eventual hope is that they extend the lighting API to cover things like events and device commands beyond on/off/dim, so that none of this extra outside communication is necessary, nor would a skill even be needed at that point. Basically, I'm hoping my own work is made totally obsolete.
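
                            For anyone who wants to try the dev-skill route, the piece in the middle can be pretty small. Here's a rough sketch (Python/Flask, purely illustrative: the route, port, and HS3 address are placeholders, it leans on the /JSON voicecommand call Wade posted above, and the request/response fields are Amazon's standard custom-skill JSON, not anything HS-specific):

                            Code:
                            from flask import Flask, request, jsonify
                            import requests

                            app = Flask(__name__)

                            HS3_BASE = "http://127.0.0.1"  # placeholder address of the local HS3 system

                            @app.route("/alexa", methods=["POST"])  # hypothetical endpoint path for the dev skill
                            def alexa_endpoint():
                                # Amazon's cloud has already turned the speech into text; the spoken
                                # phrase arrives in the TextToParse slot of the ParseText intent.
                                # (LaunchRequest/SessionEndedRequest handling is skipped for brevity.)
                                body = request.get_json()
                                phrase = body["request"]["intent"]["slots"]["TextToParse"]["value"]

                                # Hand the whole phrase to HS3's parsing engine via the JSON API.
                                hs = requests.get(HS3_BASE + "/JSON",
                                                  params={"request": "voicecommand", "phrase": phrase})
                                answer = hs.json().get("Response", "Done")

                                # Standard Alexa custom-skill response envelope: speak the answer back.
                                return jsonify({
                                    "version": "1.0",
                                    "response": {
                                        "outputSpeech": {"type": "PlainText", "text": answer},
                                        "shouldEndSession": True,
                                    },
                                })

                            if __name__ == "__main__":
                                # Amazon requires the endpoint to be HTTPS with a valid certificate,
                                # so a reverse proxy or similar in front of this is assumed.
                                app.run(port=5000)

                            You would then set that HTTPS URL as the endpoint of your own dev skill in the developer console.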



                              #15
                              Originally posted by Thrag View Post
                              I mean, can we just set our own system as the endpoint in the skill setup? For example, right now I have "https://my.server.com/HS3EchoPlugin.aspx" as the endpoint; what would be the endpoint to get Alexa to talk directly to the new skill on my server?
                              Yes you can.

                              Instead of pointing to your .aspx page, send your request like this:

                              HTML Code:
                              https://my.server.com/JSON?request=voicecommand&phrase=whatever your command is

                              You just need to send the entire phrase using the schema and intent outlined.

                              The response contains a value called 'Response', such as:

                              { "Response":"I have turned on the christmas tree" }
                              Wade

                              "I know nothing... nothing!"

