VR Syntax Guide Discussion


    #16
    Originally posted by spud View Post
    I just realized that you can't do that if you are using device commands, so I may have to add support for optional words within choices.
    Oh - this would provide me infinitely more options and usability! Even if you just included a simple syntax change, where we could include "OR" in the device command string... for example:

    "Turn On [the] office [lights]" OR "Turn [the] Office [lights] on"

    ... as a quick and dirty way to implement it.

    P.S., I think S-F and I are Home Automation Brothers. I see him lurking in the same threads I frequent.

    hjk
    ---



      #17
      @ Spud,

      Maybe just make it so that comma-separated phrases are individually evaluated. That way the existing syntax wouldn't need to be changed.

      @ hjk,

      We're all home automation brothers. All of us with the exception of the home automation sisters that is.

      That said, next time you're in the Northeast, drop me a line and I'll buy you a beer.

      That's an open offer to everyone on the board too!
      Originally posted by rprade
      There is no rhyme or reason to the anarchy a defective Z-Wave device can cause



        #18
        In version 3.0.0.24, available here, optional words or phrases within multiple choices are now allowed.

        So this:
        <turn on [the] office [lights] | turn [the] office [lights] on>
        should work.

        Please test and let me know.
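
        As an aside, for anyone curious how a rule like this could map onto the Microsoft.Speech engine that the plugin's stack traces point to, here is a rough sketch using GrammarBuilder and Choices. It is only an illustration of the idea, not spud's actual implementation.

        Code:
        // Illustration only - not the plugin's real code. Builds a grammar equivalent to:
        //   <turn on [the] office [lights] | turn [the] office [lights] on>
        using Microsoft.Speech.Recognition;

        static class GrammarSketch
        {
            public static Grammar BuildOfficeLightsOn()
            {
                // Alternative 1: "turn on [the] office [lights]"
                var alt1 = new GrammarBuilder("turn on");
                alt1.Append("the", 0, 1);    // optional "the"
                alt1.Append("office");
                alt1.Append("lights", 0, 1); // optional "lights"

                // Alternative 2: "turn [the] office [lights] on"
                var alt2 = new GrammarBuilder("turn");
                alt2.Append("the", 0, 1);
                alt2.Append("office");
                alt2.Append("lights", 0, 1);
                alt2.Append("on");

                // The recognizer accepts either alternative.
                var either = new Choices(alt1, alt2);
                return new Grammar(new GrammarBuilder(either)) { Name = "OfficeLightsOn" };
            }
        }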



          #19
          Small error: The semantic value in rule 'var0' was already set and cannot be changed.

          Originally posted by spud View Post
          In version 3.0.0.24 ... Please test and let me know
          WOW, spud! Less than 24 hours!!

          I have only one instance and it is a remote instance. The grammar seemed to build properly using the new syntax, but when it heard the phrase, I got this error:

          Code:
          DEBUG Phrase recognized with confidence=0.9988635
          DEBUG Computer turn the office lights on
          ERROR The semantic value in rule 'var0' was already set and cannot be changed.
          DEBUG    at Microsoft.Speech.Recognition.RecognizedPhrase.InsertSemanticValueToDictionary(SemanticValue semanticValue, String propertyName, SemanticValue thisSemanticValue, GrammarOptions semanticTag, Collection`1& dupItems)
             at Microsoft.Speech.Recognition.RecognizedPhrase.RecursiveBuildSemanticProperties(IList`1 words, List`1 properties, RuleNode ruleTree, GrammarOptions semanticTag, Collection`1& dupItems)
             at Microsoft.Speech.Recognition.RecognizedPhrase.RecursiveBuildSemanticProperties(IList`1 words, List`1 properties, RuleNode ruleTree, GrammarOptions semanticTag, Collection`1& dupItems)
             at Microsoft.Speech.Recognition.RecognizedPhrase.RecursiveBuildSemanticProperties(IList`1 words, List`1 properties, RuleNode ruleTree, GrammarOptions semanticTag, Collection`1& dupItems)
             at Microsoft.Speech.Recognition.RecognizedPhrase.CalcSemantics(Grammar grammar)
             at Microsoft.Speech.Recognition.RecognizedPhrase.get_Semantics()
             at HSPI_KINECT.VoiceRecognition.SpeechRecognizedHandler(Object sender, SpeechRecognizedEventArgs e)
          I did update the remote plugin to the new build, as well as updating it on the HS server.

          Code:
          Plugin: Kinect Instance: WineRack starting...
          Connecting to server at X.X.X.X...
          Connection attempt #1
          Connected (HomeSeer API 3). Waiting to be initialized...
          DEBUG Logger Initialized
          INFO Kinect version 3.0.0.24
          DEBUG USB\VID_0409&PID_005A\5&C2588D3&0&1 status=Connected
          DEBUG Voice Recognition Initialized
          DEBUG Drawing Zones File
          DEBUG Visual Recognition Initialized
          DEBUG Web Pages Registered
          INFO Kinect initialized
          DEBUG End Init
          DEBUG Building Grammars
          .
          .
          .
          DEBUG   Rule (Devices) <turn on [the] office [lights] | turn [the] office [lights] on>
          DEBUG   Rule (Devices) <turn off [the] office [lights] | turn [the] office [lights] off>
          Hopefully this will immediately ring a bell and won't be too much of a headache to track down!

          Thanks again!!

          hjk
          ---



            #20
            Originally posted by S-F View Post
            We're all home automation brothers. All of us with the exception of the home automation sisters that is.
            @S-F -- you are too funny - "with the exception of the home automation sisters"

            I may take you up on that beer offer! Hopefully you're not too buried under snow right now....



              #21
              We're buried in snow! Nothing compared to Boston though. They have gotten about twice as much as us. We live in a mountain valley that has really peaceful weather.

              BTW, my daughter is going to be getting on a plane to Austin in a few hours.
              Originally posted by rprade
              There is no rhyme or reason to the anarchy a defective Z-Wave device can cause



                #22
                Ha, funny! I will be just south of Austin next weekend for a buddy's wedding. It's 75 degrees, sunny, and spring-like here! Of course, we have not had a single freeze the whole season (although we've gotten close). So, that sucks. I like a good few weeks of freeze, maybe even some snow.



                  #23
                  Originally posted by hjk View Post
                  WOW, spud! Less than 24 hours!! ... Hopefully this will immediately ring a bell and won't be too much of a headache to track down!
                  Thanks for testing.
                  Please try 3.0.0.25; this problem should be fixed.



                    #24
                    It works! And a suggestion

                    Originally posted by spud View Post
                     Thanks for testing.
                     Please try 3.0.0.25; this problem should be fixed.
                    You, sir, are a GENIUS and a half. That did it! I can use the alternate phrases. This seems to be working for event triggers too!

                    If I may propose one additional feature, though adding it may take a bit more maneuvering.

                    For event triggers, the examples show how multiple options can be spoken and then read from global variables in a script:
                    Do [my] event in <one | two | three> minutes

                    When we work with devices, there is a "(value)" tag which applies to the setting:
                    Set the thermostat to (value) degrees

                    What would it take to combine these techniques in an event trigger? For example, I'm envisioning a second, more complex trigger type:
                    Kinect: A Variable Phrase was Recognized
                    Phrase = Do [my] event in (value)
                    Units = Minutes <-- selected from a dropdown of time units that HS provides - Days, Minutes, Seconds, etc

                    Or maybe it's possible to parse that from the phrase, like the desktop VR does with the speaker client... you get what I mean? If THAT is the case, I could potentially speak: Do my event in three minutes and thirty seconds.

                    Just something to make interaction a little more natural, I suppose.

                    Spud, this plugin is a work of genius. Thank you, and thank you for the support this weekend! I am a trial user right now but will convert straightaway!
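
                    On the "three minutes and thirty seconds" idea, here is a rough sketch of the kind of parsing such a trigger would need. It is only an illustration (the class and its small number-word table are made up), not an existing plugin feature.

                    Code:
                    // Sketch only - not a plugin feature. Turns a recognized phrase like
                    // "do my event in three minutes and thirty seconds" into a TimeSpan.
                    using System;
                    using System.Collections.Generic;

                    static class SpokenTimeParser
                    {
                        // A few number words, enough for the example; a real version would need more.
                        static readonly Dictionary<string, int> Numbers = new Dictionary<string, int>
                        {
                            ["one"] = 1, ["two"] = 2, ["three"] = 3, ["four"] = 4, ["five"] = 5,
                            ["ten"] = 10, ["fifteen"] = 15, ["twenty"] = 20, ["thirty"] = 30
                        };

                        public static TimeSpan Parse(string phrase)
                        {
                            var result = TimeSpan.Zero;
                            int pending = 0; // most recent number word heard

                            foreach (var word in phrase.ToLowerInvariant().Split(' '))
                            {
                                int n;
                                if (Numbers.TryGetValue(word, out n))
                                    pending = n;
                                else if (word.StartsWith("hour"))   { result += TimeSpan.FromHours(pending);   pending = 0; }
                                else if (word.StartsWith("minute")) { result += TimeSpan.FromMinutes(pending); pending = 0; }
                                else if (word.StartsWith("second")) { result += TimeSpan.FromSeconds(pending); pending = 0; }
                            }
                            return result;
                        }
                    }

                    // Example: Parse("do my event in three minutes and thirty seconds") returns 00:03:30.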



                      #25
                      With all due respect hjk, I think that what you're proposing will make a complicated situation even more complicated. There are a few things from MCV Vera which I absolutely miss! One of them is how the speech recognition clients operate. They take an inventory of your devices and allow you to operate them by name with specific commands based on their properties. IMO this is what is needed. So if you have a light called "bathroom light" you can say "turn the bathroom light off" and, voila, it turns off. No special programming needed. ALL of the Vera SR solutions have HS3 beat into the ground in this respect. Not to say that I don't love the Kinect plugin! Different situations though.
                      Originally posted by rprade
                      There is no rhyme or reason to the anarchy a defective Z-Wave device can cause
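
                      To make the inventory idea concrete, here is a small sketch of generating command strings, in the syntax used earlier in this thread, from a list of device names. It is purely illustrative - not how the Kinect plugin or the Vera clients actually do it.

                      Code:
                      // Illustration only. Given a device inventory, emit on/off voice commands
                      // in the <alternative | alternative> syntax with [optional] words.
                      using System.Collections.Generic;

                      static class CommandGenerator
                      {
                          public static IEnumerable<string> BuildCommands(IEnumerable<string> deviceNames)
                          {
                              foreach (var name in deviceNames)
                              {
                                  var n = name.ToLowerInvariant();
                                  yield return $"<turn on [the] {n} | turn [the] {n} on>";
                                  yield return $"<turn off [the] {n} | turn [the] {n} off>";
                              }
                          }
                      }

                      // Example: BuildCommands(new[] { "bathroom light" }) yields
                      //   <turn on [the] bathroom light | turn [the] bathroom light on>
                      //   <turn off [the] bathroom light | turn [the] bathroom light off>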



                        #26
                        I took matters into my own hands for this (to a point) - I have a Generic Script that I call Kinect Collection. So for each of my 4 Kinects (5, but one not being used yet) I have a long trigger for events (it's a pain, but it works). So you can ask for certain things like the weather, motion in rooms, etc. I have to add lighting to it, but that makes for a long recognition string...

                        So the script simply parses based on the Kinect variables it finds and then matches it to the speaker client in that room, so it replies only to that room. It's a messy method, but it works.

                        I'm hoping Spud can add something that will simply record what you say to a .wav file, a feature I asked him for at some point. Then we can take the raw recording, run speech-to-text on it, and parse it for certain text (i.e. "kitchen light on") even though you may have said "Please turn the kitchen light on" - looking at a subset of matching words and making it do something based on those. My end goal was something around Wiki searching using this ("who is Walt Disney")... But we need an open trigger for this stuff. There are a lot of speech-to-text libraries. Even easier would be using the Bing Voice applet for 8.x - it does the speech-to-text for us in a few lines.

                        Went off target - I know... Sorry...
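
                        For what it's worth, the routing part of that can be sketched roughly like this. The instance names, host strings, and phrase matching are made up for illustration, and the actual call to the HS speaker client is left out.

                        Code:
                        // Rough sketch of "reply only to the room that asked" - not the actual
                        // Kinect Collection script. Names and hosts below are invented.
                        using System;
                        using System.Collections.Generic;

                        static class RoomRouter
                        {
                            // Map each Kinect instance to the speaker client host for its room.
                            static readonly Dictionary<string, string> SpeakerByInstance =
                                new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
                                {
                                    ["WineRack"] = "hsserver:WineRack",
                                    ["Kitchen"]  = "hsserver:Kitchen"
                                };

                            // Which speaker client should receive the reply.
                            public static string SpeakerFor(string kinectInstance)
                            {
                                string host;
                                return SpeakerByInstance.TryGetValue(kinectInstance, out host) ? host : null;
                            }

                            // Very rough matching of what was asked for.
                            public static string ReplyFor(string phrase)
                            {
                                string p = phrase.ToLowerInvariant();
                                if (p.Contains("weather")) return "Here is the weather...";
                                if (p.Contains("motion"))  return "Checking motion in the rooms...";
                                return "Sorry, I didn't catch that.";
                            }
                        }

                        // The script would then send ReplyFor(phrase) to the host returned by
                        // SpeakerFor(instance) via the HS speaker client.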



                          #27
                          @surovich - I'm hoping that the speech libraries will be "fixed" in Windows 10 with the Cortana integration. That should mean that the HS native speaker client, which relies on the OS's speech libraries, will be much improved.

                          Maybe at that time, Kinect would also be able to take advantage. As an array mic, its input is much better than an ordinary microphone.

                          @S-F - Not trying to be too complicated... but I would like to restore some of the functionality you get natively with the HS speech client. I don't have to program recognition strings to do scheduling of events, for example.

                          On the OTHER hand, though - the HS3 help file publishes some long example recognition strings that can probably be adapted for Kinect.

                          As an amateur developer, I think it's usually best to bring as many options as make sense to the presentation layer. I.e., if I want to make a change, it's better to be able to do it through the UI than have to edit a script. That's because if I don't look at said script for a number of weeks/months, I may forget what exactly I was doing there, and will either get lost, confused, butcher it, or waste time re-learning what I already did.

                          This may be because I'm slow, which I freely admit. But remember the adage: a lazy admin is a good admin!

                          Regarding Vera/other solutions - I've never used anything but HS (HS2 starting in what... 2010 for me?), so if there's greener grass, I haven't experienced it. Probably better that way!



                            #28
                            Originally posted by surovich View Post
                             I took matters into my own hands for this (to a point) - I have a Generic Script that I call Kinect Collection. ... But we need an open trigger for this stuff.
                             It would be awesome if it could record the raw audio, and you could then call the Google speech APIs to have it parse the audio to text (supposedly that API is amazing at doing that, and is used for a lot of voice recognition tasks nowadays) and then, as you said, check the text for particular trigger words.
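
                             A rough sketch of that idea, assuming Google's v1 speech:recognize REST endpoint (the endpoint, request fields, and audio format here are my assumptions, and none of this is an existing plugin feature):

                             Code:
                             // Sketch only: send a recorded wav to a speech-to-text REST API and
                             // check the returned transcript for trigger words.
                             using System;
                             using System.IO;
                             using System.Net.Http;
                             using System.Text;
                             using System.Threading.Tasks;

                             static class SpeechToTextSketch
                             {
                                 public static async Task<string> TranscribeAsync(string wavPath, string apiKey)
                                 {
                                     // Base64-encode the raw recording produced by the wav capture.
                                     string audio = Convert.ToBase64String(File.ReadAllBytes(wavPath));

                                     // Minimal recognize request; 16 kHz LINEAR16 mono is assumed.
                                     string body =
                                         "{\"config\":{\"encoding\":\"LINEAR16\",\"sampleRateHertz\":16000," +
                                         "\"languageCode\":\"en-US\"},\"audio\":{\"content\":\"" + audio + "\"}}";

                                     using (var http = new HttpClient())
                                     {
                                         var response = await http.PostAsync(
                                             "https://speech.googleapis.com/v1/speech:recognize?key=" + apiKey,
                                             new StringContent(body, Encoding.UTF8, "application/json"));
                                         // Raw JSON response containing the "transcript" alternatives.
                                         return await response.Content.ReadAsStringAsync();
                                     }
                                 }

                                 // Crude trigger-word check on the transcript text.
                                 public static bool MentionsAll(string transcript, params string[] words)
                                 {
                                     foreach (var w in words)
                                         if (transcript.IndexOf(w, StringComparison.OrdinalIgnoreCase) < 0)
                                             return false;
                                     return true; // e.g. MentionsAll(text, "kitchen", "light", "on")
                                 }
                             }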



                              #29
                              Originally posted by surovich View Post
                               I'm hoping Spud can add something that will simply record what you say to a .wav file ... But we need an open trigger for this stuff.
                               In version 3.0.0.28, you can now record any recognized phrase to a wav file, and you can use wildcards in the voice recognition syntax.

                               See this post for more information.



                                #30
                                Spud -

                                 I'm having the issue in this thread now that I've updated to .29 -

                                 Apr-11 10:44:18 AM Kinect DEBUG    at Microsoft.Speech.Recognition.RecognizedPhrase.InsertSemanticValueToDictionary(SemanticValue semanticValue, String propertyName, SemanticValue thisSemanticValue, GrammarOptions semanticTag, Collection`1& dupItems)
                                    at Microsoft.Speech.Recognition.RecognizedPhrase.RecursiveBuildSemanticProperties(IList`1 words, List`1 properties, RuleNode ruleTree, GrammarOptions semanticTag, Collection`1& dupItems)
                                    at Microsoft.Speech.Recognition.RecognizedPhrase.RecursiveBuildSemanticProperties(IList`1 words, List`1 properties, RuleNode ruleTree, GrammarOptions semanticTag, Collection`1& dupItems)
                                    at Microsoft.Speech.Recognition.RecognizedPhrase.RecursiveBuildSemanticProperties(IList`1 words, List`1 properties, RuleNode ruleTree, GrammarOptions semanticTag, Collection`1& dupItems)
                                    at Microsoft.Speech.Recognition.RecognizedPhrase.CalcSemantics(Grammar grammar)
                                    at Microsoft.Speech.Recognition.RecognizedPhrase.get_Semantics()
                                    at HSPI_KINECT.VoiceRecognition.SpeechRecognizedHandler(Object sender, SpeechRecognizedEventArgs e)

                                Apr-11 10:44:18 AM Kinect ERROR The semantic value in rule 'control' was already set and cannot be changed.

                                Apr-11 10:44:18 AM Kinect DEBUG Computer

                                Apr-11 10:44:18 AM Kinect DEBUG Phrase recognized with confidence=0.9983721

                                Apr-11 10:44:18 AM Kinect TRACE Phrase hypothesized = Computer, Confidence=0.9973238

                                Apr-11 10:44:17 AM Kinect TRACE Phrase hypothesized = Computer, Confidence=0.1191543

                                Apr-11 10:44:17 AM Kinect TRACE Speech Detected:


                                 It works for a few questions, then when I ask it to turn off the lights it stops working and throws the above into the log?

