Vocal user interfaces

There are many cases where end users need to send vocal instructions to a device because they cannot use their hands: when driving, when holding something, or when their working environment prevents them from doing so (e.g. a surgery room, to avoid contamination).

At scriptr.io, we believe that vocal interfaces will become increasingly important in the near future, as a complement to today's tactile ones, so we decided to provide you with everything you need to "voice-control enable" your IoT apps built with scriptr.io!

scriptr.io’s connector to wit.ai

wit.ai knows how to interpret vocal commands and map them to structured objects. Using wit.ai is pretty straightforward: check their tutorials and documentation to learn how to create "intents" and "entities" from vocal commands.
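To illustrate what "structured objects" means here, below is a sketch of how you might pull the top intent and its entities out of a wit.ai response. The payload shape and the `parseWitResponse` helper are illustrative, not part of the connector; check wit.ai's documentation for the exact response format of your API version.

```javascript
// Sketch: extract the highest-confidence intent and the first value of
// each entity from a wit.ai-style response object.
// The response shape below is an assumption for illustration only.
function parseWitResponse(response) {
  // Pick the intent with the highest confidence, if any were detected
  var topIntent = (response.intents || [])
    .slice()
    .sort(function (a, b) { return b.confidence - a.confidence; })[0] || null;

  // Flatten entities to a simple { entityName: firstValue } map
  var entities = {};
  Object.keys(response.entities || {}).forEach(function (key) {
    var list = response.entities[key];
    if (list && list.length > 0) {
      entities[key] = list[0].value;
    }
  });

  return { intent: topIntent ? topIntent.name : null, entities: entities };
}

// Example payload, similar in spirit to what wit.ai could return
// for the utterance "turn on the kitchen light"
var sample = {
  text: "turn on the kitchen light",
  intents: [{ name: "switch_on", confidence: 0.98 }],
  entities: {
    device: [{ value: "light", confidence: 0.95 }],
    room: [{ value: "kitchen", confidence: 0.93 }]
  }
};

var parsed = parseWitResponse(sample);
// parsed.intent is "switch_on"; parsed.entities is { device: "light", room: "kitchen" }
```

Once the command is reduced to an intent name plus a flat set of entity values, routing it to the right piece of application logic becomes a simple lookup.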

To map voice instructions to your scriptr.io APIs, we have implemented a wit.ai connector. Import the connector's scripts from our GitHub repository into your own repository and get started!
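Conceptually, the last step is dispatching a recognized intent to one of your own scripts. The following is a hypothetical sketch of such a dispatch table; the handler names (`switchOn`, `switchOff`) and the table itself are illustrative, and the connector's actual wiring is described in its Readme.md.

```javascript
// Hypothetical handlers standing in for your own scriptr.io API logic
function switchOn(entities) {
  return "Turning on the " + entities.device + " in the " + entities.room;
}
function switchOff(entities) {
  return "Turning off the " + entities.device + " in the " + entities.room;
}

// Map wit.ai intent names to handler functions
var handlers = {
  switch_on: switchOn,
  switch_off: switchOff
};

// Route a parsed command to the matching handler,
// falling back gracefully on unknown intents
function dispatch(intent, entities) {
  var handler = handlers[intent];
  if (!handler) {
    return "Sorry, I did not understand that command";
  }
  return handler(entities);
}

var reply = dispatch("switch_on", { device: "light", room: "kitchen" });
// reply is "Turning on the light in the kitchen"
```

The benefit of this pattern is that adding a new voice command only means training a new intent in wit.ai and registering one more handler in the table.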

Step by step instructions on how to proceed are provided in the Readme.md file.