Voice assistants like Alexa, Google Assistant, and Siri have come a long way over the past few years. But despite all their improvements, one thing holds them back: they don’t understand you. They rely too heavily on specific voice commands.
Speech recognition is just a trick

Voice assistants do not understand you. At least, not really. When you speak to a Google Home or Amazon Echo, it essentially converts your words into a text string and compares that string against its expected commands. If it finds an exact match, it follows the associated set of instructions. If it doesn’t, it looks for the closest alternative based on the information it has, and if that fails, you get an error like “Sorry, I don’t know that one.” It’s little more than sleight-of-hand magic to make you think it understands.
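The matching process described above can be sketched roughly like this. This is a simplified illustration only; the command table, fallback logic, and error message are invented for the example and are not Amazon’s or Google’s actual code:

```python
# Toy sketch of how a voice assistant might dispatch a transcribed utterance.
# The command table and the fallback strategy are invented for illustration.

COMMANDS = {
    "turn on the lights": "lights_on",
    "what time is it": "tell_time",
}

def handle(transcript: str) -> str:
    text = transcript.lower().strip("?!. ")
    # 1. Exact match against the expected commands.
    if text in COMMANDS:
        return COMMANDS[text]
    # 2. Crude fallback: look for a known command inside the utterance.
    for phrase, action in COMMANDS.items():
        if phrase in text:
            return action
    # 3. Give up with a canned error.
    return "error: Sorry, I don't know that one."

print(handle("Turn on the lights!"))        # exact match -> lights_on
print(handle("Hey, what time is it now?"))  # fallback match -> tell_time
print(handle("Are you part of the NSA?"))   # no match -> error
```

Note that nothing here resembles understanding: an utterance either hits a registered string or it doesn’t.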
It cannot use context clues to make a reasonable guess, or apply an understanding of similar topics to inform its decisions. It’s easy to confuse voice assistants, too. Although you can ask Alexa “Do you work for the NSA?” and get an answer, if you ask “Are you secretly part of the NSA?”, you’ll get “I don’t know that one” (at least at the time of writing).
People who really understand speech don’t work like that. Suppose you ask a person, “What is that klarvain in the sky? The one that’s curved and full of striped colors like red, orange, yellow, and blue.” Even though klarvain is a made-up word, the person you asked could probably figure out from context that you are describing a rainbow.
You could argue that a person converts speech into ideas, then applies knowledge and understanding to form an answer. If you ask a person whether they secretly work for the NSA, they will give you a yes or no answer, even if that answer is a lie. A person won’t say “I don’t know” to such a question. The fact that people can lie is something that comes with real understanding.
Voice assistants can’t go beyond their programming
Voice assistants are ultimately limited to what they are programmed to expect, and going beyond that breaks the process. This shows when third-party devices enter the picture. The commands to interact with them are usually cumbersome, boiling down to “tell the device maker to perform an action, with an optional argument.” A real example: “Tell Whirlpool to pause drying.” An even more convoluted example is the Geneva Alexa skill, which controls some GE ovens. A user of the skill must remember to say “tell Geneva,” not “tell GE,” before the rest of the command. And while you can ask it to preheat the oven to 350 degrees, it can’t honor a request to raise the temperature by another 50 degrees. A person could follow any of these requests, though.
Amazon and Google have worked hard to overcome these hurdles, and it shows. Where you previously had to follow a sequence like the above to operate a smart lock, you can now say “lock the front door” instead. Alexa used to be confused by “tell me a dog joke,” but ask for one today and it works. They have added variations to the commands you can use, but ultimately you still need to know which command to say. You must use the correct syntax, in the correct order.
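That kind of improvement amounts to registering more phrasings for the same intent. Here is a hedged sketch of the idea; the intent names and phrase lists are invented for illustration, and real assistants use far more sophisticated language models than a lookup table:

```python
# Several accepted phrasings map to one intent, but the set is still finite:
# any phrasing outside the list fails, much like an unknown shell command.
INTENTS = {
    "lock_front_door": ["lock the front door", "lock my front door"],
    "dog_joke": ["tell me a dog joke", "tell a joke about dogs"],
}

def resolve(utterance: str):
    """Return the matching intent name, or None if the phrasing is unknown."""
    text = utterance.lower().strip("?!. ")
    for intent, phrasings in INTENTS.items():
        if text in phrasings:
            return intent
    return None

print(resolve("Lock the front door."))      # recognized -> lock_front_door
print(resolve("Secure the front entrance")) # not registered -> None
```

More phrasings make the assistant feel more flexible, but the underlying model is unchanged: a fixed vocabulary of commands, not comprehension.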
And if you think this sounds a lot like Command Prompt, you’re not wrong.
Voice assistants: a fancy command line