• 11 Posts
  • 282 Comments
Joined 3 years ago
Cake day: June 2nd, 2023



  • Similar to the other user’s response, I use the calendar integration and add the tasks to the calendar (say, putting the recycling out to be collected). Then I have an automation that reads out a reminder at the time it’s scheduled for in the calendar.

    So the evening before recycling pickup every fortnight, it pipes up and says “Reminder: Recycling” or whatever.

    Works pretty well for these regular recurring things. I haven’t tried using it for one-off reminders, and you can’t say “ok nabu, remind me to wish Steve a happy birthday on the 27th of February” or anything like that. Still, I’m pretty happy.

    I seem to remember it took a bit of fiddling to get the notification working; I’m happy to look up and post what I have in my automation if needed.
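
    My actual setup is just a Home Assistant automation built in the UI, but as a rough sketch of the same idea, here’s what it could look like driven from the REST API instead. The URL, token, calendar entity, media player, and the tts.cloud_say service are all assumptions; swap in whatever your install actually uses.

    ```python
    # Rough sketch: read the next day's events from a Home Assistant calendar
    # and announce each one over a media player via TTS.
    # HA_URL, HA_TOKEN and the entity ids below are placeholders (assumptions).
    from datetime import datetime, timedelta
    import requests

    HA_URL = "http://homeassistant.local:8123"   # assumption: your HA address
    HA_TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"    # assumption: token from your HA profile
    CALENDAR = "calendar.household"              # assumption: the calendar with the chores
    SPEAKER = "media_player.kitchen"             # assumption: where the reminder should play

    headers = {"Authorization": f"Bearer {HA_TOKEN}"}

    # /api/calendars/<entity_id> returns the events in a time window.
    start = datetime.now().astimezone()
    end = start + timedelta(days=1)
    resp = requests.get(
        f"{HA_URL}/api/calendars/{CALENDAR}",
        headers=headers,
        params={"start": start.isoformat(), "end": end.isoformat()},
        timeout=10,
    )
    resp.raise_for_status()

    for event in resp.json():
        # Announce each upcoming event, e.g. "Reminder: Recycling".
        requests.post(
            f"{HA_URL}/api/services/tts/cloud_say",
            headers=headers,
            json={"entity_id": SPEAKER, "message": f"Reminder: {event['summary']}"},
            timeout=10,
        )
    ```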


  • In Home Assistant’s settings, if you go to Voice Assistants, then click the … on your assistant and click Debug, you can see what it thought you said (and what it did).

    An up-to-date Home Assistant will repeat back what it set when you set a timer. E.g. if I say “Set a timer for 2 minutes” it will say “Timer set for 2 minutes”. It says “Done” when running some Home Assistant task/automation, so it’s probably not understanding you correctly (which is what the debug option is good for). I use the cloud voice recognition as I couldn’t get the local version to understand my accent when I tried it (a year ago). It’s through Azure but is proxied by Home Assistant Cloud, so they don’t know it’s you.

    “The wake word responds to me, but not my girlfriend’s voice.”

    My wife swears it’s sexist; she has a bit of trouble too. In the integration options you can raise the wake word sensitivity, but it does increase false activations. I have it on the most sensitive setting and she can activate it first time, most of the time.
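
    If it’s easier than clicking through the UI, you can also poke the same pipeline programmatically. A minimal sketch, assuming the /api/conversation/process endpoint and a long-lived access token (the URL and token below are placeholders):

    ```python
    # Minimal sketch: send a phrase to Home Assistant's conversation agent and
    # print what it would say back -- handy for checking whether something like
    # "Set a timer for 2 minutes" is actually being understood.
    import requests

    HA_URL = "http://homeassistant.local:8123"   # placeholder
    HA_TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"    # placeholder

    resp = requests.post(
        f"{HA_URL}/api/conversation/process",
        headers={"Authorization": f"Bearer {HA_TOKEN}"},
        json={"text": "Set a timer for 2 minutes", "language": "en"},
        timeout=10,
    )
    resp.raise_for_status()

    # The spoken reply; the exact response structure may vary by HA version.
    print(resp.json()["response"]["speech"]["plain"]["speech"])
    ```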


  • I agree that it’s not production ready, and they know that too, hence the name. But in relation to your points: I plugged in an external speaker, as the built-in one really isn’t that great at all.

    For the wake word, at some point an update added a sensitivity setting so you can make it more sensitive. You could also try donating your voice to the training: https://ohf-voice.github.io/wake-word-collective/

    But all in all you’re spot on with the challenges. I’d add a couple more.

    With OpenAI I find it can outperform other voice assistants in certain areas. Without it, you run into weird issues: my wife always says “set timer 2 minutes”, which isn’t recognised locally, so it runs off to OpenAI to work out what that means. If you say “set a timer for 2 minutes” it understands immediately.

    What I wish for is the ability to rewrite requests. Local voice recognition can’t understand my accent, so I use the proxied Azure speech-to-text via Home Assistant Cloud, and it regularly thinks I’m saying “Cortana” (I’m NEVER saying Cortana!).

    Oh, and I wish it could do streaming voice recognition instead of waiting for you to finish talking and then waiting for a pause before trying anything. My in-laws have a Google Home, and if you say something like “set a timer for 2 minutes” it responds immediately, because it was converting to text as it went and knew that nothing more was coming after a command like that. HAVP has perhaps a 1 second delay between finishing speaking and replying, assuming it doesn’t need another 5 seconds to go to OpenAI. And you have to be quiet in that 1 second, otherwise it thinks you’re still talking (a problem in a busy room).
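
    To be clear, Home Assistant doesn’t have a rewrite hook like that today; that’s the wish. Purely as an illustration, a pre-processing step sitting in front of the conversation agent could look something like this (the rules are made up from the examples above):

    ```python
    # Hypothetical sketch of the "rewrite requests" wish: patch up known
    # speech-to-text misfires before the text reaches the conversation agent.
    # Home Assistant has no such hook today; the rules below are examples only.
    import re

    # (observed misrecognition, what should actually be processed)
    REWRITES = [
        (r"^set timer (.+)$", r"set a timer for \1"),
        # You'd add a rule here mapping whatever keeps coming out as "Cortana"
        # back to the phrase you actually said.
    ]

    def rewrite(text: str) -> str:
        for pattern, replacement in REWRITES:
            text = re.sub(pattern, replacement, text, flags=re.IGNORECASE)
        return text

    print(rewrite("set timer 2 minutes"))   # -> "set a timer for 2 minutes"
    print(rewrite("turn off the lights"))   # unchanged
    ```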


  • In my experience it’s not quite the same. Using WebDAV through the distro account seems to be fully online: every folder or file access contacts the server.

    The virtual file experience is more of a hybrid. All the folders actually exist on disk, as well as placeholder “shells” for every file. If you try to open a virtual file, Windows will seamlessly download it for you in the background; at that point the file is actually on your disk. This way, regularly accessed files are on your hard drive and seldom-accessed ones are not, saving local drive space while providing an experience almost as if all the files were actually on your drive.
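
    If you’re curious, you can see this hybrid state from Python on Windows: placeholder files carry special attribute bits until they’re first opened and downloaded. A small sketch; the exact attribute flags vary by sync client, so this is an illustration rather than a definitive check.

    ```python
    # Windows-only sketch: report whether a file in a synced folder is still a
    # cloud placeholder or has been downloaded ("hydrated"). The attribute bits
    # checked here are an assumption -- different sync clients use different flags.
    import os
    import stat
    import sys

    # Not exposed in the stat module; assumed value from the Windows headers.
    FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS = 0x00400000

    def describe(path: str) -> str:
        attrs = os.stat(path).st_file_attributes  # Windows-specific field
        if attrs & (stat.FILE_ATTRIBUTE_OFFLINE | FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS):
            return "placeholder (downloaded on first open)"
        return "hydrated (contents are on the local disk)"

    if __name__ == "__main__":
        for path in sys.argv[1:]:
            print(path, "->", describe(path))
    ```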


  • Linux’s problem is that it’s not an OS in itself (it’s a kernel), so suggesting people use Linux doesn’t give them much advice.

    The next problem is that Linux-based OSs are generally open source, which means they can be forked any number of times at any point in time.

    There’s this super awesome and super confusing thing in open software where you don’t have to use the thing you are given. Want to use Facebook? You must use their app. Want to use Reddit? You pretty much must use their app, etc.

    But if you want to use Lemmy or Piefed, there are a dozen good choices, and none of them are the wrong answer. Want to use Jellyfin? Well, I connect with Kodi on my TV, Swiftfin on my mother’s, the Android Jellyfin app on my in-laws’ TV, and Findroid (movies/TV) or Finamp (music) on my phone. If you don’t like an app, you can still use the service: just try another app or make your own. This is awesome, but super confusing to non-technical people.

    Linux distros are the same. There are dozens of popular ones, many of which are based on others. The variety of choices is awesome, but non-technical people have no idea where to start.



  • Do you have a plan? I have a Home Assistant Voice Preview Edition and it’s great but I don’t think it can do unit conversions without connecting it to an LLM. Timers work locally.

    I guess if it’s a simple equation you could add an automation to pick up on the phrase and reply with the conversion, but each unit would have to be done manually, and it wouldn’t work for things like currency conversion that need live data.

    Also arbitrary things would be challenging, like converting tablespoons of butter into grams or grams of rice into cups.
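
    Just to illustrate why that gets tedious: every ingredient needs its own entry, and anything like currency needs live data instead of a table. The densities below are approximate and only here for illustration.

    ```python
    # Sketch of why arbitrary kitchen conversions need manual work: each
    # ingredient needs its own weight entry. Values are approximate and
    # included purely for illustration.
    GRAMS_PER_TABLESPOON = {
        "butter": 14.2,
        "rice": 12.5,
    }
    GRAMS_PER_CUP = {
        "rice": 200.0,
    }

    def tablespoons_to_grams(ingredient: str, tablespoons: float) -> float:
        # KeyError here is the point: unknown ingredients simply can't be converted.
        return tablespoons * GRAMS_PER_TABLESPOON[ingredient]

    def grams_to_cups(ingredient: str, grams: float) -> float:
        return grams / GRAMS_PER_CUP[ingredient]

    print(tablespoons_to_grams("butter", 2))      # ~28 g
    print(round(grams_to_cups("rice", 180), 2))   # ~0.9 cups
    ```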