

Not OP, but this is a matter of principle. We should not have to change our behavior to make computers work for us. They are unthinking, unfeeling tools, despite anyone’s claims around “AI”. They change for us, not the other way around.
That runs into a problem of tech literacy, though. If the companies running the tech we use have no incentive to make it work better, you have to know how to do it yourself, and that’s generally not trivial. If I were OP, I’d look into self-hosted home automation. There’s some overhead and tinkering required, but the customization and privacy gains are likely worth it.
Side note: anyone have any knowledge of repurposing a Google Home Mini? I’m more of a software guy and have no clue if or how I can make the hardware mine. I’ve been slowly removing my reliance on Google products lately. I use it mostly as a speaker, though, so worst case it ends up as e-waste.
Like many things, a tool is only as smart as its wielder. There’s still a ton of critical thinking that needs to happen even when you do something as simple as baking bread. Using an AI tool to suggest ingredients can be useful from a creative perspective, but its output should not be assumed accurate at face value. Raisins and dill? Maybe ¯\_(ツ)_/¯, haven’t tried that one myself.
I like AI for its ability to add detail to things or act as a muse, but it cannot be trusted for anything important. This is why I’m ‘anti-AI’. Too many people (especially in leadership roles) see this tool as a solution for replacing expensive humans with something that ‘does the thinking’; but as we’ve seen elsewhere in this thread, AI CAN’T THINK. It only suggests items that are statistically likely to come next, or nearby, based on its input.
In the Security Operations space, we have a phrase: “trust, but verify”. For anything AI, I would use “doubt, then verify” instead. That all said, AI might very well give you a pointer to the right place to ask how much Motrin an infant should get. Hopefully, that’s your local pediatrician.