I mostly understand the dilemma, but I want to see if someone has had better success with their AI assistant. I use the Ollama integration and set up a conversation model. The default Home Assistant agent knows to use the home forecast entity whenever I ask about the weather, but no matter whether I also set up an AI task model, toggle “control Home Assistant” on or off, or toggle “perform local commands” on or off, the Ollama models never reference the home forecast the way the default agent does. I thought keeping default commands on would preserve that ability while all other queries went to the Ollama LLM. I just want a smarter AI. Any suggestions?
Nice! These are great suggestions, and I apologize for any incorrect terminology in my post. To clarify, the vanilla/default Home Assistant agent gets the forecast right every time. I just want that agent to take the wheel when simple commands come through, and the LLM to take over when I ask random questions unrelated to home automation.
Not in Home Assistant directly, but I made a flow in n8n where an LLM detects when I’m asking about weather, calls a weather API, parses the response, and answers.
You can do something similar in Home Assistant.
Add an integration for a weather service (there might even be one out of the box).
Create an automation triggered by saying a sentence to your voice assistant.
Set the automation’s action to a conversation response, and point that at whatever entity holds the part of the weather you want it to say (or use a template if you want it to report multiple values or do fancier things); a sketch follows.
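For example, a minimal sketch of that automation in YAML (the trigger sentences, the `weather.home` entity ID, and the response template are assumptions; adjust them to your setup):

```yaml
automation:
  - alias: "Voice weather report"
    trigger:
      # Sentence trigger: matches spoken/typed phrases; [like] is optional.
      - platform: conversation
        command:
          - "what's the weather [like]"
          - "how is the weather"
    action:
      # Answer directly from the weather entity's current state/attributes.
      - set_conversation_response: >-
          It's {{ states('weather.home') }} and
          {{ state_attr('weather.home', 'temperature') }} degrees outside.
```

Sentence triggers are matched by the built-in Assist engine before anything reaches an LLM, so (assuming “prefer handling commands locally” is enabled) this is one way to let the default agent take the wheel for specific phrases while everything else falls through to the LLM.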
What I do is use extended_openai_conversation from HACS to hook into my LLM’s OpenAI-compatible API endpoint. That one makes it available via the regular Voice Assistant stuff within Home Assistant.
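For reference, roughly what I point it at. The integration is configured through the UI, so these are just the fields rendered as YAML for readability, and the host, key, and model name are example values for a local Ollama server:

```yaml
# Example values only -- extended_openai_conversation is set up via the UI.
base_url: http://localhost:11434/v1  # Ollama's OpenAI-compatible endpoint
api_key: ollama                      # any placeholder string; Ollama ignores it
chat_model: llama3.1                 # whichever model you've pulled
```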
Not sure what’s happening here. The Ollama integration’s docs say it doesn’t have all functionality; for example, it doesn’t support sentence triggers. And the weather forecast is a bit of a weird one in Home Assistant: it’s not an entity (unless you configure one manually) but a service call that fetches the forecast on demand. Maybe your AI just doesn’t have the forecast available, only the current condition and maybe the current temperature. Everything else must be specifically requested with a deliberate “weather.get_forecasts” call. Maybe that service call and the specific processing are in the official Assistant, but not in the Ollama integration?
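To illustrate the service-call point, here is a hedged sketch of fetching the forecast in a script (the `weather.home` entity ID is an assumption, and the notification is just a stand-in for whatever you’d do with the data):

```yaml
script:
  check_tomorrows_forecast:
    alias: "Fetch tomorrow's forecast"
    sequence:
      # The forecast isn't stored on the weather entity; it has to be
      # requested explicitly and captured with response_variable.
      - service: weather.get_forecasts
        target:
          entity_id: weather.home
        data:
          type: daily
        response_variable: forecast
      - service: notify.persistent_notification
        data:
          message: >-
            Tomorrow: {{ forecast['weather.home'].forecast[1].condition }},
            high of {{ forecast['weather.home'].forecast[1].temperature }}°.
```

If the Ollama agent never makes that call, all it can see is the current state of the weather entity, which would explain the behavior the OP describes.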
Can’t piss on me, can’t tell me it’s raining. What are these things even useful for?