My Journey to a reliable and enjoyable locally hosted voice assistant (2025)
by Vaslo
If you're less concerned about privacy, I use Gemini 2.5 Flash for this and it's exceptionally good and fast as a HA assistant while being much cheaper than the electricity that would be needed to keep a 3090 awake.
The thing that kills this for me (and they even mentioned it) is wake word detection. I have both the HA voice preview and FPH Satellite1 devices, plus have experimented with a few other options like a Raspberry Pi with a conference mic.
Somehow nothing is even 50% as good as my Echo devices at picking up the wake word. The assistant itself is far better, but that doesn't matter if it takes 2-3 tries to get it to listen to you. If someone solves this problem with open hardware I'll be immediately buying several.
On the plus side, mine misdetected a wake word during a funny conversation and said "Sorry, I can't find any area called _____[60 second repeat of funny conversation]___" and it made my family laugh harder than we've laughed in a really long time. I even went into the tts cache and saved the wav b/c it was sooo funny.
Ha, I had something similar happen as well that had us rolling. I think the hilarity was a result of the conversation snippet being taken completely out of context by the recording. Wish I'd saved the wav, I didn't even think of that :-(
How about a button?
I'd prefer to physically press a button on an intercom box than having something churning away constantly processing sound.
If I have to go to a thing and push a button, I'd rather the button do the thing I wanted in the first place. Voice assistants are for when my hands are full or I don't want to get up. (I wrote more about my home automation philosophy in another comment[1]).
Also I have all my voice assistant devices mounted to the ceiling
What if you have two things? You'd then need two buttons.
The push button is a perfectly viable option, it just needs to be in a form factor that works. Could be as simple as a tiny low-energy Bluetooth board with a coin battery that will last several months.
Luckily I carry around a device with infinite reconfigurable buttons!
The pebble index seems like the optimal form for this.
Could be pressed even if your hands were busy.
Most of what I (and in my experience many people) want a voice assistant for, is setting+ending timers... which for me happens mostly in the kitchen, while I'm simultaneously holding a hot pan or hand-tossing a salad or paper-towelling off some raw chicken. In none of those cases would I want a ring anywhere near my hands, let alone a smart ring. (And nor, in half of those cases, is it convenient/hygienic to use my oven timer.)
That being said, we could solve for fully 50% of in-home voice-assistant use-cases just by developing an extremely domain-specific voice assistant that has an extremely small (ideally burned-into-a-DSP) voice model that only knows how to recognize commands to manage kitchen timers. If such a device existed, and was cheap enough that you could assume anyone who wanted this functionality would just buy one, then this would make truly hands-free activation of a "real" voice-assistant much less necessary, as there'd be far fewer user-stories that would really "need" that. The rest of those user-stories really mostly could work with some kind of ring / belt buckle / shirt comm badge / etc.
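The "burned-into-a-DSP timer grammar" idea above is small enough to sketch. Here's a toy, illustrative version of such a fixed grammar (the regex and function names are my own invention, not any real product's):

```python
import re

# Toy sketch of a "timer-only" recognizer: a grammar small enough to
# imagine burning into a DSP. Parses commands like
# "set a timer for 10 minutes" into (action, seconds).
TIMER_RE = re.compile(
    r"(set|start|cancel|stop).*?timer(?:.*?(\d+)\s*(second|minute|hour))?"
)

def parse_timer_command(text):
    m = TIMER_RE.search(text.lower())
    if not m:
        return None  # not a timer command at all
    action, amount, unit = m.groups()
    secs = None
    if amount:
        secs = int(amount) * {"second": 1, "minute": 60, "hour": 3600}[unit]
    return ("cancel" if action in ("cancel", "stop") else "set", secs)
```

A fixed vocabulary like this is exactly why such a recognizer could be tiny: it only ever needs to distinguish a handful of verbs, numbers, and units.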
If you want to relax some constraints, I made something similar for $10: https://www.stavros.io/posts/i-made-a-voice-note-taker/
Did you have any luck with the power issues on the new board?
The new board hasn't come yet, but a friend gave me a great idea, to power the mic from a GPIO, which powers it off completely when the ESP is off.
Hopefully the new boards will be here soon, but another issue is that I don't really have anything that can measure microamp consumption, so any testing takes days of waiting for the battery to run down :(
I do think these clones are the issue, though. They had a LED I couldn't turn off, so they'd literally shine forever. They don't seem engineered for low quiescent current, so fingers crossed with the new ones.
Makes a lot of sense :) thanks for the update.
I'll try to remember to creepy stalk you for updates as the device sounds great!
You can sign up to my mailing list to get emails if you want! It's at the end of each post.
Thank you for the blog/updates.
Since you're deep into this space, do you use wall mount dashboards? There seem to be two camps now: dashboards everywhere or none at all.
Like a light switch?
Or do you mean a button that activates chunked recording, passes it to a speech-to-text model, forwards to an LLM to infer intent, which triggers HA to issue a command, over a wireless network, to the computer with the light attached, to tell the light to turn on.
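Tongue in cheek, but that chain is a real architecture. A minimal sketch of it, where every function is a hypothetical stand-in (none of these are real Home Assistant or LLM APIs):

```python
# Hypothetical sketch of the button -> STT -> LLM -> HA command chain.
# Every function is a stand-in for a real component.

def transcribe(audio_chunks):
    # Stand-in for a speech-to-text model that would consume
    # chunked recordings.
    return " ".join(audio_chunks)

def infer_intent(transcript):
    # Stand-in for an LLM call that maps free text to an intent.
    if "light" in transcript and "on" in transcript.split():
        return {"service": "light.turn_on", "entity": "light.kitchen"}
    return {"service": "unknown"}

def issue_command(intent):
    # Stand-in for the service call sent over the wireless network
    # to the computer with the light attached.
    return f"called {intent['service']} on {intent.get('entity', '?')}"

def button_press(audio_chunks):
    return issue_command(infer_intent(transcribe(audio_chunks)))
```

The joke lands because all of this replaces what a dumb wall switch does with one pole and two wires.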
Rules out a bunch of cases where your hands are busy handling ingredients in the kitchen, etc
Put it at foot level and kick it.
I'm in if I can embed it into my forearm
In the mid 2000s I had a setup where some children's walkie talkie "spy watches" could be used to issue commands to a completely DIY, relay based smart home system.
I'm looking forward to whenever my Pebble ships so I can recreate that experience with this: https://github.com/skylord123/pebble-home-assistant-ws
apple watch gets you close.
Time for a real life Star Trek comm badge
For real, if I had a fully self-hosted and private system, I'd love a real life Star Trek comm badge. Being able to say, "where is Jeff?" and have it tell me alone would be quite an awesome feature
We thought so too ;) https://www.linkedin.com/posts/y-combinator_button-computer-...
I have a feeling beamforming microphone arrays might help here, something like this could improve the audio being processed substantially - https://www.minidsp.com/products/usb-audio-interface/uma-8-m....
That's a good call. I have a PS3(?) mic/camera that I was using when I was running the original Mycroft project on a Pi. I wonder if that would help with the inbuilt HA mic not waking for most of my family, most of the time. I will have to look at my VA Preview device and its specs later because I'm not sure if you can connect an external mic to it out-of-the-box.
Alexa devices have these (or used to at least), but Google Homes never did. So it shouldn't be necessary.
Yeah a small (ideally personalized) wakeword model would probably outperform just about any audio wizardry.
What's been surprising in my experience regarding the wake word is that it recognizes me (adult male) saying the wake word ~95% of the time. However, it only registers the rest of my family (women and children) ~30% of the time.
I have no firsthand knowledge, but I'd strongly bet that the Home Assistant effort to donate training data mostly gets adult males, and nearly zero children.
This was 2021 (so pre-llm), but I used to work for a company that gathered data for training voice commands (Alexa, Toyota, Sonos, were some clients). Basically, we paid people to read digital assistant scripts at scale.
Your assumptions about training data do not match the demographics of data I collected. The majority of what our work revolved around was getting diversity into the training data. We specifically recruited kids, older folks, women, people with accented/dialected English and just about every variety of speech that we could get our hands on. The companies we worked with were insanely methodical about ensuring that different people were included.
You are reporting on a deliberately curated effort vs. what I understand is effectively voluntary data donation without incentives. It's not surprising to me that the latter dataset ends up biased due to the differences in sourcing.
I remember when those systems first started collecting data they were worried kids wouldn't be handled - but they didn't know how to handle the privacy issues with recording kids, so they discouraged it. Women being missed is not a surprise in hindsight - but it wasn't anticipated.
Oh, I'm sure you're right. I've had people in my personal life (non-technical; "AI enthusiasts") laugh at me over concerns about training bias but this is likely a real world example of it.
I think you can train your own wake word with microWakeWord but I've never done it.
I thought each person's voice had to be trained, and if you didn't go through it the match % was much lower.
With Siri this is true. I'm not positive on the others.
What about your wifi APs sensing which room you are in, with your choice of hilarious dance moves as the trigger ?
Funky chicken for Gemini
Penguin dance for OpenAI
Claude?
> Claude?
The Zoidberg Shuffle?
Wake word detection in low power DSP is a not-quite-COTS product but definitely exists. I believe PC manufacturers are looking at adding it to laptops soon, precisely to use with AI assistants.
Why not use an easier to detect wake “word”, like two claps in quick succession? Or a couple of notes of a melody?
Can't clap if your hands are full and I would not subject my family to my attempts at delivering a melody.
I haven't tried training my own wake word though, I'm tempted to see if it improves things.
Personally I'd pick "Cthulhu"
Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn
What about whistling?
Train a custom wakeword with this? https://github.com/TaterTotterson/microWakeWord-Trainer-Nvid...
or if you have an Apple Silicon Mac, this: https://github.com/TaterTotterson/microWakeWord-Trainer-Appl...
I used it personally, did a lot of research (including asking questions to the creator of microWakeWord), and submitted an upstream PR (I think it's already merged), which improved the resulting model slightly. I imagine the Nvidia version is similar, but I don't have experience with it. I also noticed that the model is so small (~25000 parameters) that the actual training part doesn't even noticeably improve with the GPU; only the TTS voice generation really uses it.
If you are using this, I strongly recommend you create lots of personal samples with the recorder. I personally used 400, 200 from myself and 200 from my partner, with varying moods and in all the rooms we plan on using the assistant. I am considering re-training with more samples. It takes effort, but the resulting model seems to be well worth it.
Oh wow, 200 each? I just deployed it to the device, but I trained with 10. I'll need to redo it if it doesn't work well, but so far so good.
Why do you even need a wake word? Have a model look at full transcript and decide when to engage.
How are you using Gemini in HA?
actually the hardest part of a locally hosted voice assistant isn't the llm. it's making the tts tolerable to actually talk to every day.
the core issue is prosody: kokoro and piper are trained on read speech, but conversational responses have shorter breath groups and different stress patterns on function words. that's why numbers, addresses, and hedged phrases sound off even when everything else works.
the fix is training data composition. conversational and read speech have different prosody distributions and models don't generalize across them. for self-hosted, coqui xtts-v2 [1] is worth trying if you want more natural english output than kokoro.
btw i'm lily, cofounder of rime [2]. we're solving this for business voice agents at scale, not really the personal home assistant use case, but the underlying problem is the same.
Coqui TTS is actually deprecated, the company shut down. I have a voice assistant that is using gpt-5.4 and opus 4.6 using the subsidized plans from Codex and Claude Code, and it uses STT and TTS from mlx-audio for those portions to be locally hosted: https://github.com/Blaizzy/mlx-audio
Here are the following models I found work well:
- Qwen ASR and TTS are really good. Qwen ASR is faster than OpenAI Whisper on Apple Silicon from my tests. And the TTS model has voice cloning support so you can give it any voice you want. Qwen ASR is my default.
- Chatterbox Turbo also does voice cloning TTS and is more efficient to run than Qwen TTS. Chatterbox Turbo is my default.
- Kitten TTS is good as a small model, better than Kokoro
- Soprano TTS is surprisingly really good for a small model, but it has glitches that prevent it from being my default
But overall the mlx-audio library makes it really easy to try different models and see which ones I like.
Do you know which HA integration I would use if I want to try out Qwen 3 ASR in HA? Some screenshots in the OP reference Qwen 3 ASR for STT but I can't seem to find any reference to which integration I'd use.
Seriously for audio conversations the LLM layer is fairly stable. Getting STT and TTS to be reliable has been a much bigger hurdle.
I hear the same phrases 10+ times in a day and they stress things a bit different each time, it seems like an exceptionally hard problem. My dream of a super reliable [llm output stream -> streaming TTS endpoint -> webRTC audio stream] seems pretty much impossible at this point.
Is the goal to trick people into thinking it is a human or to create a high trust robot? I am hoping as voice agents get more sophisticated the stigma around "It's making me talk to a robot" lessens so we don't need to worry so much about convincing someone it is a real person.
I've been working on the flip side of this with ASR models, but the problem space is the same: conversational/real-world data is needed. Whisper often misheard words I said and hallucinated constantly on technical jargon. The solution was to fine-tune Whisper with my own data. The hardest part imo was getting the actual data, which in turn got me to build listenr (https://github.com/rebreda/listenr). It's an always-on VAD-based audio dataset builder. Could be used for building conversational/real-world voice datasets for TTS models too?
After getting it working I was motivated to actually build out the full fine-tuning pipeline. I wrote a little post about it all: https://quickthoughts.ca/posts/listenr-asr-training-data-pro...
80% of my home voice assistant requests really need no response other than an affirmative sound effect.
100% agree. I don't want a "Yes", "Got it", "Will do", or even worse, "I have turned on the Bedroom Light". I want a soft success ding or a low failure boop.
Talk back is how you make sure what you asked for is what happens.
An affirmative beep but the light does not turn on means you have to guess what did.
why would you want an audio notification for a light? it either turns on and it worked or it doesn't turn on. i see no value in having a ding or anything of the kind
if i imagine constant dinging whenever i enter a room and the motion sensor toggles the light in it, i'd go mad
The biggest use for me is "guests will be here soon, turn on the lights in front of the shed where they will park", then later when they are gone, turn them off. I can't see the lights from the house, and the logical place for a switch isn't in the house. Where I can see the lights, a manual switch is better. I don't have most of my lights automated. The ones that are, are that way because I can't see them from where I'd want to check and control them.
i thought it was specifically when using voice - ack/nack
but it might be preference... some people like clicky blue keys, some like silent red keys on their keyboard for example.
Need an ack aside from the system since the response might take a few moments, maybe a "share and enjoy" in a voice that sounds like it is smiling.
Star Trek got it right. two beeps, "Low High" = yup, "High Low" = nope
Also "High High" == affirmative.
That’s what Google Home does. “Hey, Google, good night”. Beep response then turns off the lights, brings down the blinds etc. but if something is out of whack it talks. I find it convenient.
Can you make it sound just like Titus Moody? I want to hear your voice assistant say "No sir, I don't hold with furniture that talks."
> actually the hardest part of a locally hosted voice assistant isn't the llm. it's making the tts tolerable to actually talk to every day.
I would argue that the hardest part is correctly recognizing that it's being addressed. 98% of my frustration with voice assistants is them not responding when spoken to. The other 2% is realizing I want them to stop talking.
One that I have been experimenting with is using analog phones (including rotary ones!) to act as the satellites. I live in an older home and have phone jacks in most of the rooms already so I only had to use a single analog telephone adapter. [0] The downside is I don't have wake word support, but it makes it more private and I don't find myself missing my smart speakers that much. At some point I would like to also support other types of calls on the phones, but for now I need to get an LLM hooked up to it.
[0] https://www.home-assistant.io/voice_control/worlds-most-priv...
I wish I was remotely closer to being this kind of hacker :(
I believe in you.
Do people like talking to voice assistants? I've used one occasionally (mostly for timers when I'm cooking), but most of the time it would be faster for me to just do it myself, and feels much less awkward than talking to empty air, asking it to do things for me. It might be because I just really don't like making more noise than I have to
(Yes, I appreciate that some people may be disabled in such a way that it makes sense to use voice assistants, eg motor problems)
I consider each time I need to pull out my phone and "do it myself" to be a failure of my smart home system.
If a light cannot be automatically on when I need it (like a motion sensor) or controlled with a dedicated button within arms reach (like a remote on my desk) then the third best option is one that lets me control it without interrupting what I'm doing, moving from where I am, using my hands, or possessing anything (a voice assistant).
Do you not just turn the light on when you go in a room, and turn it off again when you go out? All the rooms in my flat have switches next to the door
My lights adjust their brightness and color spectrum automatically throughout the day while also understanding the time of year and sun position. This alone is next level. All are voice/tablet controlled. When I start a movie at night, lights will adjust automatically in my open floor plan first level. All of this operates without me ever having to give any mental energy beyond the initial setup.
This is not just flip a switch territory.
How the hell are you managing that. On a simple micro level I can figure it out, but as a whole over the entire house? o.0
Home Assistant with adaptive lighting.
Many homes have a bunch of lights with their own switches, like lamps. Also there are rooms with multiple entrances, like a living room with a bedroom on the other side from the front door entrance, which would involve walking to the side of the room with the switch then walking back through a dark room after you turn it off. Being able to just get into bed and say "Alexa, turn off all of the lights" is way more convenient than checking 14 light switches around my home.
Yes, that would be a button within arms reach, something I explicitly prefer over the voice assistant. I use them frequently.
I don't have just one light per room though, some spaces like my workshop or living room have a lot of lighting options, and flitting around the room flipping a bunch of switches is clumsy and unnecessary. The preference is always towards automation (e.g. when I play a movie in Jellyfin, the lights dim) but there are situations where I just need to ask for the workbench light.
The Sun moves around, while I am in a room. It might be high up when I enter a room, but after a while there may be clouds or it may have set.
When watching a movie one may dim the light. Once finished one may need more lights.
When going to bed I may want to switch all lights off. When getting up it may need some extra light.
A switch on the door is nice. More switches is better. Being able to control from anywhere may be even nicer.
So I grab my phone, open the homeassistant app, and mess with the settings on my light, or use homeassistant through my browser on my desktop. No yelling at a computer needed
I think you missed the top post "If I have to pull my phone out = failure" part.
Do you have a wife / kids? If so how do you "teach" them this?
My point being that it might be a failure to you but not them, some people don't want it.
This is my struggle, how to get the automation to do what I want without affecting everyone else equally. (And vise versa)
I use it frequently for reminders and calendar events when not at a computer, as voice is faster than the mobile interface (with so many screens) for setting something up
I guess most of my use is whilst driving, to start/stop music or audiobooks, change navigation etc. Although changing navigation through Siri is somewhat painful as it often gets my intended destination wrong lol.
I prefer voice strongly. I don't want to stop what i am doing, find a device, open the app, wait for it refresh, navigate and click to get Milk on a list. Sure you can bring this down a few steps, but all of which still require me to move, have a hand and eye free.
Not voice assistants, but for anything that falls into the body-of-text category (emails, letters, documentation) I just use Dragon NaturallySpeaking, mainly to give myself an RSI break from typing.
A Radiologist friend of mine convinced me to give it a try, apparently radiology reports are dictated in most places nowadays
I think the main frustration is often speed and precision but with modern dictation software it is pretty flawless.
I love it for lists- like my hands are full making something in the kitchen and I can just tell it to add things to my grocery list as soon as I notice I'm out of something.
I started designing and building a voice assistant for myself and then realized that the only time I'd find it useful would be during cooking to set timers. But a loud extractor fan would be running making the voice recognition very difficult.
An extractor fan is the kind of consistent noise that good signal processing and voice recognition ought to be able to strip out, especially if using a dispersed mic array. Even if your voice is much quieter (to your human ears) than the fan. It's a channel separation problem.
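The "consistent noise" observation is the basis of classic spectral subtraction: learn the fan's spectrum while nobody is talking, then subtract that magnitude from each noisy frame. A toy sketch of the core idea (real systems work frame-by-frame with overlap-add; the signals here are synthetic stand-ins):

```python
import numpy as np

n = 1024
t = np.arange(n)

fan = 0.5 * np.sin(2 * np.pi * 60 * t / n)    # steady extractor-fan hum
voice = np.sin(2 * np.pi * 7 * t / n)         # "speech" stand-in

# Noise profile learned during a silent stretch (fan only).
noise_profile = np.abs(np.fft.rfft(fan))

# Noisy frame: fan + voice together.
noisy = np.fft.rfft(fan + voice)

# Subtract the noise magnitude, keep the noisy phase, floor at zero.
cleaned_mag = np.maximum(np.abs(noisy) - noise_profile, 0.0)
cleaned = np.fft.irfft(cleaned_mag * np.exp(1j * np.angle(noisy)), n)

# In the cleaned spectrum, the hum bin (60) is strongly attenuated
# while the voice bin (7) survives.
spec = np.abs(np.fft.rfft(cleaned))
```

A dispersed mic array helps further because the fan is a fixed point source, so beamforming can null it spatially on top of the spectral trick.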
I pretty much only use them for timers and weather, and the occasional lookup for quick random info. And this is all only if I don’t have a phone handy or eg the toddler is going to timeout and I need to set his timer in the midst of him having a meltdown about it.
It’s why I haven’t and won’t enable Gemini, and I’ll likely chuck my nest minis once I’m forced to have an LLM-based experience. Hopefully they’ll be able to at least function as dumb Bluetooth speakers still but I’m not holding out hope on that end
I would, if they worked even 90%.
I mostly set timers because it’s one of the few things that always works.
I don't. I pretty much don't like talking in general, especially if I'm alone. Accordingly, no voice assistants; I don't think I've ever triggered one except accidentally.
I'm still waiting till the promise of voice AI that was shown during the OpenAI demo in 2024 turns real somehow. It's not clear to me why there has been zero progress since then.
What tech can do vs applying it requires it often to be configured and packaged to be usable in that way.
It also needs to work at least 99% of the time if not more. Not easy to do this with non-deterministic models.
If my lights and heat were 99% reliable, I'd be getting new lights and heat.
I took 99% reliable as meaning not having to repeat the command, which given that Siri is something like 50% reliable by that metric, 99% sounds like heaven.
In those cases yeah, 99% isn't reliable enough. I'm not going to tolerate having power down for 3 days out of the year. But in fairness, home automation is less critical than that so 99% reliability is still acceptable to me. I don't think LLMs are anywhere near that, though, nor is there any sign of them getting there any time soon. So it does concern me to use an LLM as the backbone of home automation.
Not easy, but doable, especially if it's a local model that is converting inputs into decisions and commands.
Cloud hosted models definitely can not always be consistent, but it's where I'm learning that prompt durability is a thing.
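One common way to get durability out of a nondeterministic model is to never trust its output directly: validate it against a fixed command schema and reject anything that doesn't conform. A sketch of that pattern, with all service names illustrative rather than real Home Assistant identifiers:

```python
# Validate model output against a fixed schema of allowed services
# and their expected keys; anything else is rejected outright.
ALLOWED = {
    "light.turn_on": {"entity"},
    "light.turn_off": {"entity"},
    "timer.start": {"entity", "duration"},
}

def validate_command(cmd):
    """Accept a dict like {'service': ..., 'entity': ...} only if the
    service is known and no unexpected keys are present."""
    service = cmd.get("service")
    if service not in ALLOWED:
        return False
    extra = set(cmd) - {"service"} - ALLOWED[service]
    return not extra
```

With a gate like this, a flaky model answer degrades to "command rejected, ask again" rather than the house doing something unexpected.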
Their first version is most likely already 10x better than Siri.
> Understands when it is in a particular area and does not ask “which light?” when there is only one light in the area, but does correctly ask when there are multiple of the device type in the given area.
One of my favorite episodes:
I set 2 timers for the same thing somehow. I then tried to cancel one of them.
>“Siri, cancel the second timer”
“You have 2 timers running, would you like me to cancel one of them?”
>“Yes”
“Yes is an English rock band from the 70s…”
>“Siri, please cancel the timer with 2 minutes and 10 seconds on it”
“Would you like me to cancel the timer with 2 minutes and 8 seconds on it?”
>“Yes”
“Yes is an English rock band from the 70s…”
Eventually they both rang and she listened when I said stop.

My favorite is when I ask Siri to set a timer and get back "there are no timers running."
My other favorite is when I ask Siri to set a timer on my watch and it does a web search.
My favourite is when I ask siri to stop the alarm(that is currently going off) and it decides to disable my morning wake up alarm but keep the current alarm going off.
“Siri stop”
“There’s nothing to stop”
> me, suddenly aware of how the AI takeover will happen
> "Stop" is a song by English girl group the Spice Girls from their second studio album, Spiceworld (1997).
At that point I would be very impressed if you could remember what the timers are for.
Helping my kid get ready for shower I had this exchange:
Me: "Text Jane Would you mind dropping down the robe and underpants"
Siri: Sends Jane "Would you mind dropping down"
Me: rolls eyes "Text Jane robe and underpants"
Siri: "I don't see a Jane Robe in your contacts."
Me: wishes I could drown Siri in the bathtub
It's wild to me that Apple got the ability to do the actual speech-to-text part pretty much 100% solved more than half a decade ago, yet struggles in 2026 to turn streams of very simple, correctly-transcribed text into intents in ways that even a local model can figure out. Siri is good STT, a bunch of serviceable APIs that can control lots of stuff, with the digital equivalent of a brain-damaged cat sitting at the center of it guaranteeing the worst possible experience.
It’s wild how many of you have issues with Siri - and to be clear I’m not here to discount those issues, and I very much believe all of the anecdotes here.
For me, Siri on either phone or watch is pretty much perfect - I don’t ask for much, mostly timers or making reminders.
Google’s Nest Minis though? “Lights on” has a 50/50 shot of being a song of the same name, or similar name, or totally unrelated name. Same for “lights off”. If I don’t enunciate “play rain sounds” clearly enough I get an album called “Rain Songs” that is very much NOT calming for bed time. It doesn’t help that none of these understand that if I whisper a command, it should respond quietly - honestly the siris and nests and alexas all got like one iteration and then stopped, it feels like.
I want more features but less LLM. I want more control, and more predictability. Eg if every night around 1am I say “play rain sounds” my god just learn that I’m not, in all likelihood, asking to hear an album I’ve never listened to!
I bought a Home Assistant Voice Preview Edition to try out. It's surprisingly good, but still falls short when compared to Google Home speakers:
- Wake word detection isn't as good as the Google Homes (more false positives, more false negatives - so I can't just tune sensitivity).
- Mic and speakers are both of poor quality in comparison to Google Home devices.
- Flow is awkward. On a Google Home device, you can say "Okay Google, turn on the lights" with no pause. On the Voice PE, you have to say "Hey Mycroft [awkward pause while you wait for the acknowledgement noise] turn on the lights" - it seems like the Google Home devices start buffering immediately after the wake word, but the Voice PE doesn't.
- Voice fingerprints don't exist, so this prevents the device from figuring out that two separate people are talking, or who is talking to it.
- The device has poor identification of background noise, so if you talk to it while there is a TV playing speech in the background, it will continue to listen to the speech from the TV. It will eventually transcribe everything you said + everything from the TV and get confused. (This probably folds into the voice print thing as well.)
On the upside, though:
- Setting it up was really easy.
- All of the entities I want to control with it are already available, without needing to export them or set them up separately in Google Home.
- Despite all of the above complaints, the device is probably 80-90% of what I realistically need to use it day-to-day. If they throw a better speaker and mic array in, I'd likely be comfortable replacing all of my Google Homes.
> it seems like the Google Home devices start buffering immediately after the wake word, but the Voice PE doesn't.
Google Home devices are always buffering. The wake word just tells it to look back in the buffer and start processing.
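That look-back trick is easy to picture as a small ring buffer of recent audio frames: when the wake word fires, you prepend the pre-roll so speech during and right after the wake word isn't lost. A sketch (frame size and buffer depth are made-up numbers):

```python
from collections import deque

PRE_ROLL_FRAMES = 4  # assumption: ~4 frames of look-back

ring = deque(maxlen=PRE_ROLL_FRAMES)  # always-on rolling buffer
captured = []                         # frames handed to the STT stage
listening = False

def on_frame(frame, wake_word_fired):
    """Called for every incoming audio frame."""
    global listening
    if not listening:
        ring.append(frame)
        if wake_word_fired:
            listening = True
            captured.extend(ring)  # look back into the buffer
    else:
        captured.append(frame)
```

This is plausibly why the Voice PE feels like it needs the awkward pause: without the pre-roll, anything spoken before processing starts is simply gone.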
I picked up the same model; including the shipping to Canada, it ended up costing a lot for what it is.
How are you hosting your LLM locally? I tried Ollama on an M4 Mac mini, even with a smaller LLM, the performance was very poor.
Great write-up. I've been going down the self-hosted rabbit hole too – started with just a VPS, ended up building monitoring and security automation around it. The moment you start self-hosting seriously you realize how much 'invisible work' managed services were doing for you.
I've recently purchased a couple of the Home Assistant Voice Preview Edition devices, and they leave a lot to be desired.
The wake word detection isn't great, and the audio quality is abysmal (for voice responses, not music).
Amazon has ruined their Alexa and Echo devices with ads and annoying nag messages.
I'd really like an open alternative, but the basics are lacking right now.
Can those devices (Amazon) be _jail broken_? I was just wondering that this morning while taking a shower.
Generally no. Big tech companies have gotten good at locking down devices to the boot loader. Some of the signing keys for certain OTA versions have leaked, but you can’t rely on that.
Some of the devices contain browsers, and people have set up hacky ways to turn them into thin clients through that, but it’s not particularly reliable IME.
I heard some Chinese brands which made similar hardware for Chinese consumers don’t lock their devices down, letting you flash an open install of Android on them, but I haven’t seen anyone try that IRL.
Youtube is trying to push me to watch a video about jail breaking the Echo Show for a week now. I didn't watch it, but it's probably easy to find.
> llama.cpp (previously Ollama)
I almost fainted
I’m keen to see if Nabu Casa release an update to the Voice Assist hardware sometime soon. Something with the same fidelity and finish of the Amazon and Google options but open would be fantastic.
The best fix I've made to any voice-mode AI is giving it a "done" word. So it has to listen for "pineapple" before it's allowed to process what I said. Just like radio comms (over and out).
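The gating logic for a done word is tiny: buffer everything the user says and only hand it off once the terminator arrives. A sketch (the word choice and function names are illustrative):

```python
DONE_WORD = "pineapple"

def collect_until_done(words):
    """Accumulate words; return the utterance once the done word
    arrives, or None if the user is presumably still talking."""
    buffered = []
    for w in words:
        if w.lower().strip(".,!?") == DONE_WORD:
            return " ".join(buffered)
        buffered.append(w)
    return None
```

The nice property is that end-of-utterance detection becomes deterministic instead of relying on a silence-timeout heuristic.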
I think you will get tired of saying pineapple.
Is there a locally hosted voice assistant for Android phones? One available through F-Droid, if possible.
There are a few. I'm currently using Transcribro.
This is five months old now. Any substantial changes to the recommended setup?
It looks like the author has kept it updated since then.
They mention the "Qwen3.5 (35B)" model for example which was released around 2 weeks ago.
For some anecdata, I've set up Qwen3.5 on a RX 7900XTX last weekend. It runs fine, did some simple coding prompts and got responses in 15-30 seconds. It's my first foray into running models locally just to see what's possible, and I guess I'm happily surprised so far.
Also, the entire setup was done through Codex. I asked Codex to figure out how to run models locally given my architecture (Ubuntu, AMD GPU). It told me which steps to apply and I hit zero snags.
the tts thing is a legit pain, right? i tried a few different voices and they all sounded so robotic. kokoro is interesting, i'll have to check that out.
I've been having a lot of fun using my old Mycroft AI device. Neon is the new software package. It didn't solve the issues highlighted in this thread, but it is a fun open device to hack on. I wrote a little web app that will speak in the standard voice and say things like "hey kids, I'm AI and know everything, and your dad is really cool." They love to yell at me when I do that.
Nice, I run one at dictatorflow.com that I open sourced: lee101/voicetype