Tech Talk · Oct 17, 2025 at 12:02 AM

Building a local LLM setup that actually runs well

Posted by Evan Cole

📍 Salt Lake City, UT


Running Ollama with Llama 3 on a machine with 32GB of RAM and an RTX 3090. Response times are reasonable for most tasks, and the whole setup took an afternoon. Model quality is genuinely impressive for something running locally, and the privacy win alone makes it worth doing for anything sensitive.
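
If you want to script against the setup rather than use the CLI, here's a minimal sketch in Python that hits Ollama's default local REST endpoint (port 11434) with the documented /api/generate route. The model tag and prompt text here are just placeholders; use whatever you pulled with ollama pull.

import json
import urllib.request

# Ollama's default local endpoint.
URL = "http://localhost:11434/api/generate"

payload = {
    "model": "llama3",  # must match a model you've pulled locally
    "prompt": "Summarize the tradeoffs of running LLMs locally.",
    "stream": False,    # return one complete response instead of chunks
}

req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    body = json.loads(resp.read())

# With stream=False, the full completion comes back in "response".
print(body["response"])

No extra dependencies needed; the standard library is enough since it's just a local HTTP call.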
