The modern inbox is frequently a wasteland of unanswered queries, automated noise, and chaotic requests that require three follow-up emails just to establish the basic facts. While corporate giants throw massive, energy-hungry models at this problem, there is a far more elegant, “old-school meets new-school” solution brewing on the command line. By combining the rock-solid reliability of Unix mail tools with the nuanced reasoning of a 14B-parameter Large Language Model, it is possible to build a digital front desk that does not just acknowledge receipt, but actually thinks.
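To make the idea concrete, here is a minimal sketch of that pipeline: plain POSIX tools pull the facts out of a message, and the result is packaged as a prompt for a local model. The sample email, the prompt wording, and the commented-out Ollama invocation are all illustrative assumptions, not details from the post.

```shell
#!/bin/sh
# Hypothetical sketch: triage an incoming mail with old-school Unix tools
# plus a local LLM. Model name and serving setup are assumptions.

MAIL=$(cat <<'EOF'
From: alice@example.com
Subject: Access request
Hi, could someone reset my VPN credentials? I have tried twice already.
EOF
)

# Old-school part: sed extracts the header fields we care about.
SENDER=$(printf '%s\n' "$MAIL" | sed -n 's/^From: //p')
SUBJECT=$(printf '%s\n' "$MAIL" | sed -n 's/^Subject: //p')

# New-school part: build a prompt asking the model to act as the front desk.
PROMPT="Classify this email as URGENT, ROUTINE, or SPAM, and draft a one-line acknowledgement.
From: $SENDER
Subject: $SUBJECT"

# In a real pipeline you would hand the prompt to the model, e.g.:
#   printf '%s\n' "$PROMPT" | ollama run some-14b-model
printf '%s\n' "$PROMPT"
```

The split matters: the deterministic extraction stays in battle-tested tools, and only the judgment call is delegated to the model.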
Continue reading The Gatekeeper in the Machine: Taming the Inbox with Local LLMs
Beyond the Dashboard: How a 14B LLM Brought Real Intelligence to My Server Monitoring
If you manage Linux servers, you already know the morning ritual. You sip your coffee and stare at your monitoring dashboards. Grafana, Zabbix, Datadog – pick your poison. They are excellent at showing you lines on a graph, but let’s be honest: traditional monitoring is fundamentally “dumb.”
Standard monitoring relies on rigid thresholds. If CPU usage hits 95%, you get an alert. But what if that 95% CPU usage is just the scheduled weekly backup running alongside a routine malware scan? Your dashboard doesn’t care. It fires off an alert anyway, contributing to the slow, inevitable creep of alert fatigue.
I wanted something better. I didn’t just want monitoring; I wanted monitoring plus intelligence.
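The threshold problem is easy to sketch. In the snippet below the readings are hard-coded stand-ins (in practice they would come from `/proc/stat` or `ps`), and the wording of the context block is mine; the point is only the contrast between a bare threshold alert and the context an intelligent monitor could actually reason over.

```shell
#!/bin/sh
# Sketch: a "dumb" threshold check versus the context an LLM-assisted
# monitor would need. All values are faked stand-ins for real readings.

CPU_USAGE=96
THRESHOLD=95
TOP_PROCS="backup.sh 55%
clamscan  38%"

if [ "$CPU_USAGE" -gt "$THRESHOLD" ]; then
    # Traditional monitoring stops here and pages you, no questions asked.
    ALERT="ALERT: CPU at ${CPU_USAGE}% (threshold ${THRESHOLD}%)"

    # An intelligent monitor would instead hand a model the full picture:
    CONTEXT="$ALERT
Top processes:
$TOP_PROCS
Scheduled jobs running: weekly backup, malware scan.
Is this expected maintenance or a real incident?"

    printf '%s\n' "$CONTEXT"
fi
```

Given that context, "the weekly backup is running alongside the malware scan" is exactly the judgment a model can make and a static threshold cannot.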
Continue reading Beyond the Dashboard: How a 14B LLM Brought Real Intelligence to My Server Monitoring
Fedora 42 Meets CUDA 12.9: The Quest to Build vllm (InstructLab)
Over the past couple of weeks I’ve been wrestling with building vllm (with CUDA support) under Fedora 42. Here’s the short version of what went wrong:
Continue reading Fedora 42 Meets CUDA 12.9: The Quest to Build vllm (InstructLab)