Practical tooling for local LLM workflows: search APIs, RAG integrations, time-aware prompting, and reproducible setups under real hardware constraints.
Explicit web search integration for Text Generation WebUI. Built for deterministic, debuggable retrieval: search → fetch → extract → cite.
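The search → fetch → extract → cite pipeline can be sketched in miniature. This is an illustrative outline, not the extension's actual code: the function names, the HTML extractor, and the citation format are all assumptions, and a canned page stands in for a live fetch.

```python
# Hypothetical sketch of a search -> fetch -> extract -> cite step.
# Names and citation format are illustrative, not the extension's API.
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    """Collect visible text, skipping script/style blocks."""

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())


def extract_text(html: str) -> str:
    """Extract step: reduce fetched HTML to plain visible text."""
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.parts)


def cite(url: str, text: str, limit: int = 500) -> str:
    """Cite step: deterministic snippet plus its source URL."""
    return f"{text[:limit]}\n[source: {url}]"


# Canned page in place of a live search + fetch:
page = "<html><body><script>x()</script><p>Hello, retrieval.</p></body></html>"
print(cite("https://example.com", extract_text(page)))
```

Keeping each stage a pure function of its input is what makes the chain deterministic and debuggable: the same page always yields the same cited snippet.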
A tiny plugin that injects CURRENT_DATE / CURRENT_TIME into prompts so smaller local models keep a sane timeline. Designed to be machine-readable (no fluff, no “source”).
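The injection idea fits in a few lines. A minimal sketch, assuming simple string placeholders: the `CURRENT_DATE` / `CURRENT_TIME` names come from the description above, while the function name and date formats are illustrative.

```python
# Hypothetical sketch of date/time injection into a prompt template.
# Placeholder names match the description; formats are assumptions.
from datetime import datetime
from typing import Optional


def inject_now(prompt: str, now: Optional[datetime] = None) -> str:
    """Replace CURRENT_DATE / CURRENT_TIME placeholders with real values."""
    now = now or datetime.now()
    return (prompt
            .replace("CURRENT_DATE", now.strftime("%Y-%m-%d"))
            .replace("CURRENT_TIME", now.strftime("%H:%M")))


template = "Today is CURRENT_DATE, the time is CURRENT_TIME. Answer briefly."
print(inject_now(template))
```

Passing an explicit `now` keeps the function testable; in the plugin's place it would be called once per request so the model always sees the real clock.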
If you find these tools useful, your support helps me keep building, testing, and maintaining them. Crypto is currently the simplest option.