<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>DigiCrafter — Field Notes</title>
    <link>https://digicrafter.ai/blog.html</link>
    <atom:link href="https://digicrafter.ai/rss.xml" rel="self" type="application/rss+xml"/>
    <description>Long-form field notes from the DigiCrafter studio. Production lessons, dead ends, and the occasional opinion.</description>
    <language>en-us</language>
    <lastBuildDate>Wed, 22 Apr 2026 10:00:00 +0000</lastBuildDate>

    <item>
      <title>Running a 7-GPU Vulkan inference fleet on AMD silicon — and why</title>
      <link>https://digicrafter.ai/blog-7gpu-vulkan-fleet.html</link>
      <guid isPermaLink="true">https://digicrafter.ai/blog-7gpu-vulkan-fleet.html</guid>
      <pubDate>Wed, 22 Apr 2026 10:00:00 +0000</pubDate>
      <category>Infrastructure</category>
      <description>We built a local AI inference rig out of used mining hardware and consumer AMD cards. Total spend: about $2,000. Here's why we chose AMD, why Vulkan over ROCm, what broke, and the one lesson we kept rediscovering until we finally wrote it down.</description>
    </item>

    <item>
      <title>Section-aware chunking for legal documents: what we got wrong</title>
      <link>https://digicrafter.ai/blog-section-aware-chunking.html</link>
      <guid isPermaLink="true">https://digicrafter.ai/blog-section-aware-chunking.html</guid>
      <pubDate>Sat, 18 Apr 2026 10:00:00 +0000</pubDate>
      <category>Retrieval</category>
      <description>For three weeks our recall on a 50-document GST validation set was stuck at 0.51. Naive recursive chunking does not work on tax law. The fix wasn't a fancier embedder — it was admitting that the chunker had to understand legal structure before it tokenized anything.</description>
    </item>

    <item>
      <title>Why llama.cpp Vulkan beat Ollama at ingest scale (in our setup)</title>
      <link>https://digicrafter.ai/blog-llama-cpp-vs-ollama.html</link>
      <guid isPermaLink="true">https://digicrafter.ai/blog-llama-cpp-vs-ollama.html</guid>
      <pubDate>Sat, 28 Mar 2026 10:00:00 +0000</pubDate>
      <category>Pragmatism</category>
      <description>Ollama is great for desktop chat. It is not great for ingesting 14,000 PDFs at six chunks per second per GPU. Here's the short version, including the table of "what we tried vs. what failed" so you don't have to redo it.</description>
    </item>

    <item>
      <title>Don't trust the cloud: when local-first AI is the right answer</title>
      <link>https://digicrafter.ai/blog-local-first-ai.html</link>
      <guid isPermaLink="true">https://digicrafter.ai/blog-local-first-ai.html</guid>
      <pubDate>Sat, 14 Mar 2026 10:00:00 +0000</pubDate>
      <category>Manifesto</category>
      <description>Most AI consultancies build you a wrapper around an OpenAI key. Six months in, you're locked in, your data lives in their logs, and your costs scale with every user you add. Here's when local-first is the right call, when it isn't, and the cost math that surprised us.</description>
    </item>
  </channel>
</rss>
