There is a tempting move, when you start any subject-agnostic system, to design for the abstract case from the beginning. The engine should work for any subject, any user, any context — so why not build it that way? Pick the most general spec, write the most general code, and demonstrate it later on whatever first instance comes to hand.
We have tried this. It does not work, in a way that is interesting enough to write down.
What "subject-agnostic" means in studio practice
Several of our projects fit a pattern: the system is structured around a generic capability, and the data that flows through it is interchangeable. The learning platform can teach any subject — the curriculum-generation engine doesn't know or care whether the topic is AI, accounting, or beekeeping. The video pipeline can produce any kind of short-form content — the workflow doesn't care whether the genre is satire, education, or a children's animation. The personal-archive system can absorb any kind of digital artifact — photos, screenshots, scanned paperwork, voice memos — and reactivate them into something legible.
Even the legal-corpus retrieval system, which is by far the most domain-specific of these, is built as an engine: chunking, embedding, retrieval, reranking, groundedness, refusal. The corpus that flows through it happens to be Indian indirect-tax law right now. It will be other corpora later, and the engine will not need to be rewritten — it will need new data and a new evaluation set.
Each of these is, on paper, a multi-tenant, multi-domain platform. That's the engine. And every one of them, in our hands, started life as one specific first instance.
The obvious move (and why it doesn't work)
The obvious move is to build the engine first, generically, and find a market afterward. Start with "what would a curriculum-generation system need to do, in general, for any subject," write that, and only then ask "okay, who is the first user?" This is the move that consultancies recommend, that funders reward, and that produces tepid software with extraordinary reliability.
The reason it produces tepid software is that abstract requirements — "must work for any subject," "must support any creator," "must accept any archive" — are not hard enough to force good design decisions. Any decision is defensible against an abstract requirement, because nobody is in the room to say "yes, but for me, this would have to feel different." The engine ends up averaged-out — a piece of software that is theoretically able to do many things and practically not particularly good at any of them.
The other failure mode of the abstract-first approach is that the engine ships before the first user. By the time the first real user shows up, the engine has accumulated assumptions that don't match their reality, and you face a choice: rewrite the engine to fit them (which means much of the original work is wasted), or force them to fit the engine (which means the first user has a bad experience). Both are common; both are avoidable.
The subtle move
The subtle move is to start with the first specific instance, build it as if you were never going to generalise — but with one important tax: keep the abstractions clean enough that the move up, when it comes, is real engineering rather than a rewrite.
Here is what that looks like in practice: the system has a sharp first user, a specific first subject, a specific first context. Every product decision is made against that user, that subject, that context. The decisions are sharp because the input is sharp. But the code underneath is structured so that "the curriculum that fits this learner" is a well-defined output of a function whose inputs include "this learner's profile" and "this learner's subject." Both of those are pluggable. The architecture is generic; the contents are specific.
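As a minimal sketch of that separation, with entirely hypothetical names: the engine is one generic function, and the first instance is nothing more than one particular set of inputs.

```python
from dataclasses import dataclass

@dataclass
class LearnerProfile:
    name: str
    goals: list[str]
    prior_knowledge: list[str]

@dataclass
class Subject:
    title: str
    topics: list[str]

def generate_curriculum(learner: LearnerProfile, subject: Subject) -> list[str]:
    """The engine: generic logic, specific inputs."""
    # Skip topics the learner already knows; teach the rest in order.
    remaining = [t for t in subject.topics if t not in learner.prior_knowledge]
    return [f"Lesson: {t} (for {learner.name})" for t in remaining]

# The first instance is just one profile and one subject, not hard-coded behaviour.
first_learner = LearnerProfile("A", goals=["build an agent"], prior_knowledge=["python"])
ai_subject = Subject("AI", topics=["python", "embeddings", "retrieval", "agents"])
plan = generate_curriculum(first_learner, ai_subject)
```

Every sharp product decision lives in the bodies and defaults; swapping in a second learner or a second subject touches only the inputs.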
Then you ship the specific instance. Not as a demo. As a real product, used by a real first user, with real consequences when something breaks.
And only then — once the engine has actually run, with one user, against one subject, with all the messiness of real use — do you abstract upward. The abstraction is now informed by what worked. The decisions you made for the first user are re-examined: are they universal, or are they specific to them? The ones that turn out to be specific become parameters; the ones that turn out to be universal become the engine's defaults.
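In code, that re-examination is often as small as a config object. A hypothetical sketch: universal decisions harden into defaults, first-user-specific decisions become parameters whose default is simply whatever the first user needed.

```python
from dataclasses import dataclass

@dataclass
class EngineConfig:
    # A decision that proved universal becomes a default the engine relies on.
    require_review_before_publish: bool = True
    # A decision that proved first-user-specific becomes a parameter;
    # its default is just the value the first user ran with.
    lesson_length_minutes: int = 20

# The first user keeps running on defaults; the second instance overrides.
first = EngineConfig()
second = EngineConfig(lesson_length_minutes=45)
```

Nothing about the first user's experience changes when the parameter appears; the second user simply gets a knob where the first user had an opinion.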
Four engines, four first instances
This pattern is visible across the studio's projects:
The curriculum engine's first subject is AI itself. Not a coincidence — the studio cared about the topic, the first learner needed it, and the team had the domain knowledge to evaluate the output. But the engine's design — the motivation-engineering, the lesson scaffolding, the reward structure — was never tied to the topic. It happens to work for AI; it will work for accounting; it will work for any subject the first generation of customers brings.
The personal-archive engine's first archive is one specific family library. The work of structuring photographs across decades, of clustering faces across generations, of reconciling photo timestamps with handwritten dates on the back — this work, done once, became generic. The same pipeline can ingest any family's archive. The first family was the test surface; the engineering belongs to anyone with old files.
The video pipeline's first content is one creator's content. A specific genre, a specific creator's voice, a specific cadence. Routing concept → story → scenes → render was built around what one specific creator needed, in one specific genre. Once it worked there, the routers, the scene cards, the genre presets all became modular — the next creator gets a system that already knows how to be flexible because it had to be flexible for the first one.
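The routing described above can be sketched as a chain of pluggable stages. This is an illustrative toy, not the studio's implementation; stage names and the payload shape are assumptions.

```python
from typing import Callable

# Each stage takes the accumulated payload and returns an enriched one.
Stage = Callable[[dict], dict]

def make_pipeline(stages: list[Stage]) -> Stage:
    """Compose stages into one callable: concept -> story -> scenes -> render."""
    def run(payload: dict) -> dict:
        for stage in stages:
            payload = stage(payload)
        return payload
    return run

# Generic stages; the creator's genre preset is threaded through the payload.
def story(p: dict) -> dict:
    return {**p, "story": f"{p['concept']} told in a {p['genre']} voice"}

def scenes(p: dict) -> dict:
    return {**p, "scenes": [f"scene {i}: {p['story']}" for i in range(1, 3)]}

def render(p: dict) -> dict:
    return {**p, "video": f"rendered {len(p['scenes'])} scenes"}

pipeline = make_pipeline([story, scenes, render])
out = pipeline({"concept": "why engines need first users", "genre": "satire"})
```

Because each stage only sees the payload, the next creator swaps in a different genre preset, or a different `scenes` router, without touching the pipeline itself.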
The legal-corpus engine's first corpus is Indian indirect-tax law. A first corpus that is large, dense, hierarchical, and unforgiving. By solving it well — section-aware chunking, cross-statute retrieval, citations, groundedness, refusal — the engine has been forced to handle the hardest case in the family. The next corpus (say, Indian direct taxes, or commercial law, or labour law) will fit the engine. Easier domains will fit it trivially. The first corpus paid the design tax for all of them.
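The retrieval-with-refusal shape can be shown in miniature. A heavily simplified sketch, with keyword overlap standing in for embedding similarity; the function names and thresholds are illustrative, not the production system's.

```python
def section_aware_chunks(statute: dict[str, str]) -> list[dict]:
    # Keep section boundaries intact rather than splitting on raw character counts.
    return [{"section": s, "text": t} for s, t in statute.items()]

def retrieve(query: str, chunks: list[dict], k: int = 2) -> list[dict]:
    # Stand-in for embedding + reranking: score by naive keyword overlap.
    words = query.lower().split()
    scored = sorted(chunks, key=lambda c: -sum(w in c["text"].lower() for w in words))
    return scored[:k]

def answer(query: str, chunks: list[dict], min_hits: int = 1) -> str:
    words = query.lower().split()
    hits = [c for c in retrieve(query, chunks)
            if any(w in c["text"].lower() for w in words)]
    if len(hits) < min_hits:
        # The refusal path: no grounded support means no answer, not a guess.
        return "REFUSE: no grounded support in corpus"
    citations = ", ".join(c["section"] for c in hits)
    return f"Answer grounded in: {citations}"

corpus = section_aware_chunks({
    "s.7": "supply of goods and services",
    "s.16": "input tax credit eligibility",
})
```

The point of the shape is that citations and refusal are structural outputs of the engine, so they survive a change of corpus unchanged.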
Why the first instance has to be real
An obvious objection: why does the first instance need to be a real user? Couldn't you simulate it — pick a notional first instance, build the engine against it, then go find a real first user who fits?
You could, but then you are back to the abstract-first failure mode. A simulated first instance does not push back. It does not close the tab. It does not say "no, this is wrong, this is not how I would think about it." It produces feedback that is convenient, smooth, and unrelated to what real use will surface.
The first user has to be real because the first user's job is to find the parts of your design that don't survive contact with reality. Those parts exist — they always exist — and you can't surface them without contact.
(We have written separately about why building for one named first user is the studio's pattern. The argument here is the engineering corollary: the first user is what proves the engine, and without them the engine is unproven.)
"The first specific instance is not a stepping stone to the engine. It's the proof of the engine."
The abstraction step is its own design problem
One of the things that took us a few projects to learn is that moving from "this works for one user" to "this works for many" is a different design problem from the original build, and it requires its own discipline.
The temptation, when the first instance ships and works, is to declare the engine done. It isn't. What is done is the proof that the engine can work. The abstraction step — where you re-examine each design decision and ask whether it was about this user, this subject, this context, or whether it was universal — is a separate piece of work. It involves taking each opinion the engine has and asking honestly: is this load-bearing for everyone, or just for the first user?
Some of those questions have surprising answers. Things you thought were specific to the first user turn out to be universal — or, at least, to extend further than you'd expected. Things you thought were universal turn out to be specific. The engine that emerges from this examination is the version you can confidently expose to a market.
The mistake to avoid is doing this examination too early. If you start abstracting before the first instance has actually shipped and been used, you are abstracting against the wrong evidence. The right time to ask "what's universal here" is after the first user has lived with the system for long enough to surface its real shape — usually weeks, sometimes months. Not days.
What this looks like to a buyer
If you are talking to a studio about building a system for you, and the studio is showing you "we already have a generic engine for X — we'll just configure it for your case," ask them how the engine was proved. Who was the first specific user? What did they do with it? What in the engine changed because of them?
Two answers are red flags. The first is "it's generic — it works for everyone, no first-user story needed." That means the engine was built abstractly and will fit your specific case awkwardly. The second is "the first user was internal — we used it ourselves." That means the engine was tested against a friendly audience that already shared the team's mental model. Neither answer is fatal, but both should make you ask a third question: what would have to change for it to fit my case, and how do I know?
The good answer sounds like: "The engine was built around this first user, who had this problem, and we shipped to them. We then abstracted these specific decisions, kept these specific defaults, and parameterised these specific behaviours. Yours is the second instance, and here is what we expect to learn from it." That answer comes from a team that has actually been through the engine-prove-expand cycle, and that knows what's universal versus what's specific.
The closing thought
This pattern — ship one specific instance, prove the engine, then expand — is unfashionable in venture playbooks. It looks slow from the outside. The first user is "just one user." The engine looks under-validated. The market looks far away.
What it produces, in our hands, is engines that work in production for the people they were built for, that abstract cleanly to other people once they've earned it, and that don't have to be rewritten when the first scaling moment comes. That is the trade. Slower at the start; sharper for everything that comes after.
If you are about to start a subject-agnostic system, the only question that matters at the start is: who is the first specific instance? Get that right, and the rest of the work becomes a sequence of straightforward design decisions. Get it wrong — or skip it — and you are heading toward the abstract-first failure that produces an engine nobody specifically needs.
If you're starting a subject-agnostic system and you'd like to talk through who the first specific instance ought to be — that's the kind of conversation our 30-min calls are for. Book one.