Executable Requirements

💡 TL;DR - Requirements-Driven Iterative Development

Executable Requirements uses the spec as direct AI input to produce a runnable project — not a one-shot artifact, but a governed starting point your team iterates from.

  • Any format: structured prose, numbered lists, Gherkin — whatever your team already writes
  • PM-driven workflow: Product Manager prepares a requirements/ folder (logic, message formats, acceptance tests); Dev drops it in the project and types implement reqs <name>
  • AI produces a runnable project and writes back an audit trail (ad-libs.md) of every decision — red flags for review, FYIs for standard patterns
  • Iterative by design: update requirements.md, re-run — each cycle tightens the spec; declarative rules make logic changes safe (automatic ordering and reuse, no cascade of procedural updates)
  • Governance is architectural: rules live on the data, not the path — every new API, agent, or integration inherits them automatically

 

What It Is

Traditional requirements are a handoff artifact: a document a developer reads, interprets, and then implements. Interpretation introduces drift — requirements that describe intent, code that approximates it.

Executable Requirements treats the spec as direct AI input. AI reads the requirements file and produces a runnable project — Python, your IDE, your source control. That project is the executable artifact, not the prompt. The spec and the implementation stay coupled through iteration: update the requirements, re-run, review the audit trail, refine. Declarative rules make that loop practical — when logic changes, you update the rule; ordering and reuse are automatic.

 

Requirement Format: Whatever You Already Write

There is no required format. The spec is whatever your team already produces — prose, numbered lists, Gherkin. The key is structure: clear sections for logic, integrations, and acceptance criteria.

Numbered prose (the simplest form — see samples/prompts/genai_demo.prompt):

Create a system with customers, orders, items and products.

On Placing Orders, Check Credit
    1. The Customer's balance is less than the credit limit
    2. The Customer's balance is the sum of the Order amount_total where date_shipped is null
    3. The Order's amount_total is the sum of the Item amount
    4. The Item amount is the quantity * unit_price
    5. The Item unit_price is copied from the Product unit_price

Use case: App Integration
    1. Publish the Order to Kafka topic 'order_shipping' if the date_shipped is not None.

Gherkin — for teams that already use BDD-style specs (see samples/requirements/Order-EAI/requirements.md):

Feature: Check Credit

  Scenario: Place an order
    Given a customer with a credit limit
    When an order is placed
    Then copy the price from the product
    And multiply by quantity to get the item amount
    And sum item amounts to get the order total
    And sum unpaid order totals to get the customer balance
    And reject if balance exceeds the credit limit

Both formats produce the same output: declarative rules enforced on every path, a generated test suite, and an Admin app — from a single implement reqs prompt.
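Both formats encode the same derivation chain: item amount, order total, customer balance, credit check. As a rough plain-Python restatement of that arithmetic (illustrative only: the generated artifact is declarative rules, not procedural code like this, and the names below are hypothetical stand-ins):

```python
# Illustrative only: plain Python restating the Check Credit chain.
# The real system expresses these as declarative rules.

def item_amount(quantity, unit_price):
    # Rules 4-5: unit_price is copied from Product; amount = quantity * unit_price
    return quantity * unit_price

def order_total(items):
    # Rule 3: Order.amount_total is the sum of Item.amount
    return sum(item_amount(qty, price) for qty, price in items)

def customer_balance(orders):
    # Rule 2: balance is the sum of Order.amount_total where date_shipped is null
    return sum(order_total(o["items"]) for o in orders if o["date_shipped"] is None)

def check_credit(orders, credit_limit):
    # Rule 1: the Customer's balance is less than the credit limit
    return customer_balance(orders) < credit_limit

orders = [
    {"date_shipped": None, "items": [(1, 100.0), (2, 25.0)]},   # unshipped: counted
    {"date_shipped": "2024-01-05", "items": [(5, 10.0)]},       # shipped: excluded
]
print(customer_balance(orders))      # 150.0
print(check_credit(orders, 200.0))   # True
```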

 

Workflow: PM Prepares, Dev Executes

A natural division of labor emerges from the structure:

| Who | Does what |
|-----|-----------|
| Product Manager | Gathers artifacts: DDL, sample messages, architecture notes — in cloud storage, SharePoint, wherever they work |
| Product Manager | Writes requirements.md — sections for logic, integrations, acceptance; includes a message_formats/ sub-folder |
| Developer | Creates docs/requirements/<name>/ in the project repo, drops in requirements.md and supporting files |
| Developer | Types implement reqs <name> in Copilot Agent mode |
| AI | Builds the system, writes docs/requirements/<name>/ad-libs.md with decisions made |
| PM + Dev | Review ad-libs.md — 🔴 items require confirmation, 🟡 are standard patterns; update requirements.md and re-run |

This is the starting point for iterative development, not a one-shot deployment. Each cycle produces a working system your team owns and refines.

What belongs in requirements.md:

  • What to build — tables, handlers, APIs, logic rules
  • Message formats — reference files in message_formats/; include field mappings where non-obvious
  • Phases — what's in scope now vs. deferred
  • Acceptance — how to verify it worked (test commands, expected DB state)

Leave out: implementation details, file names, framework choices — let AI decide those and review the audit trail to see what it chose.
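In practice those sections might look like the following: a minimal skeleton, with hypothetical headings and content (any structure your team prefers works).

```markdown
# Requirements: Order-B2B

## Logic
On placing orders, check credit: balance < credit limit,
balance = sum of unshipped order totals, ...

## Integrations
Inbound B2B order: see message_formats/order_b2b.json.
Map Account to Customer by name.

## Phases
Phase 1: REST endpoint. Deferred: Kafka consume.

## Acceptance
POST the sample message; verify order and item rows exist.
```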

 

EAI: By-Example Integrations

For messaging integrations the requirements spec uses a by-example approach: include a sample JSON message alongside the spec, and AI auto-maps obvious fields silently — you only specify exceptions.

For example, message_formats/order_b2b.json:

{
  "Account": "Alice",
  "Notes": "Kafka order from sales",
  "Items": [
    { "Name": "Widget",  "QuantityOrdered": 1 },
    { "Name": "Gadget",  "QuantityOrdered": 2 }
  ]
}

The corresponding requirements section names the exceptions — fields that rename, join, or map to child collections — and AI infers the rest:

Feature: B2B Order Integration

  Scenario: Accept order from external partner
    Given an inbound B2B order in partner format (message_formats/order_b2b.json)
    When the order is received via a Custom API endpoint named OrderB2B
    Then map Account to Customer by name
    And map Items.Name to Product by name
    And map Items.QuantityOrdered to Item.quantity
    And create the order with all Check Credit rules enforced

An _unresolved guard blocks server start on any field AI can't confidently map — no silent failures.
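The guard's behavior can be sketched in plain Python (a hypothetical illustration, not the framework's actual implementation): auto-map sample fields whose names match model columns, apply the spec's declared exceptions, and refuse to proceed when anything remains unresolved.

```python
# Hypothetical sketch of by-example auto-mapping with an _unresolved guard.
# Column and field names are illustrative, not the framework's real API.

MODEL_COLUMNS = {"customer_name", "notes", "product_name", "quantity"}

# Exceptions declared in the spec: sample field -> model column
FIELD_EXCEPTIONS = {
    "Account": "customer_name",
    "Items.Name": "product_name",
    "Items.QuantityOrdered": "quantity",
}

def auto_map(sample_fields):
    mapping, unresolved = {}, []
    for field in sample_fields:
        leaf = field.split(".")[-1].lower()
        if field in FIELD_EXCEPTIONS:      # spec names the exception
            mapping[field] = FIELD_EXCEPTIONS[field]
        elif leaf in MODEL_COLUMNS:        # obvious match: mapped silently
            mapping[field] = leaf
        else:
            unresolved.append(field)       # cannot map confidently
    if unresolved:
        # the guard: block startup rather than fail silently
        raise RuntimeError(f"_unresolved fields: {unresolved}")
    return mapping

print(auto_map(["Account", "Notes", "Items.Name", "Items.QuantityOrdered"]))
```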

The same by-example pattern applies to outbound Kafka publish: describe the desired JSON shape, AI matches fields from the model, adds # TODO on uncertain ones, and generates the publish rule.
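A hypothetical sketch of that outbound step: fill the declared JSON shape from model data, marking any field without a confident source as a TODO for review (field names here are invented for illustration).

```python
# Hypothetical sketch: build an outbound message to a declared JSON shape.
# Fields with no confident model source are flagged for review.

order = {"customer_name": "Alice", "amount_total": 150.0,
         "date_shipped": "2024-01-05"}

def to_shipping_message(order_row):
    return {
        "Account": order_row["customer_name"],   # matched from the model
        "Total": order_row["amount_total"],      # matched from the model
        "ShippedOn": order_row["date_shipped"],  # matched from the model
        "Carrier": None,  # TODO: no model source found - confirm in requirements
    }

print(to_shipping_message(order))
```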

For full details on mapping patterns, the two-message pattern, and FIELD_EXCEPTIONS, see Integration EAI and Integration Kafka.

 

Human in the Loop: Dev Stays in Control

AI does the initial build — but the developer reviews, owns, and iterates on everything it produces. There are two review surfaces, one for each kind of output:

Logic → Declarative Rules. Business logic in the spec becomes Python rules in logic/logic_discovery/. The rules are short, readable, and directly traceable to the spec — the rule is the requirement, restated with precision. Dev reviews them in the IDE, adjusts as needed, and re-runs the suite. When requirements change, you update the rule; automatic ordering and reuse handle the rest. No cascade of procedural updates to track down.

Message and API mappings → ad-libs.md. Field mappings, Kafka patterns, lookup strategies, and other integration decisions AI had to fill in are reported in docs/requirements/<name>/ad-libs.md. Zero ad-libs means the spec was complete and unambiguous. The same review-and-refine loop applies: read the report, tighten the spec, re-run.

Two severity tiers:

| Tier | Meaning |
|------|---------|
| 🔴 Review Required | AI made a decision that could be wrong — specific action called out |
| 🟡 FYI | Standard pattern applied — no action needed, recorded for transparency |

Example from the Order-EAI sample:

🔴  OrderB2BMapper.py — parent_lookups tuple shape may not match what
    RowDictMapper._parent_lookup_from_child() expects. Test with a POST
    to /api/OrderB2B. If you get a NOT NULL error, adjust the tuple shape.

🟡  check_credit.py — standard Check-Credit rules (copy, formula, sum,
    sum-with-where, constraint). Null-safe guard applied to constraint.

🟡  order_b2b.py — 2-message Kafka pattern applied (blob saved in Tx 1,
    parsed in Tx 2). Required pattern per eai_subscribe.md.

The loop: review ad-libs.md, update requirements.md to resolve any 🔴 items, re-run. Each cycle reduces the number of ad-libs until the spec and the implementation are in full agreement.

 

Try It — Order-EAI in Under 10 Minutes

The Manager ships a ready-to-run sample: samples/requirements/Order-EAI/ — B2B order intake via both a custom REST endpoint and Kafka, with outbound shipping notification and full Check Credit logic.

This is the same system shown in Sample Basic EAI, a step-by-step tutorial that walks through the project with a focus on the EAI patterns.

Step 1 — Create the project (in the Manager terminal):

genai-logic create --project_name=demo_eai_exec_reqmts --db_url=sqlite:///samples/dbs/basic_demo.sqlite

Open the created project in VS Code.

Step 2 — Copy the requirements set (from a terminal inside the created project):

cp -r ../samples/requirements/Order-EAI  docs/requirements/Order-EAI

docs/requirements/ already exists in every created project.

Step 3 — Load context, then run in Copilot Agent mode (not Ask):

Please load `.github/.copilot-instructions.md`.

Then:

implement reqs Order-EAI

AI reads docs/requirements/Order-EAI/requirements.md, builds the system, and writes docs/requirements/Order-EAI/ad-libs.md.

Step 4 — Review the audit trail:

  • 🔴 Review Required — decisions that need your confirmation
  • 🟡 FYI — standard patterns applied, no action needed

Update requirements.md to clarify anything flagged red, then re-run.

Step 5 — Verify (no Kafka required — use the consume_debug endpoint):

curl 'http://localhost:5656/consume_debug/order_b2b?file=docs/requirements/Order-EAI/message_formats/order_b2b.json'

sqlite3 database/db.sqlite "SELECT * FROM order_b2b_message; SELECT * FROM 'order'; SELECT * FROM item;"

 

Deliverables

From one requirements file, AI delivers:

  • Standard JSON:API — filtering, sorting, pagination, optimistic locking
  • Admin app — multi-table, automatic joins, ready on day one
  • Declarative rules — enforced on every path, at commit, automatically ordered and reused
  • B2B API and Kafka integration — raw message persisted first, parse failures recoverable, nothing lost
  • Behave test suite — generated from the rules, not written by hand
  • Logic Report — requirement → rule → execution trace, readable by developers, business users, and auditors
  • ad-libs.md audit trail — AI's decisions, reviewable and iterable
  • Standard project — Python, your IDE, your source control, container-ready

Logic Report

 

How the Rules Engine Works

Governance Architecture

NL intent goes in on the left. Context Engineering directs AI to produce Data Rules — not procedural code. Those rules load into the Rules Engine at startup; dependencies are computed deterministically, not inferred at runtime. The Commit Listener hooks into the ORM. Every transaction — API, agent, workflow, message — passes through one control point.

Because the rules are on the data, not the path, every access path inherits them automatically. Delete an order, ship an order, have an agent update a quantity — none of those need to be anticipated in the spec. A new endpoint or agent added later requires no additional logic.
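Conceptually, the control point resembles an ORM flush listener. Here is a minimal sketch using SQLAlchemy 1.4+ (assumed for illustration; this is not the actual engine): one listener on the Session class derives Item.amount for every path that commits, with no per-endpoint code.

```python
# Conceptual sketch (not the actual engine): a single ORM commit listener
# enforces a derivation on every access path that commits.
from sqlalchemy import Column, Integer, create_engine, event
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Item(Base):
    __tablename__ = "item"
    id = Column(Integer, primary_key=True)
    quantity = Column(Integer)
    unit_price = Column(Integer)
    amount = Column(Integer)  # derived: quantity * unit_price

@event.listens_for(Session, "before_flush")
def apply_rules(session, flush_context, instances):
    # One control point: fires for API, agent, or script alike.
    for obj in list(session.new) + list(session.dirty):
        if isinstance(obj, Item):
            obj.amount = obj.quantity * obj.unit_price

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:          # any "path" that commits...
    session.add(Item(id=1, quantity=2, unit_price=10))
    session.commit()                      # ...passes through the listener
    print(session.get(Item, 1).amount)    # 20
```

The listener is registered on the data layer, so a later endpoint or agent that touches Item inherits the rule without any change to the spec.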

See Logic Operation for details on rule ordering, chaining, and pruning.