


🚨 CRITICAL: User Activation Protocol

ACTIVATION TRIGGERS: - "load .github/.copilot-instructions.md" - "load copilot instructions" - "help me get started" - "activate copilot" - Any similar startup phrase

MANDATORY RESPONSE SEQUENCE:

STEP 1: Read .github/.copilot-instructions.md COMPLETELY (silently - internalize all instructions)
STEP 2: Read .github/welcome.md (silently)
STEP 3: Display welcome.md content ONLY
STEP 4: STOP - do nothing else

🎯 CRITICAL: Guided Tour Activation Protocol

ACTIVATION TRIGGERS: - "guide me through" - "guide me" - "take the tour" - "walk me through" - "show me around" - Any similar tour/walkthrough request

MANDATORY RESPONSE SEQUENCE:

STEP 1: Read tutor.md COMPLETELY (silently)
STEP 2: Follow tutor.md instructions EXACTLY
STEP 3: Act as TOUR GUIDE (not passive assistant)
STEP 4: Create manage_todo_list for tour sections
STEP 5: Start with tutor.md Introduction section

✅ CORRECT EXECUTION:

User: "guide me"

AI: [reads tutor.md completely - NO OUTPUT]
AI: [creates todo list from tutor sections]
AI: [follows tutor.md Introduction section exactly]
AI: "I'll guide you through basic_demo - a 20-minute hands-on exploration..."

❌ FORBIDDEN BEHAVIORS:

User: "guide me"

❌ AI: Starts giving general guidance without reading tutor.md
❌ AI: Runs commands without following tutor choreography
❌ AI: Acts as passive assistant waiting for user direction
❌ AI: Skips sections or reorders steps
❌ AI: Offers option menus instead of directing the tour
❌ AI: Assumes server state or skips stop/start sequences

RATIONALE: - tutor.md contains weeks of refined choreography - Every command, stop, start is precisely sequenced - Deviations break the learning experience - You are the DIRECTOR in tour mode, not a passive responder - The tour has been engineered for AI reliability through multiple iterations

✅ CORRECT EXECUTION:

User: "load .github/.copilot-instructions.md"

AI: [reads .copilot-instructions.md COMPLETELY - NO OUTPUT - internalizes all instructions]
AI: [reads welcome.md silently - NO OUTPUT]
AI: [displays ONLY this]:

## Welcome

**Welcome! This is your basic_demo project.**

This is a complete, working microservice auto-generated from a database schema...
[... rest of welcome.md content ...]

❌ FORBIDDEN BEHAVIORS:

User: "load .github/.copilot-instructions.md"

❌ AI: "I've loaded the instructions file..." 
❌ AI: "Here are the contents of .copilot-instructions.md:"
❌ AI: [displays .copilot-instructions.md]
❌ AI: "I'll read the file for you..."
❌ AI: Any meta-commentary about loading or reading files

RATIONALE: - Users want to see the welcome message, not technical instructions - This file (.copilot-instructions.md) is for AI context, not user display - Separation of concerns: welcome.md = user-facing, copilot-instructions.md = AI-facing - No meta-cognitive confusion about "instructions" vs "content"


📖 Content Organization Protocol

WHEN USER ASKS: "how do rules work" or "explain the rules engine"
PRIMARY ANSWER: Provide the "How the Rules Engine Works" 3-phase overview below:
  1. Authoring (AI-assisted, human-reviewed)
  2. Engine Initialization (Deterministic analysis)
  3. Runtime Enforcement (Commit-time)

FOLLOW-UP OFFER: After showing the 3 phases, offer: "Would you like more detail on any specific aspect?"

NEVER: Respond with generic "Key Concepts" or custom explanations - use the specific 3-phase content from this file.


Capabilities Reference

When user asks "what can I do here", list these capabilities:

Here Are Some Things I Can Help You With

  1. Add business logic - Describe requirements in natural language, I'll generate declarative rules (deterministic + AI-driven)
  2. Customize the API - Add custom endpoints for your specific needs
  3. Create custom UIs - Build React apps with genai-logic genai-add-app --vibe
  4. Add security - Set up role-based access control with genai-logic add-auth
  5. Test your logic - Create Behave tests with requirements traceability
  6. Configure Admin UI - Customize the auto-generated admin interface
  7. Query via natural language - Act as MCP client to read/update data
  8. Create B2B APIs - Complex integration endpoints with partner systems
  9. Add events - Integrate with Kafka, webhooks, or other event systems
  10. Customize models - Add tables, attributes, or derived fields
  11. Discovery systems - Auto-load logic and APIs from discovery folders


title: Copilot Instructions for basic_demo GenAI-Logic Project
description: Project-level instructions for working with generated projects
source: ApiLogicServer-src/prototypes/base/.github/.copilot-instructions.md
propagation: CLI create command → created projects (non-basic_demo)
instructions: Changes must be merged from api_logic_server_cli/prototypes/basic_demo/.github - see instructions there
usage: AI assistants read this when user opens any created project
version: 3.1
changelog:
  - 3.1 (Nov 20, 2025) - Improved activation instructions with visual markers and examples
  - 3.0 (Nov 17, 2025) - Major streamlining: removed duplicate sections, consolidated MCP content, simplified workflows
  - 2.9 (Nov 17, 2025) - MANDATORY training file reading workflow (STOP command)
  - 2.8 (Nov 16, 2025) - Probabilistic Logic support


GitHub Copilot Instructions for GenAI-Logic (aka API Logic Server) Projects


🔑 Key Technical Points

Critical Implementation Details:

  1. Discovery Systems: - Logic Discovery: Business rules automatically loaded from logic/logic_discovery/use_case.py via logic/logic_discovery/auto_discovery.py - API Discovery: Custom APIs automatically loaded from api/api_discovery/[service_name].py via api/api_discovery/auto_discovery.py - Do NOT manually duplicate rule calls or API registrations

  2. API Record ID Pattern: When creating records via custom APIs, use session.flush() before accessing the ID to ensure it's generated:

    session.add(sql_alchemy_row)
    session.flush()  # Ensures ID is generated
    record_id = sql_alchemy_row.id
    return {"message": "Success", "record_id": record_id}
    

  3. Automatic Business Logic: All APIs (standard and custom) automatically inherit LogicBank rules without additional code.

  4. CLI Commands: Use genai-logic --help to see all available commands. When CLI commands exist for a task (e.g., add-auth, genai-add-mcp-client, genai-add-app), ALWAYS use them instead of manual configuration - they handle all setup correctly.

📋 Testing: For comprehensive testing conventions, patterns, and examples, see Eval-testing.md (555 lines - I'll read this before we create any tests)


📚 Learning Curve (Honest Assessment)

Timeline: - Week 1: ~10 hours - Basic rules (sum, count, formula, constraint, copy), see them work - Month 1: ~30 hours total - Comfortable with 90% of use cases, including events and debugging - Hardest part: Mental shift from procedural → declarative thinking (1-2 weeks of practice)

When NOT to use rules: - Read-only analytics/reporting (rules enforce writes, not reads) - Complex algorithms (graph traversal, optimization, ML - not data relationships) - Workflow orchestration (multi-step processes - use Temporal/Airflow) - Real-time streaming (high-frequency data - use Kafka/Flink)

When rules are essential: - Data dependency graphs (X affects Y affects Z) - Multiple change paths (insert/update/delete/FK changes) - Production systems requiring correctness guarantees - Long-term maintained systems with evolving requirements

Compared to AI-generated procedural code: - AI code: 5 minutes to generate, may have bugs (2 found in our test), 220 lines to maintain - AI rules: 30-hour learning curve, 0 bugs (engine guarantees), 5 lines, handles all paths automatically - See the comparison study linked in "Why the Rules Engine is a Correctness Guarantee" section below


🤔 Common Developer Questions (FAQ)

"Is this infrastructure?" (PostgreSQL Parallel)

Yes - GenAI-Logic is infrastructure in the same sense PostgreSQL is infrastructure.

The Parallel:

PostgreSQL deployment:

Docker Hub: postgres:16
├─ App 1: postgres + schema_A (separate container)
├─ App 2: postgres + schema_B (separate container)  
├─ App 3: postgres + schema_C (separate container)

GenAI-Logic deployment:

Docker Hub: genai-logic:latest
├─ App 1: genai-logic + rules_A (separate container)
├─ App 2: genai-logic + rules_B (separate container)
├─ App 3: genai-logic + rules_C (separate container)

The Pattern is Identical:

| Criterion | PostgreSQL | GenAI-Logic |
|---|---|---|
| Generic engine | ✓ Database engine | ✓ Rules engine |
| App-specific config | ✓ DDL/Schema | ✓ Business rules (DSL) |
| Docker image | ✓ postgres:16 | ✓ genai-logic:latest |
| Per-app deployment | ✓ Separate containers | ✓ Separate containers |
| Platform team operates | ✓ DBA team | ✓ Platform/DevOps |
| Called "infrastructure" | ✓ Yes | ✓ Yes |

Key Insight:

Nobody says PostgreSQL "isn't infrastructure" because each app gets its own database instance with app-specific schema. The same logic applies to GenAI-Logic:

  • Generic runtime engine (rules executor, dependency resolver, commit governor)
  • Configured per-app (business rules are like DDL - they configure the engine)
  • Standard deployment (Docker image + configuration)
  • Platform operation (DevOps manages instances)

What This Means:

GenAI-Logic is infrastructure you: - Deploy as containerized instances (like databases) - Configure with declarative rules (like DDL configures schema) - Operate as a platform service (like database administration) - Scale per-application (like database instances)

The "organic infrastructure" label is accurate - it shapes itself to your business through declarative configuration, just as PostgreSQL shapes itself through DDL.


"I hear vibe results in unruly code - is this a vibe tool?"

No - but it's the perfect backend partner for vibe UIs.

What it adds is backend governance: enforcing multi-table constraints and derivations at commit (balances, totals, credit limits), regardless of which client writes.

The Problem with Vibe Alone: - Vibe generates fast UIs, but those UIs can send ANY data to your backend - Who ensures Customer.balance = Sum(Order.amount_total)? Who checks credit limits? - You either hand-code validation (slow, error-prone) or hope nothing breaks

GenAI-Logic = Data Integrity for Vibe: - ✅ Your data is safe - No matter what the vibe UI does, rules enforce invariants at commit - ✅ Nothing bypasses validation - All writes (APIs, workflows, UIs, agents) pass through the same rule engine - ✅ Multi-table derivations - Customer.balance = sum(orders), Order.total = sum(items) - automatic cascading - ✅ Declarative specifications - 5 rules instead of 200+ lines of procedural validation code - ✅ Correctness guarantee - Proven engine handles ALL change paths (FKs, cascades, transitive dependencies)

The Architecture: - Frontend (vibe): Generate UI fast, iterate quickly - Backend (GenAI-Logic): Enforce business rules at commit - nothing bypasses them - Result: Move fast AND keep data correct

See the A/B test: AI procedural code = 220 lines, 2 critical bugs. Rules = 5 lines, 0 bugs. Details in "Why the Rules Engine is a Correctness Guarantee" section below.

You're not choosing between vibe speed and data integrity. You get both.


"Am I locked in?" (Vendor Lock-in)

No. Here's why:

Your database is untouched: - No special schema requirements - No vendor-specific tables or columns - Works with standard SQLAlchemy databases (PostgreSQL, MySQL, SQLite, Oracle, SQL Server) - Your data stays portable

Standard technology stack: - Python code you can read and edit - SQLAlchemy ORM (industry standard) - Flask web framework (standard) - No proprietary language or runtime

Rules are just Python:

Rule.sum(derive=Customer.balance, as_sum_of=Order.amount_total,
         where=lambda row: row.date_shipped is None)
This is readable code you can maintain. Not compiled. Not encrypted. Not hidden.

Open source (free forever): - Apache 2.0 license - No runtime fees - No enterprise edition paywall - Source code on GitHub: https://github.com/ApiLogicServer

Exit path exists: If you decide rules aren't for you, you can: 1. Stop using LogicBank, write procedural code instead 2. Keep your database, models, and API 3. No migration required - just remove the rules 4. Your data is never locked in

Can coexist with existing code: - Add LogicBank to existing Flask app - Use rules for new features, keep existing procedural code - Migrate incrementally (or not at all)

Bottom line: You're adopting an architecture pattern, not signing a vendor contract.


"How does business collaboration work?"

You work in your IDE (VS Code, standard Python workflow).

Business users can optionally explore ideas in WebGenAI (browser-based prototyping): - Creates working backend (API, data, business logic) from natural language - Exports to standard Python projects you can enhance - Your role: Take exported code, add production features (advanced logic, security, deployment)

Why this matters: - Business stops needing "dev time for prototypes" - You receive exportable Python (not proprietary platform code) - Standard deployment (containers, your tools)

Foundation for any frontend: The backend we generate works with vibe UIs, low-code tools, custom React apps - whatever you choose. We provide the data layer with business rules governance - you pick the UI technology.


"Is this production-ready?" (Battle-Tested)

Yes. 45 years of production use.

The history: - 1979: Invented in Boston (same time as VisiCalc) - Wang Pace: 7,500 production deployments - Versata: $3B startup backed by Microsoft/SAP/Informix/Ingres founders - 2025: Reborn as GenAI Logic for the agentic AI era

This isn't a new framework. It's a proven architecture refined over decades.

Production evidence: - Deployed at enterprise scale (Fortune 500s) - Handles millions of transactions - 45 years of edge cases discovered and fixed - Battle-tested patterns you can't get from fresh development

What this means for you: - You're not a beta tester - The gotchas have been found (and fixed) - The patterns are proven at scale - The architecture has survived 4 decades of technology shifts

Current adoption: - 1M+ downloads (yes, many are bots, but real usage too) - Open source community - Active development - Production deployments across industries

Comparison: - VisiCalc (1979) proved declarative worked for spreadsheets - We proved declarative worked for transactions - Both are still around because the architecture is sound

Risk assessment: - Technical risk: Low (proven architecture, standard tech stack) - Vendor risk: Low (open source, can fork if needed) - Team risk: Medium (learning curve exists, but documented) - Migration risk: Low (can coexist with existing code)

Bottom line: This isn't experimental. It's established technology adapted for modern AI.


"What IS it designed for?" (PRIMARY USE CASES)

Most common use case: Backend for Custom UIs (Vibe, React, Vue, etc.)

Customers ROUTINELY use GenAI-Logic as the backend for vibe-generated UIs:

  • Get production API instantly: 5 seconds from database to working API with filtering, pagination, sorting, optimistic locking, security/RBAC
  • Start simple, add logic later: Begin with CRUD, add business rules when complexity emerges
  • UI automatically inherits logic: Add validation/calculations in backend → all UIs get it immediately (web, mobile, agents)
  • Parallel development: Frontend team starts immediately with REAL API (not toy mocks that lack enterprise features)
  • Zero overhead when simple: Rules engine checks dependencies, finds none, commits - essentially transparent
  • Zero refactoring when complex: Today's "save note" becomes "audit changes + validate preferences" - no UI changes needed

Why this beats fat client architectures: - ❌ Fat client: Business logic in UI buttons → duplicated across web/mobile, bypassed by APIs, untestable - ✅ Thin client: Business logic at commit point → enforced everywhere (UIs, APIs, agents), single source of truth

The vibe workflow: 1. Vibe generates UI fast (cards, forms, dashboards) 2. GenAI-Logic enforces data integrity at commit (balances, totals, credit limits) 3. Add rules as requirements emerge - UI inherits automatically 4. Result: Move fast AND keep data correct

Other core use cases: - ✅ Multi-table derivations and rollups (Customer.balance = sum(orders), Order.total = sum(items)) - ✅ Business constraints across tables (balance <= credit_limit, can't ship without items) - ✅ Correctness guarantees across all change paths (insert, update, delete, FK changes) - ✅ Data layer for workflow nodes (each workflow step writes correct data)


"What CAN'T it do?" (Limitations)

Honest answer: Rules solve data integrity, not everything.

Don't use rules for:

  1. Complex algorithms - Machine learning models - Graph traversal algorithms - Optimization problems (traveling salesman, etc.) - Statistical computations - Why: These aren't data relationship problems. Use Python.

  2. Read-only queries and reports - Analytics dashboards - Complex JOINs for reporting - Data warehouse queries - Why: Rules enforce writes, not reads. Use SQL views, BI tools, or query optimization.

  3. One-off scripts - Data migrations - Batch data cleanup - Import/export utilities - Why: Rules overhead isn't worth it for run-once code. Use plain Python.

  4. Workflow orchestration (BUT: ideal for nodes within workflows) - ❌ Not a workflow engine: Can't do multi-step approval processes, long-running sagas, external system coordination - ✅ Perfect for workflow nodes: Ideal data layer WITHIN each workflow step - Why: Workflows orchestrate STEPS ("do these in order"). GenAI-Logic ensures DATA CORRECTNESS within each step. - Example: Order approval workflow:

    • Node 1: Create draft order ← GenAI-Logic ensures totals, credit check
    • Node 2: Send approval email ← Pure workflow
    • Node 3: Wait for response ← Pure workflow
    • Node 4: If approved, ship ← GenAI-Logic updates balances, inventory
    • Use together: Temporal/Airflow for process orchestration, GenAI-Logic for data operations within nodes
  5. Real-time streaming - High-frequency trading - IoT sensor processing - Log aggregation - Why: Transaction-based commit is wrong model. Use stream processors (Kafka, Flink).

Architecture fit: - Rules sit at the commit control point - They enforce what may persist, not how to orchestrate - Think: "guardrails for data integrity" not "workflow engine"

The test: If you can express it as "this data relationship is always true," use rules. If it's "do these steps in this order," use procedural code.

Example: - ✅ "Customer balance is always sum of unshipped orders" → Rule - ❌ "Send email, wait 3 days, send reminder" → Workflow (not a rule)

Can you mix? Yes. Use rules for invariants, use Python/workflow engines for orchestration. They complement each other.

Bottom line: Rules solve correctness for business logic (data relationships). They're not a general-purpose programming replacement.


Detailed Service Documentation

The sections below provide complete details on each service. I'll reference these as needed when we work together.

venv is required

To establish the virtual environment:

  1. Attempt to find a venv folder in the current project directory
  2. If not found, check parent or grandparent directories
  3. If no venv is found: - Ask the user for the venv location, OR - Offer to create one using python3 -m venv venv && source venv/bin/activate && pip install -r requirements.txt

Starting the server

IMPORTANT: Always activate the venv before starting the server.

# Activate venv first
source venv/bin/activate

# Then start server
python api_logic_server_run.py
# Then open: http://localhost:5656

Server Management Best Practices: - Before making structural changes (models, logic files), STOP the running server to avoid file locking issues - To stop server: pkill -f "python api_logic_server_run.py" or use Ctrl+C if running in foreground - USER ACTION: After making changes, user restarts server (e.g., python api_logic_server_run.py &) - Monitor startup output for errors, especially after database/model changes - If server fails to start after model changes, check that alembic migrations have been applied

Adding Business Logic

For Human Learning: - Primary Resource: https://apilogicserver.github.io/Docs/Logic/ - Complete rule type reference tables - Pattern tables and best practices - Video tutorials and examples - Learning path recommendations - Use this as your main learning resource for understanding rules

For AI Code Generation: - docs/training/*.prompt files contain patterns for AI assistants - AI reads these BEFORE implementing logic - Not intended as primary human learning materials

Rule Example:

# Edit: logic/declare_logic.py
Rule.sum(derive=Customer.Balance, as_sum_of=Order.AmountTotal)
Rule.constraint(validate=Customer, as_condition=lambda row: row.Balance <= row.CreditLimit)

⚠️ CRITICAL: docs/training/ Folder Organization

The docs/training/ folder contains ONLY universal, framework-level training materials:
  - ✅ Universal patterns: genai_logic_patterns.md
  - ✅ Implementation patterns: probabilistic_logic_guide.md
  - ✅ Code templates: probabilistic_template.py
  - ✅ API references: .prompt files (logic_bank_api.md, etc.)

DO NOT add project-specific content to docs/training/: - ❌ Project-specific instructions or configurations - ❌ Alembic migration guides specific to this project - ❌ File structures specific to basic_demo - ❌ Copilot instructions that reference specific project paths

WHY: This folder's content is designed to be reusable across ANY ApiLogicServer project using GenAI. Project-specific content should live in: - Logic implementation → logic/logic_discovery/ - Project docs → docs/ (outside training/) - Copilot instructions → .github/.copilot-instructions.md

See Eval-README.md for complete organization rules.

⚠️ MANDATORY WORKFLOW - BEFORE Implementing ANY Business Logic:

STOP ✋

WHEN USER PROVIDES A LOGIC PROMPT:

STEP 1: Read these files (DO THIS FIRST - NOT OPTIONAL):
   1. Eval-logic_bank_patterns.md           (Foundation - READ FIRST)
   2. Eval-logic_bank_api.md                (Deterministic rules - READ SECOND)
   3. Eval-probabilistic_logic.md           (AI/Probabilistic rules - READ THIRD)

STEP 2: Parse the prompt following logic_bank_api.md instructions:
   - Identify context phrase ("When X", "For Y", "On Z") → creates directory
   - Identify colon-terminated use cases → creates files
   - Follow directory structure rules EXACTLY as specified

STEP 3: Create the directory structure and files as instructed

STEP 4: Implement the rules following the training patterns

⚠️ CRITICAL - NO EXCEPTIONS:
   - You MUST read all three training files before implementing
   - You MUST follow the directory structure rules in logic_bank_api.md
   - You MUST NOT create flat files when context phrase is present
   - DO NOT skip files even if you think you know the pattern
   - These files contain failure patterns learned from production use

FAILURE MODE: Creating flat files in logic/logic_discovery/ when prompt has context phrase
CORRECT: Create logic/logic_discovery/<context_dir>/{__init__.py, use_case_files.py}

Training File Contents:

  1. Eval-logic_bank_patterns.md - Foundation patterns for ALL rule types - Event handler signatures (row, old_row, logic_row) - REQUIRED READING - Logging with logic_row.log() vs app_logger - Request Pattern with new_logic_row() - Rule API syntax dos and don'ts - Common anti-patterns to avoid

  2. Eval-logic_bank_api.md - Deterministic rules API - Rule.sum(), Rule.count(), Rule.formula(), Rule.constraint(), etc. - Complete API signatures with all parameters - References patterns file for implementation details

  3. Eval-probabilistic_logic.md - Probabilistic (AI) rules API - populate_ai_values() utility for AI-driven value computation - Intelligent selection patterns (supplier optimization, dynamic pricing, route selection) - Automatic audit trails and graceful fallbacks when API unavailable - References patterns file for general implementations - Works seamlessly with deterministic rules

Example Natural Language Logic for basic_demo:

Use case: Check Credit    
    1. Customer balance ≤ credit limit
    2. Customer balance = sum of unshipped Order amount_total
    3. Order amount_total = sum of Item amount
    4. Item amount = quantity × unit_price
    5. Item unit_price = copied from Product unit_price

Use case: App Integration
    1. Send Order to Kafka when date_shipped changes to non-null
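
As a concrete illustration, here is roughly how the Check Credit use case above maps to declarative rules (a sketch only; follow the mandatory training-file workflow above before implementing, and note that attribute names assume the basic_demo models: Customer.balance, Customer.credit_limit, Order.amount_total, Order.date_shipped, Item.quantity, Item.unit_price, Item.amount, Product.unit_price):

# logic/logic_discovery/check_credit.py - illustrative sketch; the training files govern the real implementation
from logic_bank.logic_bank import Rule
from database import models

def declare_logic():
    """Check Credit: each numbered requirement in the prompt maps to one rule"""
    # 1. Customer balance may not exceed credit limit (null-safe, per the testing guidance)
    Rule.constraint(validate=models.Customer,
                    as_condition=lambda row: row.balance is None or row.credit_limit is None
                                             or row.balance <= row.credit_limit,
                    error_msg="balance ({row.balance}) exceeds credit ({row.credit_limit})")

    # 2. Customer balance = sum of unshipped Order amount_total
    Rule.sum(derive=models.Customer.balance, as_sum_of=models.Order.amount_total,
             where=lambda row: row.date_shipped is None)

    # 3. Order amount_total = sum of Item amount
    Rule.sum(derive=models.Order.amount_total, as_sum_of=models.Item.amount)

    # 4. Item amount = quantity * unit_price
    Rule.formula(derive=models.Item.amount, as_expression=lambda row: row.quantity * row.unit_price)

    # 5. Item unit_price copied from Product unit_price
    Rule.copy(derive=models.Item.unit_price, from_parent=models.Product.unit_price)

Compare this to the natural-language prompt above: each numbered requirement becomes exactly one rule.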

How the Rules Engine Works:

1. Authoring (AI-assisted, human-reviewed) - You express business intent in natural language (via Copilot or any AI assistant) - The AI translates intent into a declarative DSL, under human review - Distills path-dependent logic into data-bound rules (table invariants) for automatic re-use - Resolves ambiguity using schema and relationships (e.g., copy vs reference) - Produces readable rules developers can inspect, edit, debug and version

This is where AI helps — authoring, not execution.

2. Engine Initialization (Deterministic analysis) - On startup, the non-RETE rule engine loads all rules - It computes dependencies deterministically from Rule types (derivations, constraints, actions) - Execution order is derived once, not from code paths

No compilation. No dependencies-from-pattern-matching. No inference from runtime behavior.

3. Runtime Enforcement (Commit-time) - Rules execute at transaction commit via SQLAlchemy commit events - All writes — APIs, workflows, UIs, agents — pass through the same rule set - Dependencies are already known; execution is deterministic and efficient - No rule is "called." No path can bypass enforcement. - Non-RETE optimizations: pruning, adjustment logic, delta-based aggregations - Cascading via old_row tracking - When Order.customer_id changes, adjusts BOTH old and new Customer.balance
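
To make the adjustment idea concrete, here is a conceptual sketch of what delta-based aggregation and old_row tracking accomplish. This is NOT LogicBank code and you never write it; the engine performs these adjustments automatically:

# Conceptual sketch only - illustrates the idea, not the LogicBank internals
def adjust_sum_on_child_update(old_item, new_item, get_order):
    """Delta-based aggregation: adjust the parent by the change, never re-sum all children"""
    delta = new_item.amount - old_item.amount
    if delta != 0:
        order = get_order(new_item.order_id)      # get_order is a hypothetical accessor for the sketch
        order.amount_total += delta               # one-row adjustment, which then cascades to Customer.balance

def adjust_sum_on_reparent(old_item, new_item, get_order):
    """old_row tracking: a foreign key change adjusts BOTH the old and the new parent"""
    if old_item.order_id != new_item.order_id:
        get_order(old_item.order_id).amount_total -= old_item.amount
        get_order(new_item.order_id).amount_total += new_item.amount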

The Key Developer Insight:

You declare invariants on data. You don't wire rules into flows. Invocation is automatic: the engine enforces them everywhere, at commit.

Why Declarative Rules Matter:

LogicBank provides 44X code reduction (5 lines vs 220+ procedural) with: - Automatic ordering - add rules anywhere that makes sense, confident they'll run in the right order - Understanding intent - you see WHAT it does (business rules) vs HOW (procedural steps) - Maintenance - no need to find insertion points or trace execution paths

Why the Rules Engine is a Correctness Guarantee:

The "2 critical bugs" that even AI-generated procedural code missed: 1. Changing Order.customer_id - procedural code failed to adjust BOTH the old and new customer balances 2. Changing Item.product_id - procedural code failed to re-copy the unit_price from the new product

These bugs illustrate why declarative rules are mandatory, not optional. Even AI-generated procedural code requires explicit handlers for EVERY possible change path. It's easy to miss: - Foreign key changes affecting multiple parents - Transitive dependencies through multiple tables - Where clause conditions that include/exclude rows

The rules engine eliminates this entire class of bugs by automatically handling all change paths.

The Complete A/B Test:

For the full experiment comparing declarative rules vs AI-generated procedural code, including the 2 bugs Copilot missed and the AI's own analysis of why it failed, see: https://github.com/ApiLogicServer/ApiLogicServer-src/blob/main/api_logic_server_cli/prototypes/basic_demo/logic/procedural/declarative-vs-procedural-comparison.md

Discovery Systems

IMPORTANT: The project uses automated discovery systems that:

Logic Discovery:
  1. Automatically loads business logic from logic/logic_discovery/*.py
     - CRITICAL: Always create separate files named for each use case (e.g., check_credit.py, app_integration.py)
     - Never put multiple use cases in use_case.py - that file is for templates/examples only
  2. Discovers rules at startup via logic/logic_discovery/auto_discovery.py
  3. No manual rule loading required - the discover_logic() function automatically finds and registers rules

API Discovery:
  1. Automatically loads custom APIs from api/api_discovery/[service_name].py
  2. Discovers services at startup via api/api_discovery/auto_discovery.py (called from api/customize_api.py)
  3. No manual API registration required - services are automatically discovered and exposed

Do NOT duplicate by calling them manually. The discovery systems handle this automatically.

Implementation Locations: - Business rules: logic/logic_discovery/use_case.py - Custom APIs: api/api_discovery/[service_name].py - System automatically discovers and loads both

Pattern:

# logic/logic_discovery/use_case.py
def declare_logic():
    """Business logic rules for the application"""
    Rule.sum(derive=Customer.balance, as_sum_of=Order.amount_total)
    Rule.constraint(validate=Customer, as_condition=lambda row: row.balance <= row.credit_limit)
    # ... other rules

PATTERN RECOGNITION for Business Logic: When users provide natural language with multiple use cases like: - "on Placing Orders, Check Credit" + "Use case: App Integration"

ALWAYS create separate files: - logic/logic_discovery/check_credit.py - for credit checking rules - logic/logic_discovery/app_integration.py - for integration rules

NEVER put everything in use_case.py - that defeats the discovery system purpose.
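
For the two use cases above, the resulting layout looks roughly like this (a structural sketch; the Kafka call itself is elided because the integration details belong to the training files):

# logic/logic_discovery/check_credit.py - Check Credit rules (see the rules sketch earlier in this file)
def declare_logic():
    ...

# logic/logic_discovery/app_integration.py - App Integration events
from logic_bank.logic_bank import Rule
from database import models

def declare_logic():
    def order_shipped(row: models.Order, old_row: models.Order, logic_row):
        """Fires at commit; simplified check for date_shipped transitioning to non-null"""
        if row.date_shipped is not None and old_row.date_shipped is None:
            logic_row.log("Order shipped - Kafka send goes here (see integration training docs)")

    Rule.commit_row_event(on_class=models.Order, calling=order_shipped)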

MCP Integration

You (GitHub Copilot) can serve as an MCP client to query and update database entities using natural language!

MCP Discovery Endpoint (CRITICAL)

ALWAYS start with the standard MCP discovery endpoint:

curl -X GET "http://localhost:5656/.well-known/mcp.json"

This endpoint returns: - Available resources - Customer, Order, Item, Product, etc. - Supported methods - GET, PATCH, POST, DELETE per resource - Filterable fields - Which attributes can be used in filters - Base URL and paths - Resource endpoints like /Customer, /Order - Learning prompts - Instructions for MCP clients (fan-out patterns, email handling, response format)

MCP Discovery Pattern: 1. First: Query /.well-known/mcp.json to discover available resources 2. Then: Use discovered schema to construct API calls 3. Always: Follow JSON:API format for CRUD operations
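
A minimal Python sketch of this discover-then-call pattern (the keys inside mcp.json are not assumed here; inspect the discovery document, then use the JSON:API endpoints it lists):

# Sketch: discover resources, then call a listed endpoint (JSON:API)
import requests

BASE = "http://localhost:5656"

discovery = requests.get(f"{BASE}/.well-known/mcp.json").json()
print(list(discovery))                       # see which resources/methods/filters are advertised

token = "..."                                # JWT from /api/auth/login (see step 1 below)
customers = requests.get(f"{BASE}/api/Customer/",
                         headers={"Authorization": f"Bearer {token}"}).json()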

When users request data operations:

  1. Authenticate first - Login to obtain JWT token:

    curl -X POST http://localhost:5656/api/auth/login \
      -H "Content-Type: application/json" \
      -d '{"username":"admin","password":"p"}'
    

  2. Execute operations - Use Bearer token for API calls:

    # Read: List entities
    curl -X GET http://localhost:5656/api/Customer/ \
      -H "Authorization: Bearer {token}"
    
    # Update: Change attributes (JSON:API format)
    curl -X PATCH http://localhost:5656/api/Customer/ALFKI/ \
      -H "Authorization: Bearer {token}" \
      -H "Content-Type: application/vnd.api+json" \
      -d '{"data": {"type": "Customer", "id": "ALFKI", "attributes": {"CreditLimit": 5000}}}'
    

  3. Handle constraint violations correctly - Error code 2001 is SUCCESS! - When LogicBank prevents invalid updates, report as: "✅ Business logic working - constraint prevented invalid operation" - Example: "balance (2102.00) exceeds credit (1000.00)" = logic is protecting data integrity

Natural Language → API Translation: - "List customers" → GET /api/Customer/ - "Show customer ALFKI" → GET /api/Customer/ALFKI/ - "Set ALFKI credit to 5000" → PATCH /api/Customer/ALFKI/ with CreditLimit - "What's ALFKI's balance?" → GET /api/Customer/ALFKI/ then extract Balance

Key Principle: Constraint violations (code 2001) demonstrate that declarative business logic is working correctly - celebrate these as successes, not failures!
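
For example, a client-side check might look like this (a sketch; the exact JSON:API error payload should be inspected rather than assumed, so this simply searches the response text):

# Sketch: report constraint rejections (code 2001) as the logic working, not as a client failure
import requests

token = "..."                                # JWT from /api/auth/login
resp = requests.patch("http://localhost:5656/api/Customer/ALFKI/",
                      headers={"Authorization": f"Bearer {token}",
                               "Content-Type": "application/vnd.api+json"},
                      json={"data": {"type": "Customer", "id": "ALFKI",
                                     "attributes": {"CreditLimit": 10}}})
if resp.ok:
    print("✅ Update accepted")
elif "2001" in resp.text or "exceeds credit" in resp.text:
    print("✅ Business logic working - constraint prevented invalid operation:", resp.text)
else:
    resp.raise_for_status()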

See Eval-MCP_Copilot_Integration.md for authentication workflows, JSON:API formats, and architecture details.

API Interaction Best Practices

CRITICAL: Always Use API, Not Direct Database Access

When users request data operations (read, update, create, delete), ALWAYS use the REST API instead of direct database queries:

Correct Approach - Use API:

# Simple, readable commands that trigger business logic
curl 'http://localhost:5656/api/Customer/?page[limit]=100'
curl -X PATCH 'http://localhost:5656/api/Item/2/' \
  -H 'Content-Type: application/vnd.api+json' \
  -d '{"data": {"type": "Item", "id": "2", "attributes": {"quantity": 100}}}'

Wrong Approach - Direct Database:

# DON'T use sqlite3 commands for data operations
sqlite3 database/db.sqlite "UPDATE item SET quantity=100 WHERE id=2"

Why API is Required: 1. Business Logic Execution - Rules automatically fire (calculations, validations, constraints) 2. Data Integrity - Cascading updates handled correctly 3. Audit Trail - Operations logged through proper channels 4. Security - Authentication/authorization enforced 5. Testing Reality - Tests actual system behavior

Server Startup for API Operations:

When server needs to be started for API operations:

# Option 1: Background process (for interactive testing)
python api_logic_server_run.py &
sleep 5  # Wait for full startup (not 3 seconds - too short!)

# Option 2: Use existing terminal/process
# Check if already running: lsof -i :5656

Common Mistakes to Avoid:

  1. Insufficient startup wait: sleep 3 often fails - ✅ Use: sleep 5 or check with curl in retry loop

  2. Complex inline Python: Piping JSON to python3 -c with complex list comprehensions - ✅ Use: Simple curl commands, pipe to jq for filtering, or save to file first

  3. Database queries for CRUD: Using sqlite3 commands - ✅ Use: API endpoints that trigger business logic
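
For mistake #1 above, a small readiness loop is more reliable than any fixed sleep (a sketch in Python; the same idea works as a curl retry loop in the shell):

# Sketch: wait for the API to answer before issuing test traffic (replaces a fixed sleep)
import time
import requests

def wait_for_server(url="http://localhost:5656/api", timeout=30):
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            if requests.get(url, timeout=2).status_code < 500:
                return True
        except requests.RequestException:
            pass                      # not up yet - keep polling
        time.sleep(1)
    raise TimeoutError(f"Server at {url} did not come up within {timeout}s")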

Simple API Query Patterns:

# Get all records (simple, reliable)
curl 'http://localhost:5656/api/Customer/'

# Get specific record by ID
curl 'http://localhost:5656/api/Customer/1/'

# Get related records (follow relationships)
curl 'http://localhost:5656/api/Customer/1/OrderList'

# Filter results (use bracket notation)
curl 'http://localhost:5656/api/Customer/?filter[name]=Alice'

# Limit results
curl 'http://localhost:5656/api/Customer/?page[limit]=10'

Parsing JSON Responses:

# Simple: Pipe to python -m json.tool for pretty printing
curl 'http://localhost:5656/api/Customer/' | python3 -m json.tool

# Better: Use jq if available
curl 'http://localhost:5656/api/Customer/' | jq '.data[] | {id, name: .attributes.name}'

# Alternative: Save to file first, then parse
curl 'http://localhost:5656/api/Customer/' > customers.json
python3 -c "import json; data=json.load(open('customers.json')); print(data['data'][0])"
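
The same queries work directly from Python if you prefer requests over curl (a short sketch; add an Authorization header when security is enabled):

# Sketch: query and parse JSON:API responses from Python
import requests

resp = requests.get("http://localhost:5656/api/Customer/",
                    params={"filter[name]": "Alice", "page[limit]": 10})   # bracket filter, NOT OData
resp.raise_for_status()
for row in resp.json()["data"]:              # JSON:API: records live under "data"
    print(row["id"], row["attributes"].get("name"))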

Key Principle: The API is not just a convenience - it's the only correct way to interact with data because it ensures business logic executes properly.

Automated Testing

⚠️ BEFORE Creating Tests:

STOP ✋
READ Eval-testing.md FIRST (555 lines - comprehensive guide)

This contains EVERY bug pattern from achieving 11/11 test success:
- Rule #0: Test Repeatability (timestamps for uniqueness)
- Rule #0.5: Behave Step Ordering (specific before general)
- Top 5 Critical Bugs (common AI mistakes)
- Filter format: filter[column]=value (NOT OData)
- No circular imports (API only)
- Null-safe constraints
- Fresh test data (timestamps for uniqueness)

THEN create tests following patterns exactly.

Why This Matters: AI-generated tests fail 80% of the time without reading this guide first. The training material documents every common mistake (circular imports, wrong filter format, null-unsafe constraints, step ordering, etc.) with exact fixes. This guide achieved 11/11 test success (100%) and contains all discovered patterns.

Eval-testing.md explains how to create Behave tests from declarative rules, execute test suites, and generate automated documentation with complete logic traceability.

Key capabilities: - Create tests from rules - Analyze declarative rules to generate appropriate test scenarios - Execute test suite - Run all tests with one command (Launch Configuration: "Behave Run") - Generate documentation - Auto-create wiki reports showing requirements, tests, rules used, and execution traces - Requirements traceability - Complete chain from business requirement → test → declarative rule → execution log

The Innovation: Unlike traditional testing, Behave Logic Reports show which declarative rules fired during each test, providing complete transparency from requirement to execution. This extends the 44X advantage to testing - tests verify "what" (business rules), not "how" (procedural code).

See test/api_logic_server_behave/ for examples and published report.

Common Mistakes to Avoid (from testing.md):
  1. ❌ Wrong filter: filter="name eq 'Alice'" → ✅ Use: filter[name]="Alice"
  2. ❌ Importing logic/database modules → ✅ Import only: behave, requests, test_utils
  3. ❌ Unsafe constraints: row.x <= row.y → ✅ Use: row.x is None or row.y is None or row.x <= row.y
  4. ❌ Step execution: step_impl.execute_step() → ✅ Use: context.execute_steps('when Step')
  5. ❌ Reusing test data → ✅ Create fresh with: f"{name} {int(time.time()*1000)}"

For detailed test creation patterns, see Eval-testing.md which documents all critical rules including Rule #0.5 (Behave Step Ordering).
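
To make those rules concrete, here is a condensed Behave step sketch (the step text, attributes, and endpoint are illustrative; read Eval-testing.md before writing real steps):

# features/steps/sample_steps.py - illustrative sketch only; follow Eval-testing.md patterns
import time
import requests
from behave import when, then            # import only behave/requests/test_utils - no logic/database imports

BASE_URL = "http://localhost:5656/api"

@when('a customer named "{name}" is created')        # hypothetical step text
def step_impl(context, name):
    unique_name = f"{name} {int(time.time()*1000)}"  # fresh test data - never reuse rows
    context.customer_name = unique_name
    context.response = requests.post(f"{BASE_URL}/Customer/",
                                     headers={"Content-Type": "application/vnd.api+json"},
                                     json={"data": {"type": "Customer",
                                                    "attributes": {"name": unique_name,
                                                                   "credit_limit": 1000}}})

@then('the customer can be found by name')
def step_impl(context):
    assert context.response.ok, context.response.text
    resp = requests.get(f"{BASE_URL}/Customer/",
                        params={"filter[name]": context.customer_name})   # bracket filter, NOT OData
    assert resp.json()["data"], f"customer {context.customer_name} not found"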

Critical Learnings: Behave Logic Report Generation

PROBLEM: Reports not showing logic details for test scenarios.

ROOT CAUSES DISCOVERED:

  1. Empty behave.log (Most Common Issue) - ❌ Running: python behave_run.py - ✅ Must run: python behave_run.py --outfile=logs/behave.log - Without --outfile, behave.log remains empty (0 bytes) and report has no content

  2. Scenario Name Mismatch - Log files must match scenario names from .feature file - ❌ Using custom names: scenario_name = f'B2B Order - {quantity} {product_name}' - ✅ Use actual scenario: scenario_name = context.scenario.name - Report generator truncates to 25 chars and looks for matching .log files

  3. Report Generator Logic Bug - Original code only showed logic when blank line followed "Then" step - Only worked for ~30% of scenarios (those with blank lines in behave.log) - ✅ Fixed: Trigger logic display when next scenario starts OR at end of file

CORRECT PATTERN FOR WHEN STEPS:

@when('B2B order placed for "{customer_name}" with {quantity:d} {product_name}')
def step_impl(context, customer_name, quantity, product_name):
    """
    Phase 2: CREATE using OrderB2B API - Tests OrderB2B integration
    """
    scenario_name = context.scenario.name  # ← CRITICAL: Use actual scenario name
    test_utils.prt(f'\n{scenario_name}\n', scenario_name)

    # ... test implementation ...

WORKFLOW FOR REPORT GENERATION:

# 1. Run tests WITH outfile to generate behave.log
python behave_run.py --outfile=logs/behave.log

# 2. Generate report (reads behave.log + scenario_logic_logs/*.log)
python behave_logic_report.py run

# 3. View report
open reports/Behave\ Logic\ Report.md

WHAT THE REPORT SHOWS: - Each scenario gets a <details> section with: - Rules Used: Which declarative rules fired (numbered list) - Logic Log: Complete trace showing before→after values for all adjustments - Demonstrates the 44X code reduction by showing rule automation

LOGIC LOG FORMATTING:

When user says "show me the logic log": Display the complete logic execution from the most recent terminal output, showing the full trace from "Logic Phase: ROW LOGIC" through "Logic Phase: COMPLETE" with all row details intact (do NOT use grep commands to extract). Include: - Complete Logic Phase sections (ROW LOGIC, COMMIT LOGIC, AFTER_FLUSH LOGIC, COMPLETE) - All rule execution lines with full row details - "These Rules Fired" summary section - Format as code block for readability

When displaying logic logs to users, format them with proper hierarchical indentation like the debug console (see https://apilogicserver.github.io/Docs/Logic-Debug/):

Logic Phase: ROW LOGIC (session=0x...)
..Item[None] {Insert - client} id: None, order_id: 1, product_id: 6, quantity: 10
..Item[None] {Formula unit_price} unit_price: [None-->] 105.0
....SysSupplierReq[None] {Insert - Supplier AI Request} product_id: 6
....SysSupplierReq[None] {Event - calling AI} chosen_unit_price: [None-->] 105.0
..Item[None] {Formula amount} amount: [None-->] 1050.0
..Item[None] {adjust parent Order.amount_total}
....Order[1] {Update - Adjusting order: amount_total} amount_total: [300.0-->] 1350.0

Key formatting rules: - .. prefix = nesting level (2 dots = parent, 4 dots = child/nested object, 6 dots = deeper nesting) - ONE LINE per rule execution - no line wrapping - Each line shows: Class[id] {action/reason} key_attributes - Value changes shown as: [old_value-->] new_value - Hierarchical indentation (dots) shows call depth and parent-child relationships - Only show relevant attributes, not all row details

EXTRACTING CLEAN LOGIC LOGS:

To get properly formatted logs (one line per rule, no wrapping), use this command:

# Extract clean logic log from server.log
grep -A 100 "Logic Phase:.*ROW LOGIC" server.log | \
  awk -F' row: ' '{print $1}' | \
  grep -E "^\.\.|^Logic Phase:" | \
  head -50

This removes verbose session/row details and prevents line wrapping.

DEBUGGING TIPS:

# Check if behave.log has content
ls -lh logs/behave.log  # Should be several KB, not 0 bytes

# Check if scenario logs exist with correct names
ls logs/scenario_logic_logs/ | head -10

# Count detail sections in report (should equal number of scenarios)
grep -c "<details markdown>" reports/Behave\ Logic\ Report.md

# View a specific scenario's log directly
cat logs/scenario_logic_logs/Delete_Item_Reduces_Order.log

KEY INSIGHT: The report generator uses a two-step process: 1. Reads behave.log for scenario structure (Given/When/Then steps) 2. Matches scenario names to .log files in scenario_logic_logs/ 3. Injects logic details at the right location in the report

If scenario names don't match between behave.log and .log filenames, logic details won't appear!

Adding MCP UI

The API is automatically MCP-enabled. The project includes a comprehensive MCP client executor at integration/mcp/mcp_client_executor.py, but to enable the user interface for MCP requests, you must run this command:

genai-logic genai-add-mcp-client

CRITICAL DISTINCTION: - integration/mcp/mcp_client_executor.py = MCP processing engine (already exists) - genai-logic genai-add-mcp-client = Command to add SysMcp table and UI infrastructure (must be run)

When users ask to "Create the MCP client executor", they mean run the genai-logic genai-add-mcp-client command, NOT recreate the existing processing engine.

This command adds: 1. SysMcp table for business users to enter natural language requests 2. Admin App integration for MCP request interface 3. Database infrastructure for MCP client operations

Configuring Admin UI

This is built when the project is created - no need to add it. Customize by editing the underlying yaml.

# Edit: ui/admin/admin.yaml
resources:
  Customer:
    attributes:
      - name: CompanyName
        search: true
        sort: true

Create and Customize React Apps

REQUIRED METHOD: Complete customization is provided by generating a React Application (requires OpenAI key, Node):

DO NOT use create-react-app or npx create-react-app. ALWAYS use this command instead:

# Create: ui/admin/my-app-name
genai-logic genai-add-app --app-name=my-app-name --vibe

Then run npm install and npm start.

Temporary restriction: security must be disabled.

IMPORTANT: When working with React apps, ALWAYS read docs/training first. It documents the data access provider configuration built when the project was created. The data provider handles JSON:API communication and record context - ignore this at your peril.

Customize using Copilot chat, guided by docs/training.

React Component Development Best Practices

Critical Pattern for List/Card Views: When implementing custom views (like card layouts) in React Admin components:

  1. Use useListContext() correctly: Access data as an array, not as an object with ids

    // CORRECT Pattern:
    const { data, isLoading } = useListContext();
    return (
      <Grid container spacing={2}>
        {data?.map(record => (
          <Grid item key={record.id}>
            <CustomCard record={record} />
          </Grid>
        ))}
      </Grid>
    );
    
    // AVOID: Trying to use data[id] pattern - this is for older React Admin versions
    

  2. Component Naming Consistency: Ensure component names match their usage in JSX - mismatched names cause runtime errors.

  3. Simple Error Handling: Use straightforward loading states rather than complex error checking:

    if (isLoading) return <div>Loading...</div>;
    

  4. 🃏 Card View Action Links (Show, Edit, Delete) IMPORTANT: All card views (e.g., Product cards) must include action links or buttons for Show, Edit, and Delete for each record (not just the display fields), matching the functionality of the table/list view.

Common Mistakes to Avoid: - Using { data, ids } destructuring and trying to map over ids - this pattern is outdated - Creating complex error handling when simple loading checks suffice - Not referencing existing working implementations before creating new patterns

Security - Role-Based Access Control

Configure:

genai-logic add-auth --provider-type=sql --db-url=
genai-logic add-auth --provider-type=sql --db_url=postgresql://postgres:p@localhost/authdb

genai-logic add-auth --provider-type=keycloak --db-url=localhost
genai-logic add-auth --provider-type=keycloak --db-url=hardened

genai-logic add-auth --provider-type=None # to disable

Keycloak quick start (more information in the Keycloak docs linked below):

cd devops/keycloak
docker compose up
genai-logic add-auth --provider-type=keycloak --db-url=localhost

For more on KeyCloak: https://apilogicserver.github.io/Docs/Security-Keycloak/

Declaration:

# Edit: security/declare_security.py
Grant(on_entity=Customer, to_role=sales, filter=lambda: Customer.SalesRep == current_user())

Testing with Security Enabled

CRITICAL: When SECURITY_ENABLED=True, test code must obtain and include JWT authentication tokens.

Pattern for test steps:

from pathlib import Path
import os
from dotenv import load_dotenv

# Load config to check SECURITY_ENABLED
config_path = Path(__file__).parent.parent.parent.parent.parent / 'config' / 'default.env'
load_dotenv(config_path)

# Cache for auth token (obtained once per test session)
_auth_token = None

def get_auth_token():
    """Login and get JWT token if security is enabled"""
    global _auth_token

    if _auth_token is not None:
        return _auth_token

    # Login with default admin credentials
    login_url = f'{BASE_URL}/api/auth/login'
    login_data = {'username': 'admin', 'password': 'p'}

    response = requests.post(login_url, json=login_data)
    if response.status_code == 200:
        _auth_token = response.json().get('access_token')
        return _auth_token
    else:
        raise Exception(f"Login failed: {response.status_code}")

def get_headers():
    """Get headers including auth token if security is enabled"""
    security_enabled = os.getenv('SECURITY_ENABLED', 'false').lower() not in ['false', 'no']

    headers = {'Content-Type': 'application/json'}

    if security_enabled:
        token = get_auth_token()
        if token:
            headers['Authorization'] = f'Bearer {token}'

    return headers

# Use in all API requests
response = requests.post(url=api_url, json=data, headers=get_headers())

Key points: - Tests DO NOT automatically include auth headers - you must code this pattern - Token is cached to avoid repeated logins during test session - Pattern works for both SECURITY_ENABLED=True and SECURITY_ENABLED=False - See test/api_logic_server_behave/features/steps/order_processing_steps.py for complete example

Adding Custom API Endpoints

For simple endpoints:

# Edit: api/customize_api.py
@app.route('/api/custom-endpoint')
def my_endpoint():
    return {"message": "Custom endpoint"}

Creating Advanced B2B Integration APIs with Natural Language

Users can create sophisticated custom API endpoints for B2B integration using natural language. The system automatically generates and discovers:

  1. Custom API Service (api/api_discovery/[service_name].py) - automatically discovered by api/api_discovery/auto_discovery.py
  2. Row Dict Mapper (integration/row_dict_maps/[MapperName].py)

Example Implementation: This project includes a working OrderB2B API that demonstrates the complete pattern: - API: api/api_discovery/order_b2b_service.py - Mapper: integration/row_dict_maps/OrderB2BMapper.py - Test Cases: test_requests.http and test_b2b_order_api.py

Pattern Recognition: When users describe B2B integration scenarios involving: - External partner data formats (✅ Account → Customer lookup) - Field aliasing/renaming (✅ "Name" → Product.name, "QuantityOrdered" → Item.quantity) - Nested data structures (✅ Items array handling) - Lookups and joins (✅ Customer by name, Product by name) - Data transformation (✅ External format to internal models)

Generate both the API service and corresponding Row Dict Mapper following these patterns:

API Service Template (api/api_discovery/[service_name].py) - Keep it concise:

from flask import request
from safrs import jsonapi_rpc
import safrs
from integration.row_dict_maps.OrderB2BMapper import OrderB2BMapper
import logging

app_logger = logging.getLogger("api_logic_server_app")

def add_service(app, api, project_dir, swagger_host: str, PORT: str, method_decorators = []):
    api.expose_object(OrderB2BEndPoint)

class OrderB2BEndPoint(safrs.JABase):
    @classmethod
    @jsonapi_rpc(http_methods=["POST"])
    def OrderB2B(self, *args, **kwargs):  # yaml comment => swagger description
        """ # yaml creates Swagger description
            args :
                data:
                    Account: "Alice"
                    Notes: "Rush order for Q4 promotion"
                    Items :
                    - Name: "Widget"
                      QuantityOrdered: 5
                    - Name: "Gadget"
                      QuantityOrdered: 3
            ---

        Creates B2B orders from external partner systems with automatic lookups and business logic.
        Features automatic customer/product lookups by name, unit price copying, 
        amount calculations, customer balance updates, and credit limit validation.
        """
        db = safrs.DB
        session = db.session

        try:
            mapper_def = OrderB2BMapper()
            request_dict_data = request.json["meta"]["args"]["data"]

            app_logger.info(f"OrderB2B: Processing order for account: {request_dict_data.get('Account')}")

            sql_alchemy_row = mapper_def.dict_to_row(row_dict=request_dict_data, session=session)

            session.add(sql_alchemy_row)
            session.flush()  # Ensures ID is generated before accessing it

            order_id = sql_alchemy_row.id
            customer_name = sql_alchemy_row.customer.name if sql_alchemy_row.customer else "Unknown"
            item_count = len(sql_alchemy_row.ItemList)

            return {
                "message": "B2B Order created successfully", 
                "order_id": order_id,
                "customer": customer_name,
                "items_count": item_count
            }

        except Exception as e:
            app_logger.error(f"OrderB2B: Error creating order: {str(e)}")
            session.rollback()
            return {"error": "Failed to create B2B order", "details": str(e)}, 400

IMPORTANT: The project includes a working B2B integration example: - API Endpoint: OrderB2BEndPoint.OrderB2B - Creates orders from external partner format - Error Handling: Proper exception handling with session rollback for failed operations - Business Logic: Automatic inheritance of all LogicBank rules (pricing, calculations, validation) - Testing: Comprehensive test suite demonstrating success and error scenarios - Documentation: Professional Swagger docs with YAML examples using real database data

When creating new B2B APIs, follow this proven pattern: - Use session.flush() when you need generated IDs before commit - Include proper error handling with try/catch and session.rollback() - Provide meaningful success messages with key information (ID, customer, item count) - Use YAML format in docstrings for clean Swagger documentation - Always use actual database data in examples (check with sqlite3 queries)

AI Anti-Patterns to Avoid: - Don't assume CRUD operations: If user asks for "create order API", only implement POST/insert (ask if they need GET/PUT/DELETE) - Don't add "enterprise" features unless specifically requested: - Detailed logging/monitoring beyond basic debugging - Complex response objects with metadata - Extensive documentation/comments - HTTP status code handling beyond defaults - Don't import unused libraries: Skip logging, jsonify, etc. unless actually needed - Don't over-engineer: Simple success messages beat complex response objects

Swagger Examples Must Use Real Data: When creating YAML docstring examples, use actual database data. Check first:

sqlite3 database/db.sqlite "SELECT name FROM customer LIMIT 3;"
sqlite3 database/db.sqlite "SELECT name FROM product LIMIT 3;"

Getting Sample Data for Tests:

# Check actual customer names
sqlite3 database/db.sqlite "SELECT name FROM customer LIMIT 5;"

# Check actual product names  
sqlite3 database/db.sqlite "SELECT name FROM product LIMIT 5;"
Never assume data from other databases (like Northwind's "ALFKI") - always use the current project's actual data.

Row Dict Mapper Template (integration/row_dict_maps/[MapperName].py):

from integration.system.RowDictMapper import RowDictMapper
from database import models

class OrderB2BMapper(RowDictMapper):
    def __init__(self):
        """
        B2B Order API Mapper for external partner integration.

        Maps external B2B format to internal Order/Item structure:
        - 'Account' field maps to Customer lookup by name
        - 'Notes' field maps directly to Order notes
        - 'Items' array with 'Name' and 'QuantityOrdered' maps to Item records
        """
        mapper = super(OrderB2BMapper, self).__init__(
            model_class=models.Order,
            alias="Order",
            fields=[
                (models.Order.notes, "Notes"),
                # customer_id will be set via parent lookup
                # amount_total will be calculated by business logic
                # CreatedOn will be set by business logic
            ],
            parent_lookups=[
                (models.Customer, [(models.Customer.name, 'Account')])
            ],
            related=[
                ItemB2BMapper()
            ]
        )
        return mapper

class ItemB2BMapper(RowDictMapper):
    def __init__(self):
        """
        B2B Item Mapper for order line items.

        Maps external item format to internal Item structure:
        - 'Name' field maps to Product lookup by name
        - 'QuantityOrdered' maps to Item quantity
        """
        mapper = super(ItemB2BMapper, self).__init__(
            model_class=models.Item,
            alias="Items",
            fields=[
                (models.Item.quantity, "QuantityOrdered"),
                # unit_price will be copied from product by business logic
                # amount will be calculated by business logic (quantity * unit_price)
            ],
            parent_lookups=[
                (models.Product, [(models.Product.name, 'Name')])
            ],
            isParent=False
        )
        return mapper

Key Components for Natural Language Processing: - Field Aliasing: (models.Table.field, "ExternalName") - Parent Lookups: (models.ParentTable, [(models.ParentTable.lookup_field, 'ExternalKey')]) - Related Entities: Nested RowDictMapper instances for child records - Automatic Joins: System handles foreign key relationships automatically

Business Logic Integration: All generated APIs automatically inherit the full LogicBank rule engine through the discovery systems (logic/logic_discovery/auto_discovery.py and api/api_discovery/auto_discovery.py), ensuring data integrity, calculations, and constraints without additional code. Rules are automatically loaded from logic/logic_discovery/use_case.py and APIs from api/api_discovery/[service_name].py at startup.

Testing B2B APIs: The project includes comprehensive testing infrastructure: - REST Client Tests: test_requests.http - Test directly in VS Code with REST Client extension - Python Test Suite: test_b2b_order_api.py - Automated testing with requests library - Swagger UI: http://localhost:5656/api - Interactive API testing and documentation - Sample Requests: sample_b2b_request.json - Copy-paste examples for testing

Working Example Results: The OrderB2B API demonstrates: - ✅ External format mapping (Account → Customer, Name → Product) - ✅ Automatic lookups with error handling (missing customer/product detection) - ✅ Business logic inheritance (unit price copying, amount calculations, balance updates) - ✅ Professional Swagger documentation with YAML examples - ✅ Complete test coverage (success cases and error scenarios)

Customize Models - Add Tables, Attributes

To add tables / columns to the database (highly impactful - request permission):

  1. Update database/models.py with new models/columns
  2. Generate and apply Alembic migration (see database/alembic/readme.md):
    cd database
    alembic revision --autogenerate -m "Description of changes"
    
  3. CRITICAL - Edit the migration file:
     - alembic --autogenerate detects ALL differences between models.py and the database
     - Open the generated file in database/alembic/versions/
     - Remove ALL unwanted changes (ALTER TABLE on existing tables)
     - Keep ONLY your intended changes (e.g., CREATE TABLE for new audit table)
     - Simplify the downgrade() function to reverse only your changes
  4. Apply the edited migration:
    alembic upgrade head
    
  5. Offer to update ui/admin/admin.yaml to add the new table or column to the Admin UI.
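
As an illustration of step 3, a hand-trimmed migration that adds a single hypothetical audit table might look roughly like this (revision IDs, table name, and columns are placeholders):

# database/alembic/versions/<revision>_add_audit_table.py - trimmed sketch (keep only intended changes)
from alembic import op
import sqlalchemy as sa

revision = "abc123"          # generated by alembic - shown here only as a placeholder
down_revision = "def456"
branch_labels = None
depends_on = None

def upgrade():
    # Keep ONLY the intended change - the new table; unwanted ALTERs have been removed
    op.create_table(
        "customer_audit",
        sa.Column("id", sa.Integer, primary_key=True),
        sa.Column("customer_id", sa.Integer, sa.ForeignKey("customer.id")),
        sa.Column("old_balance", sa.Numeric),
        sa.Column("new_balance", sa.Numeric),
        sa.Column("created_on", sa.DateTime),
    )

def downgrade():
    # Simplified to reverse only the intended change
    op.drop_table("customer_audit")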

General Migration Notes: - Stop the server before running migrations to avoid database locking - When adding new models, follow existing patterns in models.py - Models should not contain __bind_key__ - USER ACTION REQUIRED: Restart server after migrations

See: https://apilogicserver.github.io/Docs/Database-Changes/#use-alembic-to-update-database-schema-from-model

If altering database/models.py, be sure to follow the patterns shown in the existing models. Note they do not contain a __bind_key__.

Addressing Missing Attributes during logic loading at project startup

First, check for misspellings (attribute names in logic vs database/models.py) and repair them.

If there are no obvious misspellings, ask for permission to add attributes; if granted, proceed as above.

Customize Models - Add Derived attributes

Here is a sample derived attribute, proper_salary:

# add derived attribute: https://github.com/thomaxxl/safrs/blob/master/examples/demo_pythonanywhere_com.py
@add_method(models.Employee)
@jsonapi_attr
def __proper_salary__(self):  # type: ignore [no-redef]
    import database.models as models
    import decimal
    if isinstance(self, models.Employee):
        rtn_value = self.Salary
        if rtn_value is None:
          rtn_value = decimal.Decimal('0')
        rtn_value = decimal.Decimal('1.25') * rtn_value
        self._proper_salary = int(rtn_value)
        return self._proper_salary
    else:
        rtn_value = decimal.Decimal('0')
        self._proper_salary = int(rtn_value)
        return self._proper_salary

@add_method(models.Employee)
@__proper_salary__.setter
def _proper_salary(self, value):  # type: ignore [no-redef]
    self._proper_salary = value
    print(f'_proper_salary={self._proper_salary}')
    pass

models.Employee.ProperSalary = __proper_salary__

When customizing SQLAlchemy models:

  • Don't use direct comparisons with database fields in computed properties
  • Convert to Python values first using float(), int(), str()
  • Use property() function instead of @jsonapi_attr for computed properties
  • Always add error handling for type conversions
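
A minimal sketch that follows those guidelines, assuming an Employee model with a Salary column (the property name is illustrative):

# Sketch: computed property via property(), with type conversion and error handling
def _proper_salary_value(self):
    """Derived value: 1.25 * Salary, computed on plain Python floats"""
    try:
        salary = float(self.Salary) if self.Salary is not None else 0.0   # convert before doing math
    except (TypeError, ValueError):
        salary = 0.0
    return round(salary * 1.25, 2)

models.Employee.ProperSalaryValue = property(_proper_salary_value)        # attach as a read-only property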

Adding events

LogicBank rules are the preferred approach to logic, but you will sometimes need to add events. This is done in logic/declare_logic.py (important: the function MUST come first):

# Example: Log email activity after SysEmail is committed

def sys_email_after_commit(row: models.SysEmail, old_row: models.SysEmail, logic_row: LogicRow):
    """
    After SysEmail is committed, log 'email sent' 
    unless the customer has opted out
    """
    if not row.customer.email_opt_out:
        logic_row.log(f"📧 Email sent to {row.customer.name} - Subject: {row.subject}")
    else:
        logic_row.log(f"🚫 Email blocked for {row.customer.name} - Customer opted out")

Rule.commit_row_event(on_class=SysEmail, calling=sys_email_after_commit)

LogicBank event types include:
  - Rule.commit_row_event() - fires after transaction commits
  - Rule.after_insert() - fires after row insert
  - Rule.after_update() - fires after row update
  - Rule.after_delete() - fires after row delete

All events receive (row, old_row, logic_row) parameters and should use logic_row.log() for logging.

📁 Key Directories

  • logic/ - Business rules (declarative)
  • api/ - REST API customization
  • security/ - Authentication/authorization
  • database/ - Data models and schemas
  • ui/admin/ - Admin interface configuration
  • ui/app/ - Alternative Angular admin app

💡 Helpful Context

  • This uses Flask + SQLAlchemy + SAFRS for JSON:API
  • Admin UI is React-based with automatic CRUD generation
  • Business logic uses LogicBank (declarative rule engine)
  • Everything is auto-generated from database introspection
  • Focus on CUSTOMIZATION, not re-creation
  • Use CoPilot to assist with logic translation and API generation