Internals - LLM
WebGenAI - Projects from Prompts
LLM (Large Language Model) usage is currently based on OpenAI, for:
- Creating SQLAlchemy data models
- Translating NL (Natural Language) logic into Python rules
LLM Technology Usage
Learning
When you install the Manager, it creates the structures shown below. These are used to "train" ChatGPT about how to create models, and how to translate logic.
Invocation
The `api_logic_server_cli/genai` files are called by the CLI (which is itself called by WebGenAI) to create projects, iterate them, repair them, and so forth. `api_logic_server_cli/genai/genai_svcs.py` is a collection of common services used by all of them, including the function `call_chatgpt()` shown below.
ChatGPT Results: WGResult
Initially, we called ChatGPT and got the standard response, which in our case was a text file of code. We parsed that to find the code we wanted, and merged it into the project.
That proved to be an unstable choice, so we now train ChatGPT to return smaller code snippets, in JSON format. This format is defined by `WGResult`. Note that the `WGResult` objects are defined both in the learnings and in `genai_svcs.py`.
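The authoritative `WGResult` schema lives in `genai_svcs.py`; as a rough illustration only (the top-level keys `models`, `rules`, and `test_data_rows` are the ones named in this document, but the per-snippet fields and example content below are invented), a snippet-oriented response might look like:

```python
# Illustrative sketch only -- the authoritative WGResult schema is defined
# in genai_svcs.py.  The top-level sections (models, rules, test_data_rows)
# are those named in this document; the snippet fields are assumptions.
wg_result = {
    "models": [
        {   # one SQLAlchemy class per entry, as a small code snippet
            "classname": "Customer",
            "code": "class Customer(Base):\n    __tablename__ = 'customer'",
        }
    ],
    "rules": [
        {   # one LogicBank rule per entry
            "code": "Rule.sum(derive=Customer.balance, as_sum_of=Order.amount_total)",
        }
    ],
    "test_data_rows": [
        {"code": "customer_1 = Customer(name='Alice', balance=0)"},
    ],
}

# downstream code (eg, fix_and_write_model_file) consumes each section separately
for section in ("models", "rules", "test_data_rows"):
    assert section in wg_result
```

Returning small, independently parseable snippets like these is what makes merging into the project more stable than scraping one large text response.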
`docs`: requests and responses

Requests and responses are stored in the project, so they can be used for subsequent requests and error correction. They are stored in the location noted below (both the `docs` directory and its sub-directories):
Observe that a typical call to ChatGPT is a "conversation": a list of `messages` (requests and responses) provided as an argument to ChatGPT.
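As an illustration (the content strings below are invented; see `genai_svcs.call_chatgpt()` for the real assembly), such a conversation might be structured as:

```python
# Sketch of the "conversation" structure: a list of role-tagged messages
# passed to ChatGPT on each call.  The content strings are invented
# examples, not the actual training inserts.
learnings = "You are a data modeling expert; respond with WGResult JSON."

messages = [
    {"role": "system", "content": learnings},                  # training inserts first
    {"role": "user", "content": "Create customers and orders."},
    # on iteration, earlier exchanges are replayed so ChatGPT sees history:
    # {"role": "assistant", "content": "<previous WGResult json>"},
    # {"role": "user", "content": "add a notes column to Order"},
]

assert messages[0]["role"] == "system"
```

Replaying stored requests and responses as additional messages is what lets a saved `docs` conversation drive both iteration and error correction.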
GenAI Project Creation Overview
GenAI is a wrapper around the existing API Logic Server project-creation flow. API Logic Server already knows how to build an API + Admin UI from a database (or a SQLAlchemy model). GenAI's twist is to let you start with a natural-language prompt: it has ChatGPT describe the model, feeds that model to the standard API Logic Server pipeline, and keeps trying until a compilable model appears. The dominant design constraint is that LLM output can be wrong, so GenAI treats every request as a three-attempt mission with automatic retries, diagnostics capture, and manual escape hatches.
Execution Stack
- CLI (`api_logic_server_cli/cli.py`): the `genai` click command collects prompt options (`--using`, `--retries`, `--repaired-response`, etc).
- Retry wrapper (`genai_cli_with_retry` in `genai.py`): for each attempt it spins up a standard project run, catches failures, snapshots diagnostics, and decides whether to try again.
- GenAI core (`GenAI.create_db_models`): called from `ProjectRun`; it asks ChatGPT for a model, fixes obvious issues, writes `system/genai/temp/create_db_models.py`, and records `post_error` if the model is unusable.
- Project runner (`ProjectRun.create_project` in `api_logic_server.py`): the same engine used for database- and model-driven starts; it compiles the generated model, creates the SQLite database, scaffolds the project, and merges logic comments.
Each layer reports errors upward; the retry wrapper decides whether to try again, toggle safeguards, or stop.
Flow Diagram
```
┌─────────────┐
│     CLI     │
│   genai()   │
└──────┬──────┘
       │ (1)
       ▼
┌─────────────────────────────┐
│  GenAI module               │
│  genai_cli_with_retry()     │
│  ┌───────────────────────┐  │
│  │ attempt loop (up to 3)│  │
│  └───────────┬───────────┘  │
└──────────────┼──────────────┘
               │ (2)
               ▼
┌──────────────────────────────┐
│ api_logic_server.ProjectRun  │
│  ↳ GenAI.create_db_models()  │
│  ↳ create_db_from_model.py   │
└──────────────┬───────────────┘
               │ (3)
               ▼
┌──────────────┐
│  Generated   │
│   project    │
└──────────────┘
```
Inside a Single Attempt
- Resolve the prompt: `GenAI.get_prompt_messages()` reads the `--using` argument (text, `.prompt` file, or conversation directory) and prepends any training inserts.
- Call ChatGPT: `genai_svcs.call_chatgpt()` returns JSON with `models`, `rules`, and `test_data_rows` (or, with `--repaired-response`, a saved JSON file is used instead of an API call).
- Fix and emit the model: `genai_svcs.fix_and_write_model_file()` cleans the JSON, writes `create_db_models.py`, and populates `post_error` if the response still can't compile (eg, tables instead of classes).
- Persist diagnostics: `save_prompt_messages_to_system_genai_temp_project()` copies prompts, responses, and the generated model into `system/genai/temp/<project>`.
- Hand off to API Logic Server: `ProjectRun` executes `create_db_models.py`, builds the SQLite database, scaffolds the project, and merges the prompt logic comments into `declare_logic.py`.
During this flow, no exceptions are raised inside GenAI; instead, `self.post_error` carries the message back to `ProjectRun`, which raises when it is non-empty so the retry loop can react.
Three-Attempt Strategy
The retry logic in `genai_cli_with_retry()` keeps project creation resilient:
- Loop control: the CLI supplies `retries` (defaults to three). The wrapper keeps looping until one attempt finishes or the budget is exhausted.
- Failure detection: any exception from `ProjectRun.create_project()` or a non-empty `gen_ai.post_error` marks the attempt as failed.
- Automatic diagnostics: work files are copied to `system/genai/temp/<project>_<try#>` before the next attempt; in-place conversation folders have the latest `.response` removed so the user can iteratively repair the conversation.
- Adaptive retry: if the failure mentions "Could not determine join condition", the next run toggles `use_relns=False` (foreign keys remain; only relationship inference is skipped).
- Exit conditions: success breaks the loop; persistent failure logs and exits with status 1 so calling automation can react.
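The loop above can be sketched as follows. This is a simplified stand-in, not the real `genai_cli_with_retry()`: the names `run_attempt` and `genai_with_retry` are invented, and the failing stub merely simulates a bad first response.

```python
# Minimal sketch of the three-attempt retry loop described above.
# run_attempt() is an invented stand-in for ProjectRun.create_project();
# here it simulates a first attempt that fails on join-condition inference.
def run_attempt(attempt: int, use_relns: bool) -> None:
    """Stand-in for one project-creation attempt; raises on failure."""
    if attempt < 2 and use_relns:
        raise RuntimeError("Could not determine join condition")

def genai_with_retry(retries: int = 3) -> bool:
    use_relns = True
    for attempt in range(retries):
        try:
            run_attempt(attempt, use_relns)
            return True                   # success: break out of the loop
        except Exception as exc:
            # real code snapshots diagnostics to system/genai/temp/<project>_<try#> here
            if "Could not determine join condition" in str(exc):
                use_relns = False         # adaptive retry: skip relationship inference

    return False                          # budget exhausted -> log and exit status 1

assert genai_with_retry() is True         # second attempt succeeds with use_relns=False
```

Note how the adaptive toggle survives into the next iteration: a join-condition failure changes the inputs of the retry rather than just repeating it.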
This approach mirrors real-world LLM behaviour: one response might be malformed, but a clean run usually appears within three tries, and each failure leaves a breadcrumb trail for debugging.
Note that the same `pr` instance variable is used for all 3 tries, so initialization that would normally occur in `__init__()` is done in `GenAI.create_db_models()`.
Pseudo-code
Note the Key Module Map at the end of `genai.py`:

```python
def key_module_map():
    """ does not execute - strictly to find key modules """
    import api_logic_server_cli.api_logic_server as als
    import api_logic_server_cli.create_from_model.create_db_from_model as create_db_from_model

    genai_cli_with_retry()                      # called from cli.genai for retries
    # try/catch/retry loop!
    als.ProjectRun()                            # calls api_logic_server.ProjectRun
    genai = GenAI(Project())                    # called from api_logic_server.ProjectRun
    genai.__init__()                            # main driver, calls...
    genai.get_prompt_messages()                 # get self.messages from file/dir/text/arg
    genai.fix_and_write_model_file('response_data')  # write create_db_models.py for db creation
    genai.save_files_to_system_genai_temp_project()  # save prompt, response and create_db_models.py
    # returns to api_logic_server, which...
    create_db_from_model.create_db()            # creates create_db_models.sqlite from create_db_models.py
    # creates project from that db; and calls...
    genai.insert_logic_into_created_project()   # merge logic (comments) into declare_logic.py
```
Manual Recovery Hooks
- Repaired responses: run `ApiLogicServer genai --using prompt_dir --repaired-response system/genai/temp/chatgpt_retry.response` after editing the JSON. The retry loop treats this as the last attempt (no additional retries are needed).
- Conversation iteration: reuse the same `--using` directory; GenAI appends numbered prompt/response files so you can evolve a design across attempts.
- Logic review: generated logic is inserted as commented guidance in `declare_logic.py`; keep `--active-rules` disabled until you have reviewed the suggestions.
GenAI Module Files
Core Files
- `genai.py` - Main GenAI class and project creation driver
- `genai_svcs.py` - Internal service routines used by other genai scripts (ChatGPT API calls, model fixing, response processing)
- `genai_utils.py` - Additional CLI functions for GenAI utilities and operations
Specialized Generators
- `genai_react_app.py` - Creates React projects inside an existing GenAI project
- `genai_graphics.py` - Generates graphics and visualizations for GenAI projects
- `genai_logic_builder.py` - Builds and suggests business logic rules
- `genai_mcp.py` - Model Context Protocol integration
Supporting Files
- `client.py` - Client interface utilities
- `json2rules.py` - Converts JSON rule definitions to LogicBank rules
- `genai_fatal_excp.py` - Fatal exception handling for GenAI operations
- `logic_bank_apiX.prompt` - Prompt template for LogicBank API training
Deferred Error Handling
The `post_error` instance variable implements a deferred error-reporting pattern in the GenAI module, allowing the system to detect and recover from common ChatGPT response-formatting issues through the automated retry mechanism.
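A minimal sketch of the pattern, using invented stand-ins (`GenAISketch`, `project_run`) for the real classes: GenAI records the problem instead of raising, and the caller raises when `post_error` is non-empty so the retry loop can react.

```python
# Sketch of the deferred-error pattern: GenAI never raises internally;
# it records the problem in post_error, and the caller (ProjectRun in the
# real code) raises when it is non-empty.  Names here are simplified
# stand-ins, not the actual implementation.
class GenAISketch:
    def __init__(self):
        self.post_error = ""              # empty string means "no error"

    def fix_and_write_model_file(self, response: dict) -> None:
        """Record, rather than raise, a response-formatting problem."""
        if "models" not in response:      # eg, ChatGPT returned tables, not classes
            self.post_error = "response contains no model classes"

def project_run(gen_ai: GenAISketch, response: dict) -> None:
    """Stand-in caller: performs the deferred check and raises on error."""
    gen_ai.fix_and_write_model_file(response)
    if gen_ai.post_error:                 # deferred check, raised by the caller
        raise ValueError(gen_ai.post_error)

try:
    project_run(GenAISketch(), {"tables": []})
except ValueError as exc:
    print(exc)                            # prints: response contains no model classes
```

Deferring the raise to the caller keeps all GenAI code paths running to completion, so diagnostics (prompts, responses, the partial model) are saved even for a failed attempt.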