Architecture
curompt is organized as a modular Go project so that each concern (collecting prompts, evaluating, reporting) can evolve independently.
curompt/
├── cmd/curompt # Cobra entrypoint
├── internal/analyzer # Static analysis (sections, tokens, lint rules)
├── internal/collector # Log/import pipelines
├── internal/evaluator # Provider orchestration & schema fit
├── internal/provider # Claude / Gemini / OpenAI / local adapters
├── internal/reporter # Markdown + structured reports
├── internal/repository # SQLite persistence
└── internal/scorer # Composite scoring engine
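For a rough sense of how the pieces connect, a thin Cobra entrypoint in cmd/curompt could look like the sketch below. The subcommand name and wiring are illustrative assumptions, not the actual source.

package main

import (
    "fmt"
    "os"

    "github.com/spf13/cobra"
)

func main() {
    root := &cobra.Command{
        Use:   "curompt",
        Short: "Collect, evaluate, and report on prompts",
    }
    // The "evaluate" subcommand is a placeholder; the real CLI defines its
    // own commands and flags and delegates to the internal packages.
    root.AddCommand(&cobra.Command{
        Use:   "evaluate",
        Short: "Run analysis, provider evaluation, and scoring",
        RunE: func(cmd *cobra.Command, args []string) error {
            return fmt.Errorf("not implemented in this sketch")
        },
    })
    if err := root.Execute(); err != nil {
        os.Exit(1)
    }
}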
Data flow
- Collector ingests prompt sources (local files, repos, CLI pipes).
- Analyzer runs static heuristics (section detection, duplication, forbidden terms).
- Evaluator optionally calls LLM providers to generate samples, then validates them against schemas.
- Scorer aggregates metrics into headline scores and flags regressions vs. previous runs.
- Reporter renders TUI summaries plus Markdown/JSON outputs for CI or docs.
All results are persisted in SQLite (~/.curompt/db.sqlite by default) so historical comparisons and dashboards can be built externally.
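The following sketch shows how those stages could be wired together. The interfaces and struct fields are illustrative assumptions, not the project's actual types; provider evaluation and SQLite persistence would slot in between analysis and reporting.

package pipeline

import "context"

// Illustrative stage interfaces; the real curompt packages define their own
// types and likely richer signatures.
type Collector interface {
    Collect(ctx context.Context) ([]Prompt, error)
}

type Analyzer interface {
    Analyze(p Prompt) Findings
}

type Scorer interface {
    Score(f Findings) float64
}

type Reporter interface {
    Report(scores map[string]float64) error
}

// Prompt and Findings are placeholder shapes for this sketch.
type Prompt struct {
    Name string
    Text string
}

type Findings struct {
    Sections   []string
    Duplicates int
}

// Run chains the stages: collect -> analyze -> score -> report.
func Run(ctx context.Context, c Collector, a Analyzer, s Scorer, r Reporter) error {
    prompts, err := c.Collect(ctx)
    if err != nil {
        return err
    }
    scores := make(map[string]float64, len(prompts))
    for _, p := range prompts {
        scores[p.Name] = s.Score(a.Analyze(p))
    }
    return r.Report(scores)
}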
Providers
Each provider lives under internal/provider/<name> and implements a small interface:
type Provider interface {
    Name() string
    Evaluate(ctx context.Context, req EvalRequest) (EvalResult, error)
}
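A minimal adapter satisfying this interface might look like the sketch below; the EvalRequest/EvalResult fields and the echo behaviour are placeholders, not the actual types in internal/provider.

package provider

import "context"

// Placeholder request/result shapes; the actual EvalRequest and EvalResult
// fields are defined by the evaluator and are not shown here.
type EvalRequest struct{ Prompt string }
type EvalResult struct{ Output string }

// echoProvider only demonstrates the shape of an adapter. Real adapters
// under internal/provider/<name> call the Claude / Gemini / OpenAI / local
// backends and map their responses into EvalResult.
type echoProvider struct{}

func (echoProvider) Name() string { return "echo" }

func (echoProvider) Evaluate(ctx context.Context, req EvalRequest) (EvalResult, error) {
    // A real adapter would send req.Prompt to the model and return the
    // generated sample so the evaluator can validate it against a schema.
    return EvalResult{Output: req.Prompt}, nil
}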
Adding a new provider typically means:
- Creating an adapter inside internal/provider.
- Registering it with the CLI via internal/cli (a registration sketch follows this list).
- Adding a config schema and documentation.
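The registration hook itself is not spelled out here; purely as an assumption, it could be a package-level registry in internal/cli that adapters call from a constructor or init() function:

package cli

import (
    "fmt"

    "curompt/internal/provider" // module path assumed for this sketch
)

// providerFactories is a hypothetical registry; the real internal/cli
// wiring may differ.
var providerFactories = map[string]func() provider.Provider{}

// RegisterProvider makes an adapter selectable by name (for example via a
// hypothetical --provider flag on the evaluate command).
func RegisterProvider(name string, factory func() provider.Provider) error {
    if _, exists := providerFactories[name]; exists {
        return fmt.Errorf("provider %q already registered", name)
    }
    providerFactories[name] = factory
    return nil
}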
Reports
Reports are generated in layers:
- internal/reporter builds the data models (scores, suggestions, token diff).
- Renderers output Markdown (default), JSON, or future HTML dashboards.
- CLI flags route the output: stdout for humans, files for CI artifacts (a renderer sketch follows).
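As an illustration of that layering (the types and renderer below are assumptions, not the actual internal/reporter API), a small data model plus a Markdown renderer could look like:

package reporter

import (
    "fmt"
    "strings"
)

// Report is an illustrative data model; the real internal/reporter models
// also carry suggestions and token diffs.
type Report struct {
    Prompt string
    Score  float64
    Notes  []string
}

// RenderMarkdown is one renderer layer; a JSON or HTML renderer would take
// the same data model and emit a different format.
func RenderMarkdown(reports []Report) string {
    var b strings.Builder
    b.WriteString("| Prompt | Score | Notes |\n|---|---|---|\n")
    for _, r := range reports {
        fmt.Fprintf(&b, "| %s | %.2f | %s |\n", r.Prompt, r.Score, strings.Join(r.Notes, "; "))
    }
    return b.String()
}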
This structure keeps the CLI thin and allows reuse from other entrypoints (e.g., API server or automation scripts) in the future.