OrchestKit v6.7.1 — 67 skills, 38 agents, 77 hooks with Opus 4.6 support

Scope Appropriate Architecture

Right-sizes architecture to project scope. Prevents over-engineering by classifying projects into 6 tiers and constraining pattern choices accordingly. Use when designing architecture, selecting patterns, or when the brainstorming or implement skills detect a project tier.


Scope-Appropriate Architecture

Right-size every architectural decision to the project's actual needs. Not every project needs hexagonal architecture, CQRS, or microservices.

Core principle: Detect the project tier first, then constrain all downstream pattern choices to that tier's complexity ceiling.


The 6 Project Tiers

| Tier | LOC Ratio | Architecture | DB | Auth | Tests |
|------|-----------|--------------|----|------|-------|
| 1. Interview/Take-home | 1.0-1.3x | Flat files, no layers | SQLite / JSON | None or basic | 8-15 focused |
| 2. Hackathon/Prototype | 0.8-1.0x | Single file if possible | SQLite / in-memory | None | Zero |
| 3. Startup/MVP | 1.0-1.5x | MVC monolith | Managed Postgres | Clerk/Supabase Auth | Happy path + critical |
| 4. Growth-stage | 1.5-2.0x | Modular monolith | Postgres + Redis | Auth service | Unit + integration |
| 5. Enterprise | 2.0-3.0x | Hexagonal/DDD | Postgres + queues | OAuth2/SAML | Full pyramid |
| 6. Open Source | 1.2-1.8x | Minimal API surface | Configurable | Optional | Exhaustive public API |

LOC Ratio = total lines / core business logic lines. Higher ratio = more infrastructure code relative to business value.
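As a concrete illustration of the metric (the `locRatio` helper below is hypothetical, not part of OrchestKit):

```typescript
// LOC Ratio = total lines / core business logic lines.
// Hypothetical helper for illustration; not an OrchestKit API.
function locRatio(totalLines: number, businessLogicLines: number): number {
  if (businessLogicLines <= 0) {
    throw new Error("businessLogicLines must be positive");
  }
  return totalLines / businessLogicLines;
}

// A 600-line project with 400 lines of core logic has a 1.5x ratio:
// within Tier 3's 1.0-1.5x range, above Tier 1's 1.3x ceiling.
const ratio = locRatio(600, 400);
```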


Auto-Detection Signals

| Signal | Tier Indicator |
|--------|----------------|
| README contains "take-home", "assignment", "interview" | Tier 1 |
| Time limit mentioned (e.g., "4 hours", "weekend") | Tier 1-2 |
| < 10 files, no CI, no Docker | Tier 1-2 |
| .github/workflows/ present | Tier 3+ |
| package.json with 20+ dependencies | Tier 3+ |
| Kubernetes/Terraform files present | Tier 4-5 |
| CONTRIBUTING.md, CODE_OF_CONDUCT.md | Tier 6 |
| Monorepo with packages/ or apps/ | Tier 4-5 |

When confidence is low: Ask the user with AskUserQuestion.
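A sketch of how such a heuristic might be coded. The `ProjectSignals` shape, `detectTier` name, and check ordering are all illustrative assumptions, not OrchestKit's actual implementation:

```typescript
type Tier = 1 | 2 | 3 | 4 | 5 | 6;

// Hypothetical signal bag, mirroring the table above.
interface ProjectSignals {
  readmeText: string;
  fileCount: number;
  hasCI: boolean;
  hasKubernetesOrTerraform: boolean;
  hasContributingMd: boolean;
  isMonorepo: boolean;
}

function detectTier(s: ProjectSignals): Tier | null {
  const readme = s.readmeText.toLowerCase();
  if (s.hasContributingMd) return 6;
  if (s.hasKubernetesOrTerraform || s.isMonorepo) return 4;
  if (/take-home|assignment|interview/.test(readme)) return 1;
  if (s.fileCount < 10 && !s.hasCI) return 2;
  if (s.hasCI) return 3;
  return null; // low confidence: fall back to asking the user
}
```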


Pattern Appropriateness Matrix

| Pattern | Interview | Hackathon | MVP | Growth | Enterprise |
|---------|-----------|-----------|-----|--------|------------|
| Repository pattern | OVERKILL | OVERKILL | BORDERLINE | APPROPRIATE | REQUIRED |
| Event-driven arch | OVERKILL | OVERKILL | OVERKILL | SELECTIVE | APPROPRIATE |
| DI containers | OVERKILL | OVERKILL | LIGHT ONLY | APPROPRIATE | REQUIRED |
| Separate DTO layers | OVERKILL | OVERKILL | 1 EXTRA | 2 LAYERS | ALL LAYERS |
| Microservices | NEVER | NEVER | NEVER | EXTRACT ONLY | APPROPRIATE |
| CQRS | OVERKILL | OVERKILL | OVERKILL | OVERKILL | WHEN JUSTIFIED |
| Hexagonal architecture | OVERKILL | OVERKILL | OVERKILL | BORDERLINE | APPROPRIATE |
| DDD (bounded contexts) | OVERKILL | OVERKILL | OVERKILL | SELECTIVE | APPROPRIATE |
| Message queues | OVERKILL | OVERKILL | BORDERLINE | APPROPRIATE | REQUIRED |
| API versioning | SKIP | SKIP | URL prefix | Header-based | Full strategy |
| Error handling | try/catch | console.log | Error boundary | Error service | RFC 9457 |
| Logging | console.log | none | Structured JSON | Centralized | OpenTelemetry |

Rule of thumb: If a pattern shows OVERKILL for the detected tier, do NOT use it. Suggest the simpler alternative instead.


Technology Quick-Reference by Tier

| Choice | Interview | Hackathon | MVP | Growth | Enterprise |
|--------|-----------|-----------|-----|--------|------------|
| Database | SQLite / JSON file | In-memory / SQLite | Managed Postgres | Postgres + Redis | Postgres + queues + cache |
| Auth | Hardcoded / none | None | Clerk / Supabase Auth | Auth service | OAuth2 / SAML / SSO |
| State mgmt | useState | useState | Zustand / Context | Zustand + React Query | Redux / custom + cache |
| CSS | Inline / Tailwind | Tailwind | Tailwind | Tailwind + design tokens | Design system |
| API | Express routes | Single file handler | Next.js API routes | FastAPI / Express | Gateway + services |
| Deployment | localhost | Vercel / Railway | Vercel / Railway | Docker + managed | K8s / ECS |
| CI/CD | None | None | GitHub Actions basic | Multi-stage pipeline | Full pipeline + gates |
| Monitoring | None | None | Error tracking only | APM + logs | Full observability stack |

Build vs Buy Decision Tree (Tiers 1-3)

For Interview, Hackathon, and MVP tiers, always prefer buying over building:

| Capability | BUY (use SaaS) | BUILD (only if) |
|------------|----------------|-----------------|
| Auth | Clerk, Supabase Auth, Auth0 | Core product IS auth |
| Payments | Stripe | Core product IS payments |
| Email | Resend, SendGrid | Core product IS email |
| File storage | S3, Cloudflare R2 | Compliance requires on-prem |
| Search | Algolia, Typesense Cloud | > 10M docs or custom ranking |
| Analytics | PostHog, Mixpanel | Unique data requirements |

Time savings: auth alone takes 2-4 weeks to build vs. 2 hours to integrate.


Upgrade Path

When a project grows beyond its current tier, upgrade incrementally:

Tier 2 (Prototype) → Tier 3 (MVP)
  Add: Postgres, basic auth, error boundaries, CI

Tier 3 (MVP) → Tier 4 (Growth)
  Add: Redis cache, background jobs, monitoring, module boundaries

Tier 4 (Growth) → Tier 5 (Enterprise)
  Add: DI, bounded contexts, message queues, full observability
  Extract: First microservice (only the proven bottleneck)

Key insight: You can always add complexity later. You cannot easily remove it.


When This Skill Activates

This skill is loaded by:

  • brainstorming Phase 0 (context discovery)
  • implement Step 0 (context discovery)
  • quality-gates YAGNI check
  • Any skill that needs to right-size a recommendation

The detected tier is passed as context to constrain downstream decisions.


  • ork:brainstorming - Uses tier detection in Phase 0 to constrain ideas
  • ork:implement - Uses tier detection in Step 0 to constrain architecture
  • ork:quality-gates - YAGNI gate references this skill's tier matrix
  • ork:architecture-patterns - Architecture validation (constrained by tier)

References (4)


Enterprise Guide (Tiers 4-5)

Guidance for growth-stage and enterprise production applications.

Tier 4: Growth-Stage

When You're Here

  • 4-15 developers on the codebase
  • $10K-$500K MRR
  • 10K-1M monthly active users
  • SLAs exist (99.5%+ uptime)
  • Compliance requirements emerging

Architecture: Modular Monolith

```
src/
├── modules/
│   ├── users/
│   │   ├── api/            # Module-scoped routes
│   │   ├── services/       # Business logic
│   │   ├── repository/     # Data access (NOW justified)
│   │   └── types/          # Module types
│   ├── orders/
│   │   ├── api/
│   │   ├── services/
│   │   ├── repository/
│   │   └── types/
│   └── shared/             # Cross-module utilities
├── infrastructure/
│   ├── database/           # Connection, migrations
│   ├── cache/              # Redis client
│   ├── queue/              # Background job client
│   └── monitoring/         # APM setup
└── config/                 # Environment-specific config
```

Patterns NOW Justified

| Pattern | Why Now |
|---------|---------|
| Repository pattern | Multiple data sources, testability matters |
| DI (light) | Constructor injection for services, no container yet |
| Module boundaries | Team ownership, independent deployment later |
| Background jobs | Email, reports, data sync — can't block requests |
| Redis cache | Database bottlenecks are real and measured |
| Structured logging | Debugging across modules needs correlation |
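The "DI (light)" pattern, constructor injection without a container, can be sketched as follows. The `UserRepository` and `UserService` names are illustrative, and a real repository would be async (simplified to sync here):

```typescript
// The service depends on an interface, not a concrete data source.
interface UserRepository {
  findById(id: string): { id: string; email: string } | undefined;
}

class UserService {
  // Light DI: the dependency arrives through the constructor,
  // so tests can substitute an in-memory fake. No container needed.
  constructor(private readonly users: UserRepository) {}

  getEmail(id: string): string | undefined {
    return this.users.findById(id)?.email;
  }
}

// Composition happens at the entry point (or, as here, in a test).
const fakeRepo: UserRepository = {
  findById: (id) =>
    id === "u1" ? { id: "u1", email: "a@example.com" } : undefined,
};
const service = new UserService(fakeRepo);
```

When the dependency graph later grows complex enough, this upgrades naturally to a full DI container without rewriting the services.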

Patterns Still OVERKILL

| Pattern | Why Not Yet |
|---------|-------------|
| Microservices | Monolith handles the traffic; operational overhead isn't justified |
| CQRS | Read/write patterns aren't divergent enough |
| Event sourcing | Audit log column is sufficient |
| API gateway | One service, one entry point |
| Service mesh | One service, no mesh needed |
| Custom DI container | Constructor injection is sufficient |

Database at Growth Stage

  • Primary: Postgres with connection pooling (PgBouncer or managed)
  • Cache: Redis for sessions, hot data, rate limiting
  • Background: Redis-backed queue (BullMQ, Celery)
  • Search: Postgres full-text or Typesense (if > 100K searchable records)

Testing Strategy

| Type | Coverage Target |
|------|-----------------|
| Unit | Core business logic: 80%+ |
| Integration | All API endpoints, all service methods |
| E2E | Critical user journeys (5-10 flows) |
| Performance | Load test key endpoints (k6 or Artillery) |
| Security | OWASP Top 10 scan in CI |

Tier 5: Enterprise

When You're Here

  • 15+ developers, multiple teams
  • $500K+ MRR or enterprise contracts
  • 1M+ monthly active users
  • Strict SLAs (99.9%+ uptime)
  • Compliance: SOC2, HIPAA, GDPR, or similar
  • Incidents cost real money

Architecture: Domain-Driven (Hexagonal)

```
src/
├── domains/
│   ├── identity/           # Bounded context
│   │   ├── application/    # Use cases, commands, queries
│   │   ├── domain/         # Entities, value objects, events
│   │   ├── infrastructure/ # Repos, adapters, external services
│   │   └── presentation/   # Controllers, DTOs, serializers
│   ├── billing/
│   │   └── ...
│   └── catalog/
│       └── ...
├── shared/
│   ├── kernel/             # Shared value objects, base classes
│   └── infrastructure/     # Cross-cutting: auth, logging, tracing
├── api-gateway/            # Route to domains
└── workers/                # Background processors per domain
```

Patterns NOW Justified

| Pattern | Justification |
|---------|---------------|
| Hexagonal / Clean Architecture | Team boundaries align with domain boundaries |
| DDD (bounded contexts) | Complex domain logic requires explicit modeling |
| CQRS | Read and write patterns have diverged significantly |
| Event-driven | Cross-domain communication needs decoupling |
| API gateway | Multiple services, unified entry point |
| Full DI container | Complex dependency graphs across domains |
| RFC 9457 errors | External API consumers need structured errors |
| OpenTelemetry | Distributed tracing across services |
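An RFC 9457 error body looks like the sketch below. The member names (`type`, `title`, `status`, `detail`, `instance`) come from the RFC itself; the problem-type URL and the `insufficientFunds` helper are illustrative:

```typescript
// RFC 9457 "problem details" shape; serve with
// Content-Type: application/problem+json.
interface ProblemDetails {
  type: string;      // URI identifying the problem type
  title: string;     // short, human-readable summary
  status: number;    // HTTP status code
  detail?: string;   // occurrence-specific explanation
  instance?: string; // URI of this specific occurrence
}

function insufficientFunds(orderId: string, balance: number): ProblemDetails {
  return {
    type: "https://example.com/problems/insufficient-funds",
    title: "Insufficient funds",
    status: 403,
    detail: `Your balance of ${balance} is too low for this order.`,
    instance: `/orders/${orderId}`,
  };
}
```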

Justification Required

Even at enterprise scale, these patterns need specific justification:

| Pattern | Only When |
|---------|-----------|
| Microservices extraction | Team can't deploy independently, proven bottleneck |
| Event sourcing | Regulatory audit trail OR temporal query requirements |
| Saga pattern | Multi-service transactions that can't use 2PC |
| Service mesh (Istio) | > 10 services with complex networking needs |
| Custom framework | Existing frameworks demonstrably insufficient |

Database at Enterprise Scale

  • Primary: Postgres with read replicas, connection pooling
  • Cache: Redis Cluster (HA) or Valkey
  • Queue: RabbitMQ or Kafka (based on throughput needs)
  • Search: Elasticsearch or OpenSearch (> 1M documents)
  • Analytics: Data warehouse (BigQuery, Snowflake, ClickHouse)

Monitoring & Observability

| Layer | Tool |
|-------|------|
| Metrics | Prometheus + Grafana (or Datadog) |
| Tracing | OpenTelemetry + Jaeger (or Datadog APM) |
| Logging | Structured JSON → ELK or Loki |
| Alerting | PagerDuty + Grafana alerts |
| Error tracking | Sentry with release tracking |
| Uptime | Synthetic monitoring (Checkly, Datadog) |
| SLO/SLI | Error budget dashboards |

Testing Strategy

| Type | Coverage Target |
|------|-----------------|
| Unit | 80%+ for domain logic |
| Integration | All service boundaries |
| Contract | API contracts between services |
| E2E | Critical business flows |
| Performance | Load + stress + soak testing |
| Security | SAST + DAST + dependency audit |
| Chaos | Failure injection (Chaos Monkey / Litmus) |


Interview & Take-Home Guide (Tiers 1-2)

Guidance for interview assignments, take-home projects, hackathons, and prototypes.

Tier 1: Interview / Take-Home

Target Metrics

| Metric | Target | Red Flag |
|--------|--------|----------|
| Files | 8-15 | > 25 |
| LOC | 200-600 | > 1,500 |
| Tests | 8-15 focused | > 40 |
| Dependencies | 3-8 | > 15 |
| Layers | 1-2 | > 3 |
| Config files | 2-3 | > 8 |

What Interviewers Actually Evaluate

  1. Clean, readable code — not architectural patterns
  2. Working solution — not infrastructure
  3. Good naming and structure — not abstractions
  4. Thoughtful trade-offs — documented, not implemented
  5. Tests for critical paths — not 100% coverage

Architecture Pattern

```
src/
├── app.ts              # Entry point + routes
├── handlers/           # Request handlers (thin)
├── services/           # Business logic (1-2 files)
├── types.ts            # Shared types
└── __tests__/          # Co-located tests
```

No repository pattern. No DI. No separate DTO layers. No middleware chain.

Highest-Leverage Technique

Add a "What I Would Change for Production" section to README:

```markdown
## What I Would Change for Production

- **Database**: Replace SQLite with Postgres + connection pooling
- **Auth**: Integrate Clerk/Auth0 instead of basic token
- **Error handling**: Add structured error responses (RFC 9457)
- **Monitoring**: Add OpenTelemetry tracing
- **Testing**: Add integration tests with testcontainers
- **CI/CD**: Add GitHub Actions with lint, test, build stages
```

This shows awareness WITHOUT building it. Interviewers value judgment over implementation.

Common Over-Engineering Mistakes

| Mistake | Why It Hurts |
|---------|--------------|
| Hexagonal architecture | 3x more files, evaluator can't find the logic |
| Docker + docker-compose | Adds setup friction, not required |
| OpenAPI spec generation | Time spent on tooling, not business logic |
| Custom error hierarchy | 5 error classes for 3 endpoints |
| Event-driven patterns | Async complexity for sync workflows |
| Repository + Unit of Work | 4 files to wrap a 2-line query |

What TO Build

  • Clear input validation with helpful error messages
  • One integration test that proves the happy path works
  • A few unit tests for edge cases in business logic
  • Clean README with setup instructions (< 5 steps)

Tier 2: Hackathon / Prototype

Target Metrics

| Metric | Target | Red Flag |
|--------|--------|----------|
| Files | 1-5 | > 10 |
| LOC | 50-300 | > 800 |
| Tests | 0 | Any |
| Time to demo | < 4 hours | > 8 hours |

Architecture Pattern

Single file if possible. Maximum one level of extraction.

```
app.ts          # Everything
# OR
app.ts          # Routes + handlers
db.ts           # Data access
```

Principles

  • Ship the demo. Nothing else matters.
  • Hardcode everything. Config is waste.
  • No tests. Prototype will be thrown away.
  • Use the highest-level abstractions available. ORMs, UI kits, SaaS APIs.
  • Copy-paste is fine. DRY is for production code.

Technology Choices

  • Framework: Whatever you know best
  • Database: SQLite, JSON file, or in-memory
  • Deployment: Vercel, Railway, or localhost
  • Auth: Hardcoded user or none
  • UI: Tailwind + shadcn/ui (fastest to good-looking)


Open Source Guide (Tier 6)

Guidance for open-source libraries, frameworks, and tools.

Target Metrics

| Metric | Target | Red Flag |
|--------|--------|----------|
| Public API surface | Minimal | Exposing internals |
| LOC ratio | 1.2-1.8x | > 2.5x (over-abstracted) |
| Test coverage (public API) | 95%+ | < 80% |
| Test coverage (internals) | 60%+ | < 40% |
| Dependencies | Minimal | > 10 runtime deps |
| Breaking changes per major | < 5 | > 15 |

Architecture Principles

1. Minimal API Surface

Expose the minimum necessary. Everything public becomes a contract.

```typescript
// Good: Small, focused API
export { createClient } from "./client";
export type { ClientOptions, Client } from "./types";

// Bad: Leaking internals
export { createClient, _parseResponse, _buildUrl, _retryWithBackoff } from "./client";
```

2. Zero or Minimal Dependencies

Every dependency is a liability for consumers:

  • Security vulnerabilities propagate
  • Version conflicts with consumer's dependencies
  • Bundle size increases
  • Maintenance burden when deps are abandoned

Prefer: Vendoring small utilities over adding dependencies.
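Vendoring in practice: a small retry helper copied into the repo instead of adding a retry dependency. This is an illustrative sketch, not any particular library's implementation:

```typescript
// ~15 lines vendored once, zero dependencies for consumers.
async function retry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 100,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        // Exponential backoff: 100ms, 200ms, 400ms, ...
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}
```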

3. Backwards Compatibility

  • Semantic versioning is non-negotiable
  • Deprecate before removing (minimum 1 minor version)
  • Migration guides for every breaking change
  • Codemods when feasible (like Next.js does)
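"Deprecate before removing" can be sketched with a TSDoc `@deprecated` tag: editors flag deprecated symbols, so consumers see the warning at least one minor version before removal. The function names and version numbers here are illustrative:

```typescript
export function createClient(options: { apiKey: string }) {
  return {
    apiKey: options.apiKey,
    query: (q: string) => `query: ${q}`,
  };
}

/**
 * @deprecated Since v2.3.0. Use {@link createClient} instead.
 * Scheduled for removal in v3.0.0; see the migration guide.
 */
export function makeClient(apiKey: string) {
  // Old entry point kept as a thin wrapper so existing callers
  // keep working through the deprecation window.
  return createClient({ apiKey });
}
```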

Testing Strategy

| Type | Focus |
|------|-------|
| Unit tests | Every public API method, every edge case |
| Integration | Common usage patterns from README examples |
| Compatibility | Test against multiple Node/Python/runtime versions |
| Type tests | Verify TypeScript types work correctly (tsd, expect-type) |
| Snapshot | API surface snapshot to catch accidental breaks |
| Performance | Benchmark critical paths, regression testing |

Test What Matters

```typescript
// Test public API behavior, not implementation details
test("createClient returns working client", () => {
  const client = createClient({ apiKey: "test" });
  expect(client.query).toBeDefined();
  expect(typeof client.query).toBe("function");
});

// Test edge cases consumers will hit
test("createClient throws on missing apiKey", () => {
  expect(() => createClient({})).toThrow("apiKey is required");
});
```
});

Documentation

| Document | Purpose | Priority |
|----------|---------|----------|
| README.md | Quick start, installation, basic usage | CRITICAL |
| API reference | Every public method with examples | HIGH |
| CONTRIBUTING.md | How to contribute, dev setup | HIGH |
| CHANGELOG.md | Every version's changes | HIGH |
| Migration guide | Upgrade path between majors | HIGH (per major) |
| Architecture doc | Internal design for contributors | MEDIUM |

What Makes Open Source Different

| Concern | Product Code | Open Source |
|---------|--------------|-------------|
| API design | Internal, change freely | Public contract, break carefully |
| Dependencies | Add what's useful | Minimize ruthlessly |
| Testing | Test business flows | Test every public API edge case |
| Docs | Internal wiki | Public, polished, with examples |
| Error messages | Log and fix | Descriptive — user can't see your code |
| Types | Nice to have | Essential — API discoverability |
| Bundle size | Less critical | Critical for frontend consumers |
| Node versions | Pick one | Support multiple (LTS at minimum) |

Common Mistakes

| Mistake | Impact |
|---------|--------|
| Exposing too many internals as public API | Can never remove them |
| Heavy runtime dependencies | Conflicts + bloat for consumers |
| Not testing edge cases | Users find bugs, lose trust |
| Poor error messages | Users can't self-diagnose |
| No migration guide between versions | Users stay on old versions |
| Monolithic package | Users import everything for one feature |

Package Structure Decisions

| Decision | Small Library | Framework |
|----------|---------------|-----------|
| Single package | Yes | No — use monorepo |
| Tree-shakeable | Essential | Essential |
| ESM + CJS | Both via dual exports | Both via dual exports |
| Subpath exports | If > 3 features | Yes — pkg/feature |
| Plugin system | No | Yes — extensibility |
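The "ESM + CJS via dual exports" decision comes down to a `package.json` fragment like the sketch below. The paths and package name are illustrative; note that the `types` condition is conventionally listed first so TypeScript resolves it before `import`/`require`:

```json
{
  "name": "my-lib",
  "type": "module",
  "main": "./dist/index.cjs",
  "module": "./dist/index.js",
  "exports": {
    ".": {
      "types": "./dist/index.d.ts",
      "import": "./dist/index.js",
      "require": "./dist/index.cjs"
    }
  }
}
```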


Startup & MVP Guide (Tier 3)

Guidance for MVPs, early-stage startups, and small production applications.

Target Metrics

| Metric | Target | Red Flag |
|--------|--------|----------|
| Files | 20-60 | > 120 |
| LOC | 2,000-8,000 | > 15,000 |
| Tests | Happy path + critical edges | > 200 tests |
| Dependencies | 10-25 | > 50 |
| Deploy time | < 10 min | > 30 min |
| Time to first user | 2-6 weeks | > 12 weeks |

Architecture: MVC Monolith

```
src/
├── app/                    # Next.js App Router pages
│   ├── api/                # API routes (thin handlers)
│   ├── (auth)/             # Auth-gated pages
│   └── (public)/           # Public pages
├── lib/
│   ├── db.ts               # Database client (Drizzle/Prisma)
│   ├── auth.ts             # Auth config (Clerk/Supabase)
│   └── email.ts            # Email client (Resend)
├── components/             # React components
├── actions/                # Server actions (business logic)
└── types/                  # Shared types
```

Key Principles

  1. Monolith first. Always. No exceptions.
  2. Managed services. Database, auth, email, storage — all SaaS.
  3. One deployment target. Vercel OR Railway, not both.
  4. Feature flags over branches. Ship incomplete features behind flags.
  5. Server actions over API routes. Less boilerplate, same safety.
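Principle 5 sketched as code. In a real Next.js app this file would begin with the `"use server"` directive; here the action is shown as a plain async function, and the commented-out database call plus all names are illustrative assumptions:

```typescript
// A typed result shape keeps error handling uniform across actions.
type ActionResult<T> =
  | { success: true; data: T }
  | { success: false; error: string };

async function createTask(
  title: string,
): Promise<ActionResult<{ id: number; title: string }>> {
  const trimmed = title.trim();
  if (trimmed === "") {
    return { success: false, error: "Title is required" };
  }
  // In a real app, something like:
  // const [row] = await db.insert(tasks).values({ title: trimmed }).returning();
  return { success: true, data: { id: 1, title: trimmed } };
}
```

Called directly from a form or component, this gives the same type safety as an API route with none of the routing boilerplate.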

Build vs Buy at MVP Scale

| Decision | Recommendation | Time Saved |
|----------|----------------|------------|
| Auth | BUY: Clerk (2h) vs BUILD: JWT + sessions (2-4w) | 2-4 weeks |
| Payments | BUY: Stripe Checkout (4h) vs BUILD: custom (4-8w) | 4-8 weeks |
| Email | BUY: Resend (1h) vs BUILD: SMTP + templates (1-2w) | 1-2 weeks |
| File upload | BUY: UploadThing/S3 (2h) vs BUILD: custom (1-2w) | 1-2 weeks |
| Search | BUY: Postgres full-text (0h) vs BUILD: Elasticsearch (2-4w) | 2-4 weeks |
| Analytics | BUY: PostHog (1h) vs BUILD: custom (2-4w) | 2-4 weeks |

Total potential savings: 12-24 weeks by choosing BUY for non-core features.

Database Decisions

  • Default choice: Managed Postgres (Supabase, Neon, Railway)
  • ORM: Drizzle (type-safe, lightweight) or Prisma (broader ecosystem)
  • Migrations: ORM-managed, not manual SQL
  • Caching: None initially. Add Redis only after measuring bottlenecks.

What NOT to Do

  • No read replicas (you don't have the traffic)
  • No database-per-service (you have one service)
  • No custom connection pooling (managed service handles it)
  • No event sourcing (your audit needs are met by updated_at columns)

Testing Strategy

| Type | Coverage | Priority |
|------|----------|----------|
| Unit tests | Business logic functions | HIGH |
| Integration | API routes / server actions | HIGH |
| E2E | Critical user flows (signup, purchase) | MEDIUM |
| Performance | None yet | LOW |

Rule: Test the user flows that lose you money if broken. Skip everything else.

Error Handling

```typescript
// MVP-appropriate error handling
try {
  const result = await createOrder(data);
  return { success: true, data: result };
} catch (error) {
  console.error("Order creation failed:", error);
  return { success: false, error: "Failed to create order" };
}
```

No custom error hierarchies. No error codes. No RFC 9457. Log it, return a message, move on.

Deployment

  • Platform: Vercel (frontend-heavy) or Railway (backend-heavy)
  • CI/CD: GitHub Actions — lint + test + deploy (3 steps max)
  • Environments: Production + Preview (Vercel auto). No staging.
  • Monitoring: Error tracking (Sentry free tier) + uptime (Better Stack free)

When to Upgrade to Tier 4

Upgrade when you have evidence, not speculation:

| Signal | Action |
|--------|--------|
| Response times > 500ms consistently | Add caching layer |
| Database CPU > 60% sustained | Add read replica or optimize queries |
| Team > 3 developers on same codebase | Extract module boundaries |
| Deployment frequency > 5x/day | Add staging environment |
| Revenue > $10K MRR | Invest in monitoring + reliability |
