The Software Behind Every Nimble Project

In our first article, we explained why we built the Nimble Attainability Index — a data-driven approach to finding where housing supply doesn't match what people can actually afford. That article was about the what and the why.
This one is about the how.
Over the past year, we've built a technology stack that connects market intelligence, deal evaluation, construction management, accounting, CRM, and content operations into a single feedback loop. None of it existed when we started. All of it was built using AI as both architect and builder.
Here's a comprehensive inventory of what we've built, what we've learned, and what we'd share with any builder willing to try.
Operation Alpha: The Intelligence Layer
Everything starts here. Operation Alpha is the nervous system that connects all of our platforms. It's a Python-based automation system built on Claude Code (Anthropic's AI coding assistant) with a FastAPI daemon, seven specialized data servers, a persistent memory system, and a library of autonomous agents.
Think of it as a personal operating system for running a business — except it learns, remembers, and acts on what it knows.
The Daemon (Always-On Service)
A FastAPI application that runs continuously — locally during the day, and on Google Cloud Run 24/7. It manages:
- 14 scheduled jobs — Morning briefs, market alert monitoring, email triage, task reminders, overnight data syncs, self-improvement cycles, news aggregation
- Event-driven actions — Market threshold alerts, git commit detection, file change monitoring, webhook processing
- Voice interface — Groq Whisper for transcription, ElevenLabs for text-to-speech
- Google Chat integration — 34 slash commands for status, CRM, market data, follow-ups, AI council deliberations, and schedule adjustments
- Initiative engine — Proactively identifies what needs attention based on goals, calendar, tasks, conversations, and market conditions
The overnight batch alone runs a six-step sequence starting at 2:30 AM: verify backups, sync Redfin housing data, sync FRED economic data, prune stale memories, scan logs for anomalies, and pre-stage the morning briefing data. By the time we sit down, the system has already processed the day's context.
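The overnight sequence can be sketched as an ordered pipeline that keeps going even if one step fails. The step functions below are illustrative stand-ins, not the actual job implementations:

```python
import logging
from datetime import datetime

# Hypothetical step functions. The real jobs (backup checks, Redfin/FRED
# syncs, memory pruning, log scans, brief staging) are stand-ins here.
def verify_backups():      return "backups ok"
def sync_redfin():         return "redfin synced"
def sync_fred():           return "fred synced"
def prune_memories():      return "memories pruned"
def scan_logs():           return "logs clean"
def stage_morning_brief(): return "brief staged"

OVERNIGHT_STEPS = [verify_backups, sync_redfin, sync_fred,
                   prune_memories, scan_logs, stage_morning_brief]

def run_overnight_batch(steps=OVERNIGHT_STEPS):
    """Run each step in order; record results and continue past failures."""
    results = {}
    for step in steps:
        try:
            results[step.__name__] = step()
        except Exception as exc:
            logging.error("%s failed at %s: %s", step.__name__, datetime.now(), exc)
            results[step.__name__] = None
    return results
```

Continuing past a failed step matters for a batch like this: a Redfin outage at 2:35 AM shouldn't block the morning brief from staging.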
Autonomous Agents
We built a lightweight agent framework with three tiers:
- Simple agents for single-message decisions — quick classification, routing, triage
- Autonomous agents for multi-iteration tool loops with error recovery (up to 10 iterations)
- Specialized agents for specific domains: deal enrichment and scoring, deal alerts, web scraping, form filling, monitoring, context recovery, and task execution
Each agent has a defined lifecycle: initialize, build system prompt, select tools, think, execute, report. An approval manager provides human-in-the-loop for critical actions — the system can't spend money or post publicly without explicit approval.
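The approval gate is the key piece of that lifecycle. A minimal sketch, assuming hypothetical action names and a ten-iteration cap for the autonomous tier:

```python
class ApprovalManager:
    """Human-in-the-loop gate: critical actions need explicit approval.
    The CRITICAL set here is illustrative."""
    CRITICAL = {"spend_money", "post_publicly"}

    def __init__(self, approved=None):
        self.approved = set(approved or [])

    def allows(self, action):
        return action not in self.CRITICAL or action in self.approved

class Agent:
    MAX_ITERATIONS = 10  # autonomous-tier cap described above

    def __init__(self, approvals):
        self.approvals = approvals

    def run(self, actions):
        """Lifecycle sketch: iterate, execute approved actions, report."""
        report = []
        for i, action in enumerate(actions):
            if i >= self.MAX_ITERATIONS:
                break
            if not self.approvals.allows(action):
                report.append((action, "blocked: needs approval"))
                continue
            report.append((action, "executed"))
        return report
```

Anything outside the critical set runs autonomously; anything inside it is reported back for a human decision rather than silently dropped.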
7 MCP Data Servers
MCP (Model Context Protocol) is Anthropic's standard for giving AI models access to external data. We built seven specialized servers:
- Market Data — Connects to FRED (Federal Reserve Economic Data) for 14 economic series and Redfin Data Center for housing inventory, days on market, median prices, and months of supply by region.
- PostgreSQL Data Lake — Central market database with housing history, economic indicators, property records, and building permits. Computes composite market signals (GREEN / YELLOW / RED) from multiple data sources.
- Property Data — Census Bureau building permits by county, plus parcel lookup for Stanislaus and San Joaquin counties via county GIS systems.
- Roadmap Tracker — Task management with time logging, milestone tracking, variance reporting (estimated vs. actual hours), and daily snapshots for burndown analysis.
- Context Graph — Decision trace capture using PostgreSQL with Apache AGE graph extensions. Every significant decision is recorded with inputs, reasoning, outcome, confidence, and links to precedent decisions.
- Memory — Exposes our persistent memory database as MCP tools for recall, search, and storage.
- Browser Automation — Playwright-based web automation for scraping, form filling, and testing.
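The data lake's composite GREEN/YELLOW/RED signal could be sketched as a simple vote across indicators. The thresholds below are assumptions for illustration, not Nimble's actual rules:

```python
def market_signal(months_of_supply, price_yoy_pct, permit_trend_pct):
    """Toy composite signal: each indicator votes +1, 0, or -1.
    Thresholds are illustrative assumptions."""
    score = 0
    # Under ~4 months of supply is a seller's market; over ~6 is a buyer's.
    score += 1 if months_of_supply < 4 else (-1 if months_of_supply > 6 else 0)
    score += 1 if price_yoy_pct > 0 else -1
    score += 1 if permit_trend_pct > 0 else -1
    if score >= 2:
        return "GREEN"
    if score <= -1:
        return "RED"
    return "YELLOW"
```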
40+ Scripts
The scripts directory is where operational knowledge lives as executable code:
- Daily operations — Automated morning startup (Docker, email, calendar, task sync, CRM sync, dashboard launch), progress tracking, session finalization with time logging and memory capture
- Analysis — Weekly time-value analysis that classifies every calendar event into $10/hr through $5,000/hr tiers, estimation variance analysis, session reflection engine
- Accounting — Bookkeeping agent for ERPNext journal entries, bank reconciliation, property-specific P&L, CPA-ready exports
- Marketing — Content generation with 7 defined pillars, newsletter atomization, LinkedIn scheduling, CRM follow-up checking
- Memory maintenance — Learning extraction, decision capture, precedent suggestion, memory consolidation via cosine similarity
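The time-value analysis boils down to classifying each calendar event into an hourly-value tier and summing tier-weighted hours. The keyword rules below are toy stand-ins for the real classifier:

```python
def classify_event(title):
    """Map a calendar event title to an hourly-value tier ($/hr).
    Keyword rules are illustrative assumptions, not the real logic."""
    title = title.lower()
    if "deal" in title or "investor" in title:
        return 5000
    if "strategy" in title or "design" in title:
        return 1000
    if "review" in title:
        return 500
    if "email" in title or "admin" in title:
        return 100
    return 10

def weekly_value(events):
    """Sum tier-weighted hours across (title, hours) pairs."""
    return sum(classify_event(title) * hours for title, hours in events)
```

The point of the exercise isn't the dollar figures themselves; it's seeing what fraction of the week lands in the bottom tiers and could be automated away.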
The Memory Architecture (Three Layers)
One of the most impactful things we built is a three-layer memory system. The insight: an AI assistant that forgets everything between sessions is fundamentally limited. We wanted a system that compounds knowledge over time.
Layer 1: File-Based Context (Obsidian Vault)
An Obsidian vault serves as the strategic knowledge base — a living strategy document, daily operational logs, session notes, project documentation, rules, rubrics, and principles. This is the "constitution" that Claude reads at the start of every session.
The vault follows a strict privacy model: operational logs are accessible to Claude, but financial data, deal terms, and CRM contacts are restricted to our local LLM running on-device. The principle: Claude sees the "how"; the local model sees the "what."
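That privacy split can be sketched as a routing rule. The category names and model labels below are illustrative assumptions:

```python
# Categories that never leave the machine (illustrative labels).
SENSITIVE = {"financial", "deal_terms", "crm_contacts"}

def route_query(topic, payload):
    """Privacy-routing sketch: sensitive categories go to the local model,
    everything else to the cloud assistant."""
    target = "local_llm" if topic in SENSITIVE else "claude"
    return target, payload
```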
Layer 2: Memory Database (Semantic Embeddings)
1,700+ discrete memories stored with full-text search and semantic vector embeddings. Six memory types: facts, corrections, learnings, decisions, preferences, and active context.
Each memory has a confidence score that decays over time but gets boosted by access. Old memories that are never referenced naturally fade. Memories that prove useful get reinforced.
The semantic search layer means we can query by meaning, not just keywords. "How should I handle folders?" finds "Use pathlib for file system operations" even though the words don't overlap.
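Both mechanisms are easy to sketch. The half-life, access boost, and two-dimensional toy embeddings below are illustrative assumptions; a real system would use learned embeddings and tuned parameters:

```python
import math

def decayed_confidence(base, days_old, access_count,
                       half_life=90.0, boost=0.05):
    """Confidence decays exponentially with age and is boosted by access.
    half_life and boost are assumed parameters, not the real ones."""
    decay = 0.5 ** (days_old / half_life)
    return min(1.0, base * decay + boost * access_count)

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Toy 2-d "embeddings" standing in for real model output.
MEMORIES = [("Use pathlib for file system operations", [0.9, 0.1]),
            ("Prefer FastAPI for internal services",   [0.1, 0.9])]

def recall(query_vec, memories=MEMORIES):
    """Return memories ranked by semantic similarity to the query vector."""
    return sorted(memories, key=lambda m: cosine(query_vec, m[1]), reverse=True)
```

A query vector near the "file handling" region of embedding space pulls up the pathlib memory first even with zero keyword overlap, which is the behavior described above.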
Layer 3: Context Graph (Decision Traces)
A graph database that captures decision traces. Every significant decision is recorded as a node with edges to the entities it affects, the policies it follows, and the precedent decisions it references.
This lets us query: "What past deal evaluations had similar inputs?" or "What architecture decisions affected this component?" The graph compounds — each decision makes the next one more informed.
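With Apache AGE, such a query runs as openCypher embedded in SQL via AGE's `cypher()` function. The node label, edge name, and properties below are assumptions about the schema, not the actual graph model:

```python
def precedent_query(decision_type, graph="context_graph"):
    """Build an Apache AGE query finding prior decisions of the same type
    linked by PRECEDENT edges. Labels, edge names, and properties are
    illustrative assumptions about the schema."""
    cypher = (
        f"MATCH (d:Decision {{type: '{decision_type}'}})"
        f"-[:PRECEDENT]->(p:Decision) "
        f"RETURN p.summary, p.confidence"
    )
    return (f"SELECT * FROM cypher('{graph}', $$ {cypher} $$) "
            f"AS (summary agtype, confidence agtype);")
```

In production you'd parameterize the value rather than interpolate it, but the shape of the query is the point: precedent lookups are one edge-traversal away.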
Nimble Dashboard: Operational Intelligence
The dashboard is where Operation Alpha's data becomes actionable. It's a React + FastAPI application with six integrated views:
- Financial Health — Scoreboard with deals closed, profit vs. target, reserve health. Real-time cash position and loan balances from ERPNext. 90-day rolling cash flow forecast with critical payment dates. Project profitability by property. Risk alerts for low cash, overdue payments, off-track deals.
- Market Intelligence — NAI scores for all target cities, opportunity ranking, city drill-down with 9-component breakdown, 12-month permit pipeline by county.
- Deal Pipeline — Kanban board from scored through budgeted. 6-component deal scoring with market-adjusted bias (RED market adds 15% to construction costs, reduces ARV by 20%). Auto-generated plain-English narratives for every scored deal.
- Budget Generation — CSI-code templates for Fix & Flip, ADU, New Build, Buy-to-Rent, and GC Service. Historical cost distribution based on actual project data, not national averages.
- Strategic Planning — Mission Control parser, roadmap with milestones and Gantt chart, variance dashboard for estimated vs. actual hours.
- AI Council — On-demand multi-perspective deliberation for significant decisions.

Across these views, the dashboard totals 39 React components and 76 API endpoints.
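The market-adjusted bias in the deal pipeline is simple to express. Using the figures stated above (a RED market adds 15% to construction costs and cuts ARV by 20%), a minimal sketch:

```python
def market_adjusted_deal(construction_cost, arv, signal):
    """Apply the market bias described above: a RED market inflates
    construction costs by 15% and reduces ARV by 20%. Any YELLOW
    adjustment is not specified in the article, so other signals
    pass through unchanged here."""
    if signal == "RED":
        construction_cost *= 1.15
        arv *= 0.80
    return construction_cost, arv, arv - construction_cost
```

A deal that looks marginal on raw numbers gets a double penalty in a RED market, which is exactly the conservatism you want when supply is loose.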
ERPNext: Accounting as Our First AI Build
We chose accounting as our first AI-assisted build because of a core principle: Cash Is Oxygen. If you can't see your cash position clearly, nothing else matters. We needed real-time visibility into property-level P&L, loan balances, and cash flow — not a quarterly summary from a bookkeeper.
ERPNext is an open-source ERP system running in Docker. Using Claude Code, we installed and configured it, built a custom workspace dashboard, created cost centers for every property, auto-coded 188 bank transactions (117 correctly categorized, 70 correctly skipped, no false positives), and built reconciliation workflows.
Three lessons that apply beyond accounting:
- Immutable ledger is non-negotiable. You reverse and recreate; you never edit submitted entries. This felt painful at first but prevents the kind of silent corrections that compound into chaos at year-end.
- Cost Centers = Properties. Every property is a cost center, which means every dollar flows to a specific project. This single architectural decision gave us granular P&L by property from day one.
- Teaching mode is mandatory. Every journal entry we create comes with a plain-English explanation of why — the accounting principle, the impact, and the pattern for next time. This isn't just documentation; it's how the AI learns to handle similar entries without asking.
FluidCM: Construction Management
Our construction management platform — a FastAPI backend with a Flutter mobile app. It manages project timelines, daily construction logs, RFIs, submittals, contracts, change orders, and budget line items.
The mobile app is offline-first — field crews can log entries, take photos, and submit RFIs without cell service. Data syncs when connectivity returns.
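The offline-first behavior boils down to a durable local queue that flushes when connectivity returns. A minimal sketch, where `send` stands in for the real sync API:

```python
import json
import time

class OfflineQueue:
    """Offline-first sketch: field entries queue locally and flush when
    connectivity returns. send() is a stand-in for the real sync call."""

    def __init__(self, send):
        self.pending = []
        self.send = send

    def log(self, entry):
        """Record an entry locally, timestamped, regardless of connectivity."""
        self.pending.append({"ts": time.time(), "entry": entry})

    def flush(self, online):
        """Push queued entries in order if online; return how many were sent."""
        if not online:
            return 0
        sent = 0
        while self.pending:
            self.send(json.dumps(self.pending.pop(0)))
            sent += 1
        return sent
```

The real app also persists the queue to disk and handles sync conflicts, but the invariant is the same: logging never blocks on the network.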
With 617 API routes and 50 backend test suites, FluidCM is the largest codebase in our stack. The FluidCM-to-Dashboard bridge means a scored deal can flow from evaluation through budget generation to project creation with populated budget lines in a single workflow.
RealDeal & CRM: The Deal Flow Pipeline
RealDeal is a multi-platform property analysis tool (React + Flutter) that evaluates 5 investment strategies per property: Fix & Flip, Add-On, ADU, New Build, and Rental. When a property is saved in RealDeal, it can be imported, scored, and tracked through our full pipeline.
The CRM tracks how contacts enter our ecosystem — website visits, calculator usage, content engagement — and flags contacts needing follow-up. Gmail sync automatically adds email senders. Voice notes mentioning people trigger contact updates.
The Public-Facing Tools
Housing Attainability Calculator
Free, public, available at nimbledev.llc/tools/attainability-calculator. Select any US metro and see a composite Nimble Attainability Index score based on 9 components:
- Income-Price Gap (20%)
- Supply Pressure (15%)
- FHA Headroom (10%)
- Absorption Rate (10%)
- Rent-Price Ratio (10%)
- FHA Utilization (5%)
- Building Permit Pipeline (10%)
- Market Health (10%)
- Price Momentum (10%)
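The composite is a weighted sum of the nine components, and the weights above total exactly 100%. A sketch, assuming each component has already been normalized to a 0-100 score (the normalization itself isn't shown here):

```python
# Weights taken directly from the component list above.
NAI_WEIGHTS = {
    "income_price_gap": 0.20, "supply_pressure": 0.15, "fha_headroom": 0.10,
    "absorption_rate": 0.10, "rent_price_ratio": 0.10, "fha_utilization": 0.05,
    "permit_pipeline": 0.10, "market_health": 0.10, "price_momentum": 0.10,
}

def nai_score(components):
    """Weighted composite of nine 0-100 component scores.
    How each component is normalized to 0-100 is an assumption here."""
    assert abs(sum(NAI_WEIGHTS.values()) - 1.0) < 1e-9
    return sum(NAI_WEIGHTS[k] * components[k] for k in NAI_WEIGHTS)
```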
Data sources: Census ACS, Redfin, FRED, HUD, Census Building Permits Survey. All public. All updated regularly. The methodology has been stress-tested against historical data.
nimbledev.llc
This website. Next.js on Vercel. Includes the calculator, thought leadership articles, portfolio, and investor resources.
By The Numbers
| Platform | Metric | Count |
|----------|--------|------:|
| Operation Alpha | Python scripts | 43 |
| Operation Alpha | Scheduled jobs | 14 |
| Operation Alpha | GChat commands | 34 |
| Operation Alpha | Autonomous agents | 10 |
| Operation Alpha | MCP data servers | 7 |
| Operation Alpha | Persistent memories | 1,789 |
| Nimble Dashboard | React components | 39 |
| Nimble Dashboard | API endpoints | 76 |
| FluidCM | API routes | 617 |
| FluidCM | Backend tests | 50 |
| nimbledev.llc | Published articles | 10 |
6 platforms. 890+ tools and integrations. Auto-counted on March 03, 2026.
These numbers are generated by a script that scans every codebase. They update as we build. The total excludes the 1,700+ memory entries, which are knowledge, not tools — though they might be the most valuable asset in the entire stack.
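The counting itself is mundane: walk each codebase and tally artifacts by type. A toy version, assuming the real script also counts routes, commands, and agents with repo-specific rules:

```python
from pathlib import Path

def count_artifacts(paths, suffixes=(".py", ".tsx")):
    """Sketch of the stats script: tally source files by suffix across a
    list of file paths. The real script scans whole repos and also counts
    API routes, slash commands, and agents."""
    counts = {s: 0 for s in suffixes}
    for p in paths:
        suffix = Path(p).suffix
        if suffix in counts:
            counts[suffix] += 1
    return counts
```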
What We'd Tell Another Builder
If you're a small builder or developer thinking about incorporating AI into your operations, here's what we've learned:
-
Start with accounting. Not because it's exciting, but because financial clarity is the foundation for every other decision. If you can't see your cash position by property in real time, you're guessing.
-
Use public data first. Five of our nine attainability metrics come from free public sources. You don't need an MLS subscription to start making data-driven decisions.
-
Build the feedback loop early. The value isn't in any single tool — it's in the connections between them. A scored deal that flows into a budget that flows into a project that feeds back into your cost models. That loop is what compounds.
-
Document everything. Not for posterity — for your future self and your AI. Every correction, every lesson, every decision with reasoning. This is the raw material that makes AI actually useful over time.
-
Separate the "how" from the "what." Keep sensitive data (financials, deal terms, contacts) on local systems. Let cloud AI help you build frameworks and patterns. This isn't paranoia — it's good data governance.
-
Don't wait for perfect. Our first ERPNext setup had broken account trees, miscategorized transactions, and a dashboard that only showed two charts. We fixed it incrementally. The system that exists and improves beats the perfect system that never ships.
The Philosophy
We're not a tech company that happens to build houses. We're a building company that treats data as a competitive advantage.
Every project we complete feeds back into a system that gets smarter — cost models sharpen, market signals refine, estimation accuracy improves. A small team operating this way can make decisions with the speed and confidence of much larger organizations.
We share this openly because the attainability problem doesn't get solved by one company. It gets solved by hundreds of builders in hundreds of markets making better decisions about what to build and where. If our tools, our data, or our process helps even one other builder target the gap instead of the luxury segment, that's a win for the communities we all serve.
The Attainability Calculator is free. The data is public. The approach is documented here. What you do with it is up to you.
Donald Speedie is founder of Nimble Development LLC, an AI-first real estate development and construction company based in Northern California. Questions, feedback, or partnership inquiries: info@nimbledev.llc.