From 3 Months to 3 Days: The SynCORE Rebuild
March 2026 | 6 min read
We spent 3 months building an email triage system in Google Apps Script. Then we rebuilt it in 3 days. This is not a story about throwing away bad code. It is a story about what happens when methodology compounds faster than technology.
The Google Apps Script Era: October 2025 – January 2026
It started the way most internal tools start: a specific pain point and a fast prototyping environment. Emails were piling up across multiple inboxes. Quotes, purchase orders, payables, general inquiries — all hitting the same accounts with no reliable classification. Manual sorting was burning hours every week. So we built a system.
Google Apps Script was the natural choice. It lived inside the ecosystem. No infrastructure to provision. No deployment pipeline to build. Just open the script editor and start writing functions.
Over three months, the system grew to hundreds of GAS routines. We had AI — specifically Google's Gemini — writing code while humans debugged and refined it. The active triage codebase exceeded 95KB of production logic. Multiple versions shipped, each one patching edge cases the previous one missed.
And it worked. Six-folder classification. Keyword scoring with weighted confidence. Fuzzy sender matching that could resolve "Johnson Controls" from "JCI," "J.C.I.," and "johnson controls international" to the same entity. Priority routing based on deal value and response urgency.
It was, honestly, one of the best systems we ever built. The architecture was sound. The classification accuracy was above 90%. The problem was everything underneath it.
Why It Hit a Wall
Google Apps Script runs in a sandbox with hard constraints. Execution time caps at 6 minutes per run. API rate limits throttle Gmail access unpredictably. There is no local data persistence — every run starts cold, pulling state from spreadsheets or property stores that were never designed to be databases.
Worse, there is no cross-service orchestration. Want to classify an email, update a CRM, and log the action to a database? That is three separate execution contexts with three separate failure modes, chained together by triggers that fire on Google's schedule, not yours.
Cloud dependency meant cloud fragility. We were building a mission-critical system on a platform that treated us as a hobby user. Every scaling decision was a workaround. Every workaround added complexity. Every bit of complexity made the next feature harder to ship.
The Innovation Hiding Inside the Constraints
Here is what most people miss about that first build: the AI was not just writing the code. It was embedded in the workflow itself, solving fuzzy data problems that deterministic logic cannot touch.
Ambiguous sender with no match in the contacts table? AI classifies them. Inconsistent subject line that could be a quote request or a service inquiry? AI routes it. New vendor email format that no rule has ever seen? AI handles the first encounter, and the system learns the pattern for next time.
The novel insight was not "use AI for email." That is obvious. The insight was: use AI at the edges to create deterministic rules for the center.
AI handled the unknowns. Once an unknown became known, it graduated into a rule — fast, cheap, and predictable. The center of the system was pure logic: keyword scoring, sender fingerprinting, folder routing. AI only fired when the rules ran out. This kept costs low, latency down, and accuracy high.
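The "AI at the edges, rules at the center" dispatch can be sketched like this. The keyword weights, confidence threshold, and `classify_with_ai` stub are all illustrative assumptions, not the production values; the point is the control flow, where the model only fires after the deterministic layers run out.

```python
# Hypothetical weights and threshold, for illustration only.
KEYWORD_WEIGHTS = {
    "quotes": {"quote": 3.0, "pricing": 2.0, "estimate": 2.0},
    "purchase_orders": {"purchase order": 4.0, "po number": 3.0},
    "payables": {"invoice": 3.0, "remittance": 2.0},
}
CONFIDENCE_THRESHOLD = 3.0
learned_rules: dict[str, str] = {}  # sender -> folder, graduated from AI calls

def classify_with_ai(subject: str, sender: str) -> str:
    """Stand-in for the model call that only fires on true unknowns."""
    raise NotImplementedError("escalate to the AI edge here")

def classify(subject: str, sender: str) -> str:
    # 1. Deterministic center: rules graduated from earlier AI resolutions.
    if sender in learned_rules:
        return learned_rules[sender]
    # 2. Deterministic center: weighted keyword scoring.
    text = subject.lower()
    scores = {
        folder: sum(w for kw, w in kws.items() if kw in text)
        for folder, kws in KEYWORD_WEIGHTS.items()
    }
    folder, best = max(scores.items(), key=lambda kv: kv[1])
    if best >= CONFIDENCE_THRESHOLD:
        return folder
    # 3. AI edge: resolve the unknown, then graduate it into a rule
    # so the next message from this sender never reaches the model.
    folder = classify_with_ai(subject, sender)
    learned_rules[sender] = folder
    return folder
```

The graduation step in part 3 is what keeps cost and latency flat as volume grows: every AI resolution shrinks the set of messages that need AI at all.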
The 3-Day Rebuild
Same architectural vision. Same classification methodology. Same "AI at the edges, rules at the center" philosophy. Completely different execution.
Claude Code was the builder. The stack moved to Python, SQLite, and n8n running on a Mac mini. Local-first. No cloud dependency. No execution time limits. No rate throttling on our own inbox.
The rules engine came in at roughly 300 lines of Python. It handles 90% of incoming email — classification, routing, logging — without ever calling an AI model. Only the genuinely ambiguous messages, the true unknowns, get escalated to AI for classification. And when AI resolves an unknown, the resolution feeds back into the rules engine as a new deterministic pattern.
SQLite replaced spreadsheets as the state layer. WAL mode. Instant reads. Persistent sender fingerprints, routing history, and classification confidence scores that survive between runs and compound over time.
n8n replaced GAS triggers as the orchestration layer. One workflow polls the inbox, hands messages to the Python engine, and routes the results — all in a single execution context with full error handling. No trigger chains. No race conditions. No mystery failures at 3 AM.
The Lesson: Methodology Compounds
The rebuild was 30x faster. Not because the AI was smarter. Not because Python is inherently better than Apps Script. Not because we had more people or better hardware.
It was faster because we were smarter.
Three months of GAS development taught us the problem space at a molecular level. We knew which edge cases mattered and which ones were noise. We knew the classification hierarchy cold. We knew exactly where AI added value and where deterministic rules were faster, cheaper, and more reliable.
The first build was the learning. The second build was the application. The methodology — the architecture, the decision patterns, the failure modes — had already been paid for. All we needed was a faster execution engine to apply it. Claude Code was that engine.
What This Means for AI-Assisted Development
The popular narrative around AI development tools focuses on replacement. AI writes the code, the developer becomes obsolete, costs go to zero. That narrative misunderstands what actually happened here.
AI did not replace the developer. AI compounded the developer's learning. The human provided the direction — the architecture, the methodology, the judgment calls about what to automate and what to leave manual. The AI provided the speed — turning decisions into running code in minutes instead of days.
Each build makes the next build faster. Not linearly, but exponentially, because the methodology survives. Our Builder Orchestrator proves it with an 89% first-pass rate. The code is disposable. The architecture decisions, the edge case catalog, the understanding of where AI adds value versus where rules are sufficient: that knowledge transfers at zero cost to the next implementation.
Three months to three days is not the ceiling. It is the starting point. The system we built in January is already being extended with capabilities that would have been architectural fantasies during the GAS era. Local-first data persistence. Cross-service orchestration. AI-human feedback loops that make the rules engine smarter with every email it processes. The foundation was always the methodology. The AI just removed the bottleneck on applying it.
Read the full technical evolution of SynCORE, or talk to us about building AI-augmented operational systems.