# How I Built an Agent That Publishes 20 Content Pieces a Day (Without Touching Them)
A few weeks ago I asked myself an uncomfortable question:
*How much of my time as a developer-entrepreneur am I spending on tasks an agent could do better than me?*
The answer, for conversoriaecnae.es, was: a lot.
The tool covers 2,000+ CNAE and IAE codes that need content. Educational articles, guides, newsletters, social posts. Manual production would take years. So I built a system to do it autonomously.
Here's what I learned.
---
## The Real Problem: Not Generating Content — Controlling It
Anyone can loop Claude through article generation. The problem is that without guardrails, you end up with mediocre content, titles that get truncated in Google, articles that contradict what you published yesterday, or posts that say "this year 2024" when we're in 2026.
The challenge isn't generation. It's production reliability.
That's why what I built isn't "an agent that writes." It's an orchestration of 6 specialized agents coordinated by Vercel cron jobs, each with a clear and bounded responsibility.
---
## The Architecture: 6 Agents, 5 Daily Pipelines
The system runs 5 times daily. Here's what happens each run:
1. Filter Agent — Monitors Google Alerts via Gmail, scores each source 0-100 for CNAE/IAE relevance, and queues the best ones. Uses Claude Haiku 4.5 (not Sonnet) because it processes in batches of 10 and volume is high. Haiku is more than enough for structured scoring.
2. Planner Agent — Selects the 4 optimal daily topics: 2 educational codes and 2 news articles. It does this based on keyword opportunity scores, the editorial calendar, and coverage gaps. The logic is simple: prioritize high-search-volume codes that don't yet have content.
3. Educational Agent — Generates full 800-1200 word guides for each CNAE or IAE code. Here's where it gets interesting.
4. Publisher Agent — Converts markdown to Sanity Portable Text. Not HTML, not plain text. Native Sanity format with proper blocks, extracted FAQs, internal links to related codes, and Unsplash image asset management.
5. Newsletter Agent — Generates weekly summaries with A/B testing for subject lines built in.
6. Statistics Agent — Weekly pipeline with charts via QuickChart.io, Puppeteer-generated infographics, and distribution across 6 platforms: LinkedIn, Twitter/X, Facebook, Threads, Instagram, Reddit.
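To make the hand-off between the first two agents concrete, here's a minimal sketch of one pipeline run. The function names (`filterSources`, `planTopics`, `runPipeline`) and the scoring logic are illustrative stand-ins, not the repo's actual code:

```typescript
type Topic = { code: string; score: number };

// Stand-in for the Filter Agent: assigns a 0-100 relevance score per source.
// (In the real system this is a batched Claude Haiku call, not arithmetic.)
async function filterSources(raw: string[]): Promise<Topic[]> {
  return raw.map((code, i) => ({ code, score: 100 - i * 10 }));
}

// Stand-in for the Planner Agent: keep the 4 best-scoring topics.
async function planTopics(candidates: Topic[]): Promise<Topic[]> {
  return [...candidates].sort((a, b) => b.score - a.score).slice(0, 4);
}

// One pipeline run: filter, then plan. Generation and publishing follow.
async function runPipeline(raw: string[]): Promise<Topic[]> {
  const candidates = await filterSources(raw);
  return planTopics(candidates);
}
```

The point of the shape is the boundary: each agent is a bounded async step with a typed input and output, so any one of them can be swapped or retried without touching the others.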
---
## The Decision That Changed Everything: Auto-Publish with Quality Score
This was the most important decision in the project, and the one that took me longest to commit to.
The educational agent scores every article against an 8-point checklist before publishing. If the score is ≥80, it publishes automatically. No manual review.
Why does this work? Because the scoring is deterministic, not subjective. The checklist evaluates specific, verifiable things:
- Title between 40-70 characters (SERP truncation)
- Meta description between 50-60 words
- Description between 140-160 characters
- FAQ section present
- Internal links to related codes
- Cover image with proper attribution
- No outdated temporal references
- Correct header structure
Nothing like "is this good content?" Objective metrics.
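A deterministic gate like this is just a list of pure predicates. Here's a minimal sketch; the specific checks, weights, and the `Article` shape are illustrative, not the repo's actual checklist:

```typescript
interface Article {
  title: string;
  description: string;
  body: string;
  hasFaq: boolean;
  internalLinks: number;
}

// Each check is a pure, verifiable predicate — no "is this good?" judgment.
const checks: Array<(a: Article) => boolean> = [
  a => a.title.length >= 40 && a.title.length <= 70,
  a => a.description.length >= 140 && a.description.length <= 160,
  a => a.hasFaq,
  a => a.internalLinks >= 2,
  a => !/\b(2024|2025)\b/.test(a.body), // illustrative stale-year check
];

// Score is simply the pass rate scaled to 100 — fully reproducible.
function qualityScore(a: Article): number {
  const passed = checks.filter(check => check(a)).length;
  return Math.round((passed / checks.length) * 100);
}

const shouldAutoPublish = (a: Article): boolean => qualityScore(a) >= 80;
```

Because the score is a pure function of the article, the same draft always gets the same number, which is what makes an unattended publish gate defensible.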
The result: most articles pass the threshold on the first try. The ones that don't sit in a queue for manual review. In practice, I almost never have to review anything.
---
## The Technical Problem That Almost Broke Me: Vercel and the 300 Seconds
Vercel has a 300-second limit on serverless functions. With 4 topics per pipeline and each one requiring multiple steps (generate, score, convert to Portable Text, publish to Sanity, distribute across social), sequential processing was impossible.
The fix: parallel batch processing with 2 concurrent batches of 10 items.
The commit that solved it is [f82f3b2](https://github.com/brianMena/conversor-iae-cnae-v3), from February 15, 2026. It took longer than I'd like to get there, but after the fix the full pipeline completes within the limit.
The key was processing all 4 topics in parallel instead of sequentially, with unique correlation IDs per topic for the audit log. If one topic fails, it doesn't block the rest.
```typescript
// Before: sequential, guaranteed timeout
for (const topic of topics) {
  await processTopic(topic);
}

// After: parallel, within Vercel limits
await Promise.all(
  topics.map(topic =>
    processTopic(topic).then(result =>
      logAudit(topic.correlation_id, result)
    )
  )
);
```
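For the larger workloads (the filter agent's batches, not just the 4 topics), unbounded `Promise.all` would hammer the API, so concurrency has to be capped. Here's a generic sketch of chunked processing with a concurrency limit; `processInBatches` and its parameters are my illustration, not the repo's code:

```typescript
// Process `items` in chunks of `size`, with at most `concurrency`
// chunks in flight at once. Results come back in input order.
async function processInBatches<T, R>(
  items: T[],
  size: number,
  concurrency: number,
  fn: (item: T) => Promise<R>
): Promise<R[]> {
  // Split the input into fixed-size chunks.
  const chunks: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }

  const results: R[] = [];
  // Run `concurrency` chunks per wave; items within a chunk run in parallel.
  for (let i = 0; i < chunks.length; i += concurrency) {
    const wave = chunks.slice(i, i + concurrency);
    const waveResults = await Promise.all(
      wave.map(chunk => Promise.all(chunk.map(fn)))
    );
    results.push(...waveResults.flat());
  }
  return results;
}
```

With `size = 10` and `concurrency = 2`, this matches the "2 concurrent batches of 10 items" shape: fast enough to beat the 300-second ceiling, bounded enough not to trip rate limits.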
---
## The Real Savings: Prompt Caching
Without getting into specific figures, the system runs 5 times a day with 6 agents. Token costs without optimization would be unsustainable.
The solution: prompt caching on all agents with 4096+ token system prompts.
The educational agent's system prompt is long: it includes full CNAE/IAE taxonomy context, formatting rules, SEO guidelines, examples of well-scored articles, and the Spanish fiscal calendar for content contextualization. Caching that block saves roughly 80% of input tokens on repeated calls.
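In the Anthropic Messages API, caching is opted into per block with a `cache_control: { type: "ephemeral" }` marker on the static system prompt. Here's a sketch of the request shape; the model id string and `buildCachedRequest` helper are my assumptions, not the repo's code:

```typescript
// A system prompt block in the Anthropic Messages API. The cache_control
// marker tells the API to cache everything up to and including this block,
// so repeated calls bill the long static prompt at the cached-input rate.
interface SystemBlock {
  type: "text";
  text: string;
  cache_control?: { type: "ephemeral" };
}

function buildCachedRequest(systemPrompt: string, userMessage: string) {
  return {
    model: "claude-haiku-4-5", // assumed id for the model named in this post
    max_tokens: 2048,
    system: [
      // Static block: taxonomy, SEO rules, examples — identical every call,
      // which is exactly what makes it cacheable.
      { type: "text", text: systemPrompt, cache_control: { type: "ephemeral" } },
    ] as SystemBlock[],
    // Only the per-article user message varies between calls.
    messages: [{ role: "user" as const, content: userMessage }],
  };
}
```

The key constraint: the cached prefix must be byte-identical across calls, which is why dynamic content (like the temporal context below in this post) belongs in the user message, after the cache marker, not inside the cached block.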
For a system making hundreds of daily calls, that's the difference between profitable and not.
---
## The Part Nobody Implements: Temporal Context
This was the last important commit before writing this post ([496b64e](https://github.com/brianMena/conversor-iae-cnae-v3), February 16, 2026): dynamic temporal context injection into all agents.
The problem: LLMs tend to write as if time were frozen. Without explicit context, an agent might mention last year's dates, ignore the current fiscal quarter, or write about the "upcoming" modelo 303 deadline that already passed.
The fix is simple but critical: every call to every agent includes a dynamically built temporal context block:
```typescript
function buildTemporalContext(): string {
  const now = new Date();
  const quarter = Math.ceil((now.getMonth() + 1) / 3);

  // Everything is derived from `now` — hardcoding any value here
  // would reintroduce the exact drift this block exists to prevent.
  return `
TEMPORAL CONTEXT (use this, not your training data):
- Current date: ${now.toISOString().split('T')[0]}
- Fiscal year: ${now.getFullYear()}
- Quarter: Q${quarter}
- Season: ${getSeason(now)}
- Upcoming relevant fiscal dates: ${getUpcomingFiscalDates()}
`;
}
```
Since that commit: zero outdated references in production.
---
## What to Build If You Want to Replicate This
What I have now wasn't built all at once. I added it in layers:
Layer 1 — filtering + planning: Define what to publish before thinking about how to generate it. A planner that picks bad topics makes everything else noise.
Layer 2 — generation + quality: Implement quality scoring from day one. Not as an extra feature — as the publish gate. This is what separates a toy system from a production one.
Layer 3 — distribution: Platform-specific agents, not a generic agent that adapts. LinkedIn and Twitter have such different dynamics that a single agent will always underperform on one of them.
Layer 4 — temporal context: Inject it into every call. Don't assume it. Models don't know what day it is.
---
## The Current Result
The agent autonomously publishes educational articles on CNAE and IAE codes, distributes across 6 platforms, generates weekly newsletters with A/B-tested subject lines, and produces statistical reports with infographics.
I'm still in the loop for strategic decisions and reviewing articles that don't pass the quality threshold. But the daily content flow no longer depends on my time.
That's what real leverage with code looks like.
---
Building something similar? Tell me which part of the stack is giving you the most trouble.
