I Automated My Social Media Content Pipeline with n8n and LLMs
I've always been bad at consistently posting on social media. Not because I don't have things to say, but because the process of finding something interesting, writing a thoughtful post about it, and scheduling it - multiplied by every platform - is the kind of repetitive work that makes my eyes glaze over.
So I automated it. Here's how.
The idea
The pipeline is simple in concept: fetch interesting articles from RSS feeds, feed them to an LLM with some instructions, get back a well-written social media post, and schedule it. The whole thing runs every morning at 8am without me touching anything.
In practice, getting this to actually produce good content (not just content) took more iteration than I expected.
The n8n workflow
The workflow has a few stages:
Schedule Trigger → RSS Fetch → Filter → LLM Generation → Scheduling
The trigger fires daily. An HTTP Request node grabs the latest articles from a few RSS feeds I follow (tech blogs, HN, specific topics I care about). A Code node filters them - I wrote some basic relevance scoring based on keywords so it doesn't try to turn every random article into a LinkedIn post.
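A minimal sketch of what that filter Code node could look like, assuming each item arrives with a title and contentSnippet field from the RSS step. The keyword list and threshold are placeholders, not the ones I actually use.

```typescript
// n8n Code node (mode: Run Once for All Items) - illustrative sketch.
// Assumes each incoming item has json.title and json.contentSnippet from the RSS fetch.
const KEYWORDS = ['llm', 'automation', 'self-hosted', 'n8n']; // placeholder topics
const MIN_SCORE = 2; // placeholder threshold

const scored = [];
for (const item of $input.all()) {
  const title = String(item.json.title ?? '').toLowerCase();
  const text = `${title} ${item.json.contentSnippet ?? ''}`.toLowerCase();

  // Count keyword hits; a hit in the title counts double.
  let score = 0;
  for (const kw of KEYWORDS) {
    if (text.includes(kw)) score += 1;
    if (title.includes(kw)) score += 1;
  }

  if (score >= MIN_SCORE) {
    scored.push({ json: { ...item.json, relevanceScore: score } });
  }
}

// Highest-scoring articles first; downstream nodes only see what survives the filter.
return scored.sort((a, b) => b.json.relevanceScore - a.json.relevanceScore);
```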
The interesting part is the LLM node. I send the article summaries to the API with a carefully crafted prompt. And by "carefully crafted" I mean I rewrote it about fifteen times before the output stopped sounding like a corporate press release.
The key was being specific about tone. My prompt basically says: "Write this like a real person who has opinions, not like a brand account. Start with something that makes people stop scrolling. Keep it under 200 words. No cringe motivational energy."
It took a while, but the output now sounds close enough to how I actually write that I'm only mildly uncomfortable about it.
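The post doesn't name a provider, so here's a hedged sketch of how the prompt could be wired up: a Code node that builds an OpenAI-compatible chat completions request body for a downstream HTTP Request node. The model name is a placeholder; the system prompt lines are the tone rules described above.

```typescript
// Builds the request body for an OpenAI-compatible chat completions endpoint.
// Provider and model are assumptions, not from the post.
const article = $input.first().json;

const systemPrompt = [
  'Write this like a real person who has opinions, not like a brand account.',
  'Start with something that makes people stop scrolling.',
  'Keep it under 200 words.',
  'No cringe motivational energy.',
].join(' ');

const body = {
  model: 'gpt-4o-mini', // placeholder model name
  messages: [
    { role: 'system', content: systemPrompt },
    {
      role: 'user',
      content: `Write a social media post about this article:\n\nTitle: ${article.title}\nSummary: ${article.contentSnippet}`,
    },
  ],
  temperature: 0.7,
};

// The next node (HTTP Request) reads this and makes the actual API call.
return [{ json: { requestBody: body } }];
```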
The parts that were tricky
Filtering matters more than I expected. Without it, the LLM happily generates posts about articles that are irrelevant, niche, or boring. Garbage in, garbage out. I've spent more time tuning the filter logic than any other part of the workflow.
LLM output is inconsistent. Sometimes the post is great. Sometimes it's generic. I added a second LLM call that acts as a quality check - basically asking it "would you engage with this post? Yes or no." If it says no, the post gets dropped. It's crude, but it cuts the worst output.
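A sketch of that gate as a Code node, assuming the previous node's yes/no reply lands in json.verdict next to the draft post in json.post (both field names are mine, not from the workflow).

```typescript
// Quality-gate sketch: the previous node asked the model
// "Would you engage with this post? Yes or no." and returned its answer as json.verdict.
const kept = [];
for (const item of $input.all()) {
  const verdict = String(item.json.verdict ?? '').trim().toLowerCase();

  // Anything other than a clear "yes" drops the draft on the floor.
  if (verdict.startsWith('yes')) {
    kept.push({ json: { post: item.json.post } });
  }
}

// Returning an empty array ends the branch, so rejected posts never reach scheduling.
return kept;
```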
Scheduling across platforms is annoying. Each platform has its own API, character limits, and formatting quirks. I ended up using Buffer's API as a single endpoint, which simplified things a lot.
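For reference, scheduling through Buffer's classic v1 API looks roughly like the sketch below. The endpoint and parameter names come from the older public API, so verify them against Buffer's current docs; in the workflow itself this is an HTTP Request node rather than code.

```typescript
// Sketch: queue a post via Buffer's classic v1 API (POST /1/updates/create.json).
// Endpoint, parameters, and auth scheme are assumptions based on the old public docs.
async function scheduleWithBuffer(text: string, profileIds: string[], accessToken: string) {
  const params = new URLSearchParams();
  params.set('text', text);
  for (const id of profileIds) params.append('profile_ids[]', id);
  params.set('shorten', 'false');

  const res = await fetch('https://api.bufferapp.com/1/updates/create.json', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${accessToken}`,
      'Content-Type': 'application/x-www-form-urlencoded',
    },
    body: params,
  });

  if (!res.ok) throw new Error(`Buffer API returned ${res.status}`);
  return res.json(); // the queued update(s), one per profile
}
```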
Was it worth it?
Honestly, yes. I went from posting once a month (when I remembered) to having consistent, relevant content going out daily. It's not perfect - I still manually post things I really care about - but for the steady drip of "hey, this article is interesting, here's my take," it works great.
The whole thing runs on my self-hosted n8n instance, costs me nothing beyond the LLM API calls (which are pennies per post), and saves me maybe 3-4 hours a week. That's a good trade.