How I Helped Ship a Reimbursement API to Land Google
There's a specific kind of anxiety that comes with knowing Google is the target customer for the thing you're building. It's not the normal "oh no, is this code good enough" anxiety - it's more like a low hum in the background of every standup, every PR review, every deployment. At Fidel API, we had one of those goals that sounds simple in a meeting and then haunts you for months: build a Reimbursement API for the US market. Financial APIs in the US are a different beast entirely. Compliance is strict, uptime expectations are brutal, and your "move fast and break things" instincts need to be surgically removed.

I spent the first few weeks doing something that doesn't look like engineering at all - sitting with the product team, mapping out the domain, arguing (politely, mostly) about what words like "transaction" and "settlement" actually mean in three different contexts until everyone was slightly annoyed but aligned. I leaned heavily into Domain-Driven Design during this phase. Not the kind where you draw diagrams nobody reads, but the kind where you force yourself to understand the business deeply enough that every abstraction in your code maps to something real. That work felt slow and frustrating at the time. In hindsight, it saved us from building the wrong thing and having to untangle it under deadline pressure, which would have been infinitely worse.
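To make that concrete, here's a minimal sketch of what "every abstraction maps to something real" looked like in practice. Every name in it is a hypothetical reconstruction - this is not Fidel API's actual schema - but it shows the payoff of those vocabulary arguments: an authorized transaction and a settled transaction become distinct types with distinct invariants, and a reimbursement can only be created from the latter.

```typescript
import { randomUUID } from "node:crypto";

// Hypothetical domain model. None of these types are the real Fidel API
// schema; they illustrate the DDD point that "transaction" and
// "settlement" are different concepts with different guarantees.

// An authorization seen on the card network: the amount may still change.
interface AuthorizedTransaction {
  kind: "authorized";
  transactionId: string;
  cardId: string;
  amount: number; // minor units (cents) to avoid float rounding
  currency: "USD";
  authorizedAt: Date;
}

// The cleared, final version. Only this one is eligible for
// reimbursement, because the amount can no longer change.
interface SettledTransaction {
  kind: "settled";
  transactionId: string;
  cardId: string;
  amount: number;
  currency: "USD";
  settledAt: Date;
}

type Transaction = AuthorizedTransaction | SettledTransaction;

interface Reimbursement {
  reimbursementId: string;
  sourceTransactionId: string;
  amount: number;
  currency: "USD";
  createdAt: Date;
}

// Accepting only SettledTransaction encodes the business rule in the
// signature: "reimbursed an amount that later changed" can't compile.
function createReimbursement(tx: SettledTransaction, amount: number): Reimbursement {
  if (amount <= 0 || amount > tx.amount) {
    throw new RangeError("reimbursement must be positive and at most the settled amount");
  }
  return {
    reimbursementId: randomUUID(),
    sourceTransactionId: tx.transactionId,
    amount,
    currency: tx.currency,
    createdAt: new Date(),
  };
}
```

The compiler rejecting `createReimbursement(authorizedTx, ...)` outright is exactly what those slightly annoying meetings buy you: the type system ends up policing the same distinctions the product team agreed on.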
There was a stretch - maybe two or three weeks - where we were deploying critical features under very tight deadlines, and that's when the anxiety turned into something more acute. The kind of tight where if something goes wrong, it's visible to a real customer. Not "visible" in the abstract sense - visible in the "someone from Google's team would notice" sense. I ran load tests at weird hours. I simulated failure scenarios that felt paranoid but turned out to be exactly the kind of thing that happens at 3am on a Tuesday. I remember sitting in front of the monitoring dashboard during one particularly nerve-wracking deploy, watching metrics tick by, waiting for something to spike. Nothing did. We hit zero customer-impacting outages during those windows, and the feeling afterward wasn't celebration - it was relief. The kind where you close your laptop and stare at the wall for a minute.
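For flavor, those off-hours load tests were conceptually as simple as the script below. This is a hypothetical sketch, not our actual harness - the endpoint, payload, worker counts, and threshold are all made up - but the shape is the same: fire concurrent requests, collect latencies, and fail loudly if the tail gets slow.

```typescript
// Hypothetical load-test sketch (Node 18+, global fetch): hammer one
// endpoint with N concurrent workers and report p50/p99 latency.
const TARGET = "https://api.example.com/v1/reimbursements"; // made-up endpoint
const WORKERS = 50;
const REQUESTS_PER_WORKER = 200;

async function worker(latencies: number[]): Promise<void> {
  for (let i = 0; i < REQUESTS_PER_WORKER; i++) {
    const start = performance.now();
    const res = await fetch(TARGET, {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ transactionId: "tx_test", amount: 500 }),
    });
    latencies.push(performance.now() - start);
    // Any non-2xx response rejects the whole run - loud by design.
    if (!res.ok) throw new Error(`unexpected status ${res.status}`);
  }
}

async function main(): Promise<void> {
  const latencies: number[] = [];
  await Promise.all(Array.from({ length: WORKERS }, () => worker(latencies)));
  latencies.sort((a, b) => a - b);
  const p = (q: number) => latencies[Math.floor(q * (latencies.length - 1))];
  console.log(`requests=${latencies.length} p50=${p(0.5).toFixed(1)}ms p99=${p(0.99).toFixed(1)}ms`);
  // Treat a slow tail as a failure, not a warning.
  if (p(0.99) > 1000) process.exit(1);
}

main();
```

Aborting on the first bad response is deliberate: when you're rehearsing a launch, a quiet partial failure teaches you nothing, and a loud one at 2am teaches you everything.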
The thing I actually took away from this project isn't about event-driven architectures or clean service boundaries, even though I learned a ton about both. The real lesson was about the gap between "building something that works" and "building the right thing at the right time and communicating clearly enough that everyone stays aligned." That gap is where most projects fail, and it has almost nothing to do with code quality. It's about whether the PM understands the engineering constraints, whether you understand the business urgency, whether everyone is honest about what's actually possible in the time available. I got much better at that kind of communication during this project, partly because I had to, and partly because watching what happens when communication breaks down on a high-stakes project is educational in a way that's hard to forget. Launching a product for an enterprise client, it turns out, is about 30% engineering and 70% making sure nothing is on fire at any given moment. Very glamorous.

After the launch stabilized, I moved to the Web3 team to build a loyalty commerce experience using decentralized tech - a completely different universe, technically and culturally. But the core lesson transferred perfectly: understand the domain before you write a line of code, communicate relentlessly, and don't confuse being busy with making progress.