The greatest risk in building an MVP isn't building the wrong product. It's building the right product in a way that makes every subsequent decision more expensive, every feature slower to ship, and every team expansion more painful.
I've reviewed the codebases of dozens of early-stage companies. The pattern is consistent: talented engineers, moving fast, making decisions that are technically defensible in isolation but collectively create a system that's nearly impossible to evolve. By the time they realise this, they've already raised money and committed to growth.
Here are the seven technical decisions I see most often — and the ones that cost companies the most.
A word on context
Not all of these decisions are wrong in all contexts. Some are entirely appropriate for a specific stage or team. The problem is always the same: making these decisions by default, without thinking through the implications, rather than as a deliberate trade-off.
Mistake 1: Optimising the MVP for the Wrong Thing
The mandate of an MVP is to validate assumptions at minimal cost. Every technical decision should be evaluated against that mandate.
The most common failure: engineers optimise for technical elegance rather than validation speed. They spend three weeks building a perfect abstraction layer for a database they might replace. They implement a generic plugin architecture for a feature that has one use case. They argue about event sourcing vs. CRUD for a product that hasn't shipped yet.
What to do instead: Define what you're trying to validate before writing a line of code. List your top three assumptions about the product, the user, and the market. Then ask: "What's the minimum technical investment that lets us test these assumptions?" That's your MVP's technical brief.
The validation-first checklist
Before starting any technical task, ask:
- What assumption does this validate?
- What's the cheapest way to test the same assumption?
- What happens if this assumption is wrong — can we pivot without throwing this away?
Mistake 2: Starting With Microservices
This is the most predictable mistake I see from engineers who have worked at large tech companies. They've experienced the benefits of microservices at scale — independent deployment, technology flexibility, team autonomy — and they import the architecture to a team of three.
The problem: microservices require significant infrastructure investment (service mesh, distributed tracing, service discovery, per-service CI/CD pipelines) and they distribute what is fundamentally a local coordination problem across the network. The debugging experience degrades dramatically. Onboarding new engineers takes longer. Every feature that crosses service boundaries requires coordination.
For a team of fewer than 10 engineers, a well-structured monolith will almost always deliver features faster, be easier to understand, and be simpler to operate than a microservices architecture.
What to do instead: Start with a modular monolith. Structure the codebase in clear, well-bounded modules (payments, users, notifications, etc.) with explicit interfaces between them. This gives you most of the architectural clarity of microservices with none of the operational overhead. When a specific module actually needs to scale independently or be deployed separately — and you'll know when, because you'll have production data — extract it then.
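To make the idea concrete, here's a minimal sketch of what "explicit interfaces between modules" can look like in Python. The layout, module names, and functions are invented for illustration; the point is that other modules only ever import the public interface, so extracting a module into a service later means swapping one import, not untangling the codebase:

```python
# Hypothetical layout for a modular monolith:
#
#   app/
#     payments/       -> api.py is the only file other modules may import
#     users/
#     notifications/

from dataclasses import dataclass


# payments/api.py -- the public interface of the payments module.
@dataclass
class ChargeResult:
    ok: bool
    reference: str


def charge(user_id: str, amount_cents: int) -> ChargeResult:
    """Public entry point. Other modules call this, never the internals."""
    return _charge_via_provider(user_id, amount_cents)


# payments/_internal.py -- implementation detail, free to change at any time.
def _charge_via_provider(user_id: str, amount_cents: int) -> ChargeResult:
    # Real code would call a payment provider here.
    return ChargeResult(ok=True, reference=f"ch_{user_id}_{amount_cents}")


# users/signup.py -- a caller in another module touches only the interface.
result = charge("u_42", 1999)
```

If `payments` one day needs to be its own service, `charge` becomes an HTTP or RPC client with the same signature, and its callers don't change.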
The extraction timeline
Most early-stage products reach a genuine need to extract microservices around the 15–30 engineer mark, when team coordination overhead becomes the bottleneck. Until then, the complexity cost exceeds the benefit.
Mistake 3: No Environment Separation
"We'll add staging later" is a commitment that is never honoured. Later becomes months, then a year, then never — because by the time you've shipped and grown, the cost of adding environment separation has multiplied.
The consequence: developers test in production. A bad deploy brings down the live product. A database migration that should have been tested runs against production data. A customer sees a half-built feature.
What to do instead: Set up three environments from day one: development (local), staging (cloud, production-mirror), and production. Keep them as identical as possible. This costs a day of setup time; failing to do it costs days of incident investigation and customer trust.
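One lightweight way to keep the three environments honest is to drive all environment-specific configuration through a single switch, so staging and production differ only in values, never in code paths. A sketch (the `APP_ENV` variable name and the settings fields are assumptions for the example):

```python
import os
from dataclasses import dataclass


@dataclass(frozen=True)
class Settings:
    env: str
    database_url: str
    debug: bool


def load_settings() -> Settings:
    """Build settings from the APP_ENV switch; unknown values fail fast."""
    env = os.environ.get("APP_ENV", "development")
    if env not in {"development", "staging", "production"}:
        raise ValueError(f"Unknown APP_ENV: {env!r}")
    return Settings(
        env=env,
        # Same key in every environment; only the injected value differs.
        database_url=os.environ.get("DATABASE_URL", "sqlite:///dev.db"),
        debug=(env == "development"),
    )


# Demonstration: pretend this process was deployed to staging.
os.environ["APP_ENV"] = "staging"
settings = load_settings()
```

Because every environment passes through the same `load_settings`, "works on staging" actually predicts "works in production".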
Mistake 4: Hardcoded Configuration and Credentials
Credentials in source code. Database connection strings in .env files checked into git. API keys passed as hardcoded strings.
This is embarrassingly common, and the consequences are severe: one accidental public GitHub commit, one compromised developer laptop, or one disgruntled employee can expose every service the company depends on.
What to do instead: From day one, use a secrets manager. Azure Key Vault, AWS Secrets Manager, and HashiCorp Vault are all inexpensive at MVP scale, and self-hosted Vault is free. For environment configuration, use environment variables injected at deployment time, never stored in the codebase. Audit your git history for accidentally committed credentials and rotate anything you find.
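The code-level discipline is small: read every secret from the injected environment and fail loudly if it's missing, so a misconfigured deployment surfaces at startup rather than mid-request. A sketch (the `STRIPE_API_KEY` name and placeholder value are illustrative, not real credentials):

```python
import os


def require_secret(name: str) -> str:
    """Read a secret injected at deploy time; fail loudly if it's missing.

    In production the value comes from a secrets manager integration that
    injects it into the process environment -- never from a file committed
    to the repository.
    """
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Missing required secret: {name}")
    return value


# Demonstration with a stand-in value.
os.environ["STRIPE_API_KEY"] = "sk_test_placeholder"
api_key = require_secret("STRIPE_API_KEY")
```

The fail-fast behaviour matters as much as the lookup: a service that boots without its secrets will fail later, in production, in a far more confusing way.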
Mistake 5: No Observability
"We'll add monitoring later" is nearly as common as the environment problem — and nearly as damaging. The result: when something breaks in production (and it will), you have no idea what happened, how long it's been broken, or which of your users it's affecting.
What to do instead: Implement the three fundamentals before your first production deployment:
- Structured logging — JSON-formatted logs with consistent fields (request ID, user ID, service name, timestamp). These are queryable; unstructured text logs are not.
- Error alerting — A Slack or email notification when your application throws an unhandled exception. This costs 30 minutes to set up and tells you immediately when something is broken.
- Uptime monitoring — An external health check (Azure Monitor, Better Uptime, or even UptimeRobot's free tier) that confirms your application is reachable. You should know your service is down before your customers do.
This foundation takes half a day to implement. The absence of it makes every production incident dramatically more expensive.
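Structured logging in particular needs almost no machinery. Here's a minimal sketch using Python's standard `logging` module (the service name and field set are assumptions; adapt them to your stack):

```python
import json
import logging
import sys
import uuid


class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log line, with consistent queryable fields."""

    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "service": "api",  # illustrative service name
            "request_id": getattr(record, "request_id", None),
            "message": record.getMessage(),
        })


handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("app")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Attach the per-request ID via `extra` so every line is correlatable.
logger.info("user signed up", extra={"request_id": str(uuid.uuid4())})
```

Every log aggregator can filter and group on these fields; none of them can usefully query free-form text.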
Mistake 6: No CI/CD From the Start
"We deploy manually from laptops" is a description I hear from companies about to raise a Series A. It means: deployments are inconsistent, error-prone, and depend on one person's local environment. One new hire joins, can't deploy, and the problem becomes critical.
What to do instead: Set up a basic CI/CD pipeline before your second engineer joins. For most MVPs, this is:
- GitHub Actions triggered on merge to main
- Run tests (even if there are only a few)
- Build and push Docker image
- Deploy to staging automatically
- Promote to production with one click (or automatically, if you have confidence in your tests)
This takes one day to set up properly and pays dividends for the entire life of the company.
Mistake 7: The Wrong Technology Bet
This is the most nuanced mistake, and the most expensive when it occurs: choosing a technology because it's new and interesting, because the lead engineer happens to know it, or because a FAANG company published a blog post about it.
The right technology for an MVP is almost always boring, well-understood, and well-documented. Boring technologies have:
- Large communities with StackOverflow answers for every problem
- Mature ORMs, frameworks, and libraries for common use cases
- Engineers who know them, which makes hiring easier
- Years of production battle-testing
Novel technology choices have the opposite properties. They also compound: if you choose three novel technologies and each has a 20% chance of hitting a showstopper problem, you have a 49% chance of at least one showstopper.
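The compounding arithmetic is worth making explicit. Assuming the risks are independent, the chance that at least one of n novel choices hits a showstopper is 1 − (1 − p)ⁿ:

```python
def chance_of_any_showstopper(p_each: float, n_choices: int) -> float:
    """P(at least one failure) across n independent risks of probability p_each."""
    return 1 - (1 - p_each) ** n_choices


# Three novel technologies, each with a 20% chance of a showstopper:
risk = chance_of_any_showstopper(0.20, 3)  # 1 - 0.8**3 = 0.488, i.e. ~49%
```

Two novel choices instead of three drops the figure to 36%; one drops it to 20%. The curve is an argument for spending your novelty budget sparingly.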
The criteria for an MVP technology stack
Choose technologies where:
- You can hire engineers who know them without paying a premium
- The community is large enough that most problems have documented solutions
- The technology has a clear long-term support trajectory
- At least one team member is genuinely expert in it today
Innovation should be in your product, not your infrastructure.
The One Decision That Supersedes All Others
Every mistake on this list has the same root cause: optimising for short-term development speed at the cost of medium-term evolvability.
The best MVPs are built with enough care that they can be evolved — not enough care that they're perfect. The goal is not clean code; it's learning. But you need to be able to implement what you learn. If every new feature requires untangling the mess left by the last sprint, your iteration cycle slows, your team burns out, and your product stagnates.
Spend the extra day setting up environments properly. Spend the extra hour wiring up error alerting. Spend the extra sprint making the monolith modular. These investments compound for years.
MVP architecture and product strategy are two of my core service areas. If you're building your first product and want an experienced technical perspective before you commit to an approach, let's talk.