Readiness Scoring
"Production-ready" means different things in different contexts. Cloud-ready means something specific. Library-ready means something different. Service mesh-ready means something else entirely.
MeshOS gives you three independent, objective scores — one for each context. Each score comes with the specific factors that drove it up or down, and a prioritized list of what to fix. No guessing, no subjective judgment. Just a clear picture of where each application stands and what it needs.
Cloud Readiness
Before you deploy to AWS, Azure, or GCP, you need to know your application is actually ready for the cloud — not just that it runs in a container. Cloud Readiness measures this against real criteria: the 12-Factor App methodology as a baseline, extended with container-specific and cloud-native requirements.
Score factors and weights:
| Factor | Weight | What's Checked |
|--------|--------|----------------|
| Containerization | 20% | Dockerfile present, multi-stage build, non-root user |
| 12-Factor compliance | 20% | Config from env vars, stateless processes, port binding |
| Observability | 15% | Health endpoints, structured logging, metrics exposure |
| Statelessness | 15% | No in-memory session state, external state stores |
| Infrastructure as Code | 10% | Terraform, CloudFormation, or Pulumi configs present |
| Dependency management | 10% | Pinned versions, lock files present |
| Secret management | 10% | No hardcoded secrets, env-based config |
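As a rough sketch of how factor weights combine into a single score, here is one possible weighting function. The factor keys and the exact aggregation are assumptions for illustration; only the weights come from the table above.

```typescript
// Hypothetical sketch: combining per-factor scores (0-100) into a single
// weighted Cloud Readiness score, using the weights from the table above.
type FactorScores = Record<string, number>;

const CLOUD_WEIGHTS: Record<string, number> = {
  containerization: 0.20,
  twelveFactor: 0.20,
  observability: 0.15,
  statelessness: 0.15,
  infraAsCode: 0.10,
  dependencyManagement: 0.10,
  secretManagement: 0.10,
};

function weightedScore(factors: FactorScores, weights: Record<string, number>): number {
  let total = 0;
  for (const [factor, weight] of Object.entries(weights)) {
    // A missing factor scores zero, dragging the total down.
    total += (factors[factor] ?? 0) * weight;
  }
  return Math.round(total);
}
```

With this shape, a single weak factor has a bounded, predictable impact: scoring 50 on containerization while everything else is perfect costs exactly 10 points.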
Common score killers:
- Hardcoded configuration values → move to environment variables
- Session state in application memory → use Redis or equivalent
- Missing health check endpoint (`/health` or `/healthz`)
- Dockerfile running as root
- Missing lock file (`package-lock.json`, `poetry.lock`, etc.)
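The first two fixes above can be sketched in a few lines. This is a minimal illustration assuming a Node.js service; the `requireEnv` helper and the route layout are hypothetical, not part of MeshOS.

```typescript
// Sketch of two common fixes: configuration read from environment
// variables instead of hardcoded values, and a /healthz endpoint the
// platform can probe.
import { createServer } from "node:http";

// Hypothetical helper: fail fast when a required variable is missing.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (value === undefined) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

const server = createServer((req, res) => {
  if (req.url === "/healthz") {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ status: "ok" }));
    return;
  }
  res.writeHead(404).end();
});

// The port comes from the environment (12-factor port binding),
// never from a hardcoded constant:
// server.listen(Number(requireEnv("PORT")));
```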
When a score comes back low, the report tells you exactly why: which factor dragged it down, which specific file or line caused the deduction, and what change would fix it. A score of 74 isn't frustrating — it's a roadmap.
Library Readiness
Promoting code to a shared library is a permanent decision with organization-wide consequences. Library code must work for teams that didn't write it, can't easily ask questions about it, and need it to stay stable as their own applications evolve.
Library Readiness applies a higher bar than Cloud Readiness. It measures whether your code is actually fit for this responsibility.
Score factors and weights:
| Factor | Weight | What's Checked |
|--------|--------|---------------|
| API surface clarity | 25% | Clear public API, minimal surface area, consistent naming |
| Test coverage | 20% | Unit test coverage on public API, edge cases covered |
| Documentation | 20% | JSDoc/TSDoc on all public functions, README with usage examples |
| Type safety | 15% | TypeScript types exported, no `any`, strict mode enabled |
| Dependency minimalism | 10% | Few dependencies, no heavy transitive deps |
| Semantic versioning | 10% | CHANGELOG, version bumps on breaking changes |
Common score killers:
- Missing TypeScript definitions
- No README with installation and usage instructions
- Public functions without JSDoc comments
- Missing unit tests on public API
- Peer dependencies listed as regular dependencies
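To make the bar concrete, here is a sketch of what a library-ready public function might look like: documented, fully typed, no `any`, with a predictable failure mode. The function itself is a hypothetical example, not a MeshOS API.

```typescript
/**
 * Parses a semantic version string into its numeric components.
 *
 * @param version - A version string such as "1.4.2".
 * @returns The major, minor, and patch numbers.
 * @throws {Error} If the string is not a valid "x.y.z" version.
 */
export function parseSemver(version: string): { major: number; minor: number; patch: number } {
  const match = /^(\d+)\.(\d+)\.(\d+)$/.exec(version);
  if (match === null) {
    throw new Error(`Invalid semantic version: "${version}"`);
  }
  return { major: Number(match[1]), minor: Number(match[2]), patch: Number(match[3]) };
}
```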
Service Mesh Readiness
Service mesh deployments make specific demands on applications. Sidecar proxies intercept traffic, health probes control availability, and distributed tracing requires instrumentation. An application that works perfectly outside a mesh can behave unexpectedly inside one.
Service Mesh Readiness evaluates compatibility with modern mesh environments (Istio, Linkerd, Consul Connect) before deployment, not after.
Score factors and weights:
| Factor | Weight | What's Checked |
|--------|--------|----------------|
| Health probes | 20% | Liveness and readiness endpoints implemented |
| Graceful shutdown | 20% | SIGTERM handler, in-flight request completion |
| Circuit breakers | 15% | Fallback patterns for downstream failures |
| Distributed tracing | 15% | OpenTelemetry or Jaeger instrumentation |
| mTLS compatibility | 10% | No hardcoded TLS config that conflicts with mesh |
| Retry idempotency | 10% | Requests safe to retry (idempotency keys) |
| Header propagation | 10% | Trace headers forwarded across service calls |
Common score killers:
- No graceful shutdown handler (process exits mid-request)
- Downstream calls without timeout or circuit breaker
- Missing trace context propagation
- Hardcoded TLS certificates that conflict with mesh-injected certs
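The graceful shutdown fix can be sketched as follows, assuming a Node.js HTTP service behind a mesh sidecar. The deadline value and the `shutdownGracefully` helper name are illustrative choices, not mesh requirements.

```typescript
// Sketch of graceful shutdown: on SIGTERM, stop accepting new connections,
// let in-flight requests finish, then exit. A timer force-exits after a
// deadline so the process never hangs past the mesh's grace period.
import { createServer } from "node:http";

const server = createServer((req, res) => {
  res.writeHead(200).end("ok");
});

function shutdownGracefully(deadlineMs: number): void {
  // close() stops new connections; the callback fires once existing
  // requests have completed.
  server.close(() => process.exit(0));
  // Safety net: force exit if draining takes longer than the deadline.
  setTimeout(() => process.exit(1), deadlineMs).unref();
}

process.on("SIGTERM", () => shutdownGracefully(10_000));
// server.listen(Number(process.env.PORT ?? 3000));
```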
Score Interpretation
| Range | Grade | Meaning |
|-------|-------|---------|
| 90–100 | A | Production-ready, exemplary |
| 80–89 | B | Ready with minor improvements recommended |
| 70–79 | C | Deployable but requires targeted fixes |
| 60–69 | D | Significant gaps, needs work before promotion |
| 0–59 | F | Not ready, substantial remediation required |
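The grade bands reduce to a simple lookup, sketched here for clarity (the function name is hypothetical):

```typescript
// Maps a 0-100 readiness score to its letter grade per the bands above.
function gradeFor(score: number): "A" | "B" | "C" | "D" | "F" {
  if (score >= 90) return "A";
  if (score >= 80) return "B";
  if (score >= 70) return "C";
  if (score >= 60) return "D";
  return "F";
}
```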
Scores are recalculated on every new version submission. Track improvement over time with full score history per version.
Score History
Scores are stored per version, so you can chart an application's progress over time. The moment you fix the hardcoded database URL and resubmit, the score updates and the history records the improvement. Teams use this to set measurable improvement targets and hold each other accountable.
Approval Workflow
Scores feed directly into the approval workflow. A reviewer sees all three readiness scores alongside the findings that drove them, then approves or rejects each dimension independently.
An application can be approved for cloud deployment but not yet approved for library promotion. Approvals are recorded with the reviewer's name, timestamp, and any notes — giving you a complete, auditable trail of what was approved by whom.
Policy Thresholds
Set minimum readiness scores required before approval is possible. When thresholds are active, the system blocks approval outright if scores don't meet the minimum — not a warning, a hard block.
This is how engineering standards actually get enforced. Not through guidelines that teams can choose to ignore, but through a system that makes non-compliant approvals impossible.
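A hard gate of this kind might look like the following sketch. The dimension names, `PolicyThresholds` shape, and return structure are assumptions for illustration, not the MeshOS API.

```typescript
// Sketch of a hard policy gate: approval is rejected outright when any
// dimension's score falls below its configured minimum.
type Dimension = "cloud" | "library" | "serviceMesh";

interface PolicyThresholds {
  minimums: Partial<Record<Dimension, number>>;
}

function checkApproval(
  scores: Record<Dimension, number>,
  policy: PolicyThresholds,
): { allowed: boolean; violations: string[] } {
  const violations: string[] = [];
  for (const [dimension, minimum] of Object.entries(policy.minimums)) {
    const score = scores[dimension as Dimension];
    if (minimum !== undefined && score < minimum) {
      violations.push(`${dimension}: score ${score} is below required minimum ${minimum}`);
    }
  }
  // allowed is false whenever any threshold is violated — a hard block,
  // not a warning the reviewer can dismiss.
  return { allowed: violations.length === 0, violations };
}
```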