Should we open-source our internal deployment tool or keep it proprietary? 2 years of development, 40K lines of Go, used by 3 teams internally. Two competitors have similar open-source tools with growing communities.
Decision
Open-source under Apache 2.0, but ONLY after a 90-day extraction phase that separates the core orchestration engine (~25K lines) from internal-specific integrations (~15K lines of auth, service discovery, and secrets hooks) behind a gRPC-based plugin interface modeled on HashiCorp's go-plugin library.
Hard go/no-go gate: map the import graph with `go list -deps -json ./...` and loov/goda (`go list -m all` only lists module versions and cannot show package-level coupling). If more than 30% of packages depend on internal-only modules, STOP: extraction cost will exceed 6 engineer-months and the codebase isn't architecturally ready. Do NOT publish the raw internal codebase under any circumstances.
Post-release viability threshold: 100 GitHub stars and 10 external contributors within 6 months; miss both and drop to passive maintenance. Allocate 0.5 FTE for community management (issue triage, contributor onboarding, security disclosure process).
Key failure mode: extraction balloons past 180 days due to deep coupling, burning engineering capacity with nothing shipped. Secondary failure: fewer than 50 external contributors at 12 months signals net-negative ROI.
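The go/no-go gate above can be prototyped as a small Go program. This is a sketch under assumptions: the package paths and the internal-only set are illustrative, and in practice the import map would be built from `go list -deps -json ./...` output rather than hard-coded:

```go
package main

import "fmt"

// internalCoupling returns the percentage of packages that depend,
// directly or transitively, on an internal-only module. This is the
// quantity compared against the 30% go/no-go threshold.
func internalCoupling(imports map[string][]string, internal map[string]bool) float64 {
	if len(imports) == 0 {
		return 0
	}
	memo := map[string]bool{}
	var touches func(pkg string, seen map[string]bool) bool
	touches = func(pkg string, seen map[string]bool) bool {
		if v, ok := memo[pkg]; ok {
			return v
		}
		if internal[pkg] {
			memo[pkg] = true
			return true
		}
		if seen[pkg] { // guard against revisiting a package on this path
			return false
		}
		seen[pkg] = true
		for _, dep := range imports[pkg] {
			if touches(dep, seen) {
				memo[pkg] = true
				return true
			}
		}
		memo[pkg] = false
		return false
	}
	count := 0
	for pkg := range imports {
		if touches(pkg, map[string]bool{}) {
			count++
		}
	}
	return 100 * float64(count) / float64(len(imports))
}

func main() {
	// Hypothetical slice of the import graph; real data comes from `go list`.
	imports := map[string][]string{
		"core/scheduler": {"core/state"},
		"core/state":     {},
		"core/deploy":    {"internal/auth"},
		"internal/auth":  {},
	}
	internal := map[string]bool{"internal/auth": true}
	fmt.Printf("%.0f%% of packages depend on internal-only modules\n",
		internalCoupling(imports, internal))
	// prints: 50% of packages depend on internal-only modules
}
```

The memoized walk keeps the check linear in the size of the import graph, so it is cheap to rerun on every refactoring iteration during the 90-day extraction.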
Evidence boundary
Observed from your filing
- Should we open-source our internal deployment tool or keep it proprietary? 2 years of development, 40K lines of Go, used by 3 teams internally.
- Two competitors have similar open-source tools with growing communities.
Assumptions used for analysis
- The 40K-line Go codebase can be meaningfully separated into a ~25K core engine and ~15K internal integration layer without requiring a ground-up rewrite
- Apache 2.0 licensing is appropriate given two competitors already using permissive open-source licenses with growing communities
- 0.5 FTE is available and sufficient for first-year community management without starving the 3 internal teams of engineering support
- The deployment tool solves a sufficiently general problem that external contributors will emerge — it is not so niche that community never materializes
- The 3 internal teams can tolerate a 90-day period where the tool undergoes architectural refactoring while remaining functional
- Existing stack: not addressed in the filing; greenfield assumed by default
Inferred candidate specifics
- Candidate b003, adopted verbatim as the Decision above: Apache 2.0 release gated on a 90-day extraction phase, a 30% internal-coupling go/no-go gate, 0.5 FTE community management, and the 6- and 12-month viability thresholds.
- Run `go list -deps -json ./...` and loov/goda import-graph analysis on the 40K-line codebase to produce a dependency map showing which packages depend on internal-only modules, and calculate the percentage against the 30% go/no-go threshold.
Council notes
- b003 had the highest confidence (0.88), survived three rounds of adversarial review with strengthening actions from multiple models in Rounds 1-3, and provided the most specific actionable framework: named tools, quantified thresholds, and explicit failure modes. No surviving branch had higher confidence.
Next actions
- Run dependency analysis with `go list -deps -json ./...` and loov/goda to produce the import graph and calculate the percentage of packages coupled to internal-only modules against the 30% threshold
- Design the gRPC-based plugin interface boundary between the core engine (~25K lines) and the internal integrations (~15K lines), using HashiCorp go-plugin as the reference architecture
- Based on the dependency analysis results, make the go/no-go decision on the 90-day extraction phase: abort if more than 30% of packages depend on internal-only modules
- Set up community infrastructure: GitHub repository, CONTRIBUTING.md, security disclosure process, issue templates, and contributor onboarding documentation
- Track GitHub stars, external contributors, and community PRs against the 100-star/10-contributor 6-month threshold; trigger the passive-maintenance fallback if both are missed
Unknowns blocking a firmer verdict
- Apache 2.0 vs. BSL licensing tradeoff: b004 and b006 raised valid concerns about hyperscaler commoditization risk that b003 does not address. If the tool gains significant traction, the Apache 2.0 choice is effectively irreversible; walking it back would mean a HashiCorp-style re-licensing controversy.
- The 30% internal-coupling threshold and the 6 engineer-month cost ceiling are synthetic estimates: no named benchmark or study supports these specific numbers. Actual extraction complexity could differ significantly.
- Whether 0.5 FTE is sufficient for community management is untested — comparable projects (e.g., early-stage CNCF tools) often require more investment to reach critical mass.
- Competitor response is unmodeled: two competitors with growing communities may accelerate feature development once they see a new entrant, potentially negating the community catch-up strategy.
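The 6- and 12-month viability thresholds in the decision can at least be made mechanical, so the passive-maintenance fallback is triggered by rule rather than by debate. A sketch, assuming the star and contributor counts are gathered elsewhere (e.g. from the GitHub REST API's repository and contributors endpoints); the `verdict` function name is illustrative:

```go
package main

import "fmt"

// verdict applies the memo's thresholds: at the 6-month review the
// project stays active if it hit either 100 stars or 10 external
// contributors (missing both triggers passive maintenance); at 12
// months, fewer than 50 external contributors signals net-negative ROI.
func verdict(month, stars, contributors int) string {
	switch {
	case month >= 12 && contributors < 50:
		return "net-negative ROI: wind down"
	case month >= 6 && stars < 100 && contributors < 10:
		return "passive maintenance"
	default:
		return "continue active investment"
	}
}

func main() {
	fmt.Println(verdict(6, 150, 3))   // continue active investment
	fmt.Println(verdict(6, 40, 4))    // passive maintenance
	fmt.Println(verdict(12, 500, 20)) // net-negative ROI: wind down
}
```

Note the asymmetry encoded here: strong star counts cannot rescue the project at 12 months if contributors stay below 50, matching the memo's treatment of contributor count as the ROI signal.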