Micro-Frontend Migration Blueprint: Module Federation, Single-SPA, and When Neither Fits (2026)

Execution-focused blueprint for migrating a monolithic frontend to a micro-frontend architecture, covering Module Federation, Single-SPA, and route-based split patterns, with 30/60/90 delivery phases, governance controls, and rollback criteria.


TL;DR for Engineering Leaders

Micro-frontend architectures solve a team-coordination problem, not a performance problem. The pattern is the right answer when four or more product teams need to deploy independently to a shared user-facing surface with divergent release cadences. It is the wrong answer when the goal is replacing a monolith with a single modern framework, when team count is below that threshold, or when the surface's performance profile will not tolerate an additional runtime layer. This blueprint covers the four architecture patterns visible in production (Module Federation, Single-SPA, route-based split, iframe isolation), a structured pattern-selection decision tree, a reference architecture for Module Federation as the most common choice, a 30-60-90 day delivery plan with explicit deliverables per phase, the five production failure modes with detection and rollback paths, and the acceptance criteria that distinguish a successful migration from one that should be reverted.

  • Decide on a pattern only after the capability and team-structure audit confirms micro-frontends are the right response.
  • Start with route-based split before adopting runtime composition; it solves 60 percent of the coordination problem at 10 percent of the operational cost.
  • Treat shared dependencies (React, design system, state library) as a contract with version pinning, not a convention.
  • Include rollback criteria and an exit ramp in the migration charter; do not treat the pattern as one-way.
  • Observability and runtime health checks are part of phase one, not phase three.

Key Takeaways

  1. Micro-frontend architecture is a response to organizational scaling, not a performance pattern. Teams with fewer than four product squads sharing a surface will pay operational cost for no observable benefit.
  2. Module Federation is the strongest fit for organizations with a shared framework and a need for runtime composition, while Single-SPA is the stronger fit for multi-framework composition during a staged migration away from a legacy stack.
  3. Route-based split at an edge router is the first-best choice in the majority of cases because it solves deployment independence without introducing runtime coupling between applications.
  4. Version-skew incidents, where a host and remote load mismatched React or Vue versions, are the single most common production failure mode in Module Federation; prevention requires a shared dependencies contract enforced at build time and at runtime.
  5. A migration charter should include rollback criteria and an exit ramp from the start. Treating the migration as one-way leads to operational investment that the benefit does not justify when the pattern does not fit.
  6. Deploy independence is the primary benefit to measure; runtime performance and bundle size are secondary. Migrations evaluated on the wrong metric produce the wrong conclusion about whether to continue or reverse.

Problem Definition

The problem this blueprint addresses is organizational coordination cost on a shared user-facing surface, not technical architecture preference. A large web application owned by a single team rarely needs micro-frontend patterns. A large web application with four or more product teams (for example, a suite of connected tools where billing, analytics, administration, and core product each have distinct owners) experiences coordination cost that scales with team count and release cadence variance. Release trains slow to the pace of the slowest team, cross-team regressions consume incident capacity, and the monorepo governance layer grows heavier than the product work it supports. This is a real cost, and micro-frontend patterns exist to address it.

Where the problem is not present, the pattern introduces cost without benefit. Organizations with a single product team, or with multiple teams that already ship at a similar cadence through a well-governed monorepo, should treat micro-frontends as the wrong problem. The operational cost of runtime composition (shared dependencies contract, version pinning, cross-origin observability, runtime fallback) is a persistent tax that only pays back above a specific team-count threshold. The failure mode of misapplied migrations is a team that spends 9 to 12 months introducing runtime composition and ends up with worse performance, slower deploys, and higher incident rates than the monorepo it replaced. This blueprint is written to prevent that outcome as much as to support the successful migration.

Methodology Snapshot

Blueprint guidance is grounded in observable delivery patterns from published engineering-blog disclosures, open-source reference implementations, and production case histories from ThoughtWorks Technology Radar assessments. Architecture recommendations are designed for adaptation, not one-size-fits-all execution. Every phase includes rollback criteria, and the blueprint is refreshed on a 90-day cycle. For full methodology, see the evaluation methodology.

When Micro-Frontends Are the Right Answer

A micro-frontend architecture is the right answer when specific conditions are jointly present. It is an organizational pattern before it is a technical one, and the conditions are about team structure and release cadence rather than about framework capability. Unlike a monorepo with package boundaries (which shares build-time coordination), a micro-frontend architecture decouples at deployment boundaries and sometimes at runtime. Compared with a fully separate per-product application (which decouples at the URL boundary), a micro-frontend architecture preserves a unified user experience across team-owned surfaces. The cautionary read is that the conditions below must all hold; selecting on a subset produces outcomes that do not repay the operational cost.

  • Four or more product teams deploy to a shared user-facing surface and have distinct owners and roadmaps.
  • Release cadence variance across teams exceeds a factor of three (for example, one team ships daily while another ships every two weeks).
  • Coordination cost is visible as a throughput constraint, typically measured as release-train delay or blocked pull request age.
  • A shared design system and shared primary framework exist or can be established (reduces cross-framework composition cost).
  • The platform team has the operational maturity to run a runtime composition layer (observability, runtime health checks, shared dependencies governance).

When Micro-Frontends Are the Wrong Answer

The pattern is the wrong answer when any of several disqualifying conditions are present. These are not preferences to trade off; they are reasons the operational cost will exceed the coordination benefit within a 12 to 18 month horizon.

  • Team count below four on the shared surface. The coordination cost at smaller scale is lower than the operational cost of composition.
  • The primary motivation is performance improvement. Runtime composition adds overhead; it does not reduce it.
  • The motivation is replacing a monolith with a modern framework. Do the framework migration as a staged in-place migration, not as a split.
  • The platform team lacks server-side or edge-routing operational experience. The pattern will surface this gap during the first production incident.
  • The product organization has a single shared design system owner and tight coordination on UI patterns. The benefit shrinks; the cost remains.

Architecture Pattern Selection

Four architecture patterns are visible in production at enterprise scale. They are not equally appropriate; the selection follows from the specific coordination problem being solved.

Module Federation (Webpack 5 and Rspack)

Module Federation is a runtime composition pattern introduced in Webpack 5 and now also supported by Rspack and several other bundlers. It allows a host application to load modules from a remote application at runtime, with a shared dependencies contract that lets multiple remotes use a single copy of React, the design system, or other shared libraries. The mechanics are: each remote exposes a manifest of entry modules, the host declares which remotes it consumes, the bundler writes a runtime that resolves remote modules on demand, and the shared dependencies layer coordinates versions using semver rules or strict equality.
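The mechanics above can be sketched as the two sides of a Module Federation setup. This is a minimal illustration assuming Webpack 5; the remote name (`billing`), the exposed module, and the CDN URL are hypothetical, and each object would be passed to `new ModuleFederationPlugin(...)` in the respective application's webpack config.

```javascript
// Remote side: expose entry modules and declare the shared-dependency contract.
const remoteFederationConfig = {
  name: 'billing',
  filename: 'remoteEntry.js',
  exposes: {
    './InvoiceList': './src/components/InvoiceList',
  },
  shared: {
    react: { singleton: true, strictVersion: true, requiredVersion: '18.2.0' },
    'react-dom': { singleton: true, strictVersion: true, requiredVersion: '18.2.0' },
  },
};

// Host side: declare which remotes the shell consumes, with the same contract.
const hostFederationConfig = {
  name: 'shell',
  remotes: {
    billing: 'billing@https://cdn.example.com/billing/remoteEntry.js',
  },
  shared: {
    react: { singleton: true, strictVersion: true, requiredVersion: '18.2.0' },
    'react-dom': { singleton: true, strictVersion: true, requiredVersion: '18.2.0' },
  },
};
```

Note that `singleton: true` with `strictVersion: true` is the strictest policy: the runtime refuses to load a remote whose shared-library version does not satisfy the requirement, which trades availability for version-skew safety.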

The strengths of Module Federation are true runtime composition (a remote can ship without the host rebuilding), a well-developed shared dependencies model, strong alignment with React and Vue ecosystems, and a large community with production case histories. The operational cost is the shared dependencies contract (which is the surface where most production incidents occur), the observability requirement for cross-origin debugging (source maps must work across remotes), and the failure-handling requirement for remote load failures at runtime.

Single-SPA

Single-SPA is a top-level router that loads micro-frontends as applications, each of which can be built in a different framework. The mechanics are: a root-config application registers each micro-frontend and declares when it should be active, each micro-frontend exports lifecycle functions (bootstrap, mount, unmount), and the root-config coordinates mounting and unmounting as the user navigates.
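The lifecycle contract each micro-frontend exports can be sketched as plain functions. In a real implementation, `mount` would render a framework tree into the DOM node Single-SPA provides; the `mounted` flag here is only for illustration.

```javascript
// Minimal sketch of Single-SPA lifecycle exports; a real mount() would render
// a framework into props.domElement instead of flipping a flag.
let mounted = false;

function bootstrap(props) {
  // One-time setup; runs once per page load.
  return Promise.resolve();
}

function mount(props) {
  mounted = true; // render the application here
  return Promise.resolve();
}

function unmount(props) {
  mounted = false; // tear down the application here
  return Promise.resolve();
}
```

The root-config registers this application with `registerApplication` and an `activeWhen` route predicate; Single-SPA then calls these functions as the user navigates in and out of the application's routes.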

The strength of Single-SPA is multi-framework composition. Organizations migrating from AngularJS to React, or running a React and Vue combined surface during a transition, can use Single-SPA to compose both in a single user-facing surface. The operational cost is that shared dependencies are harder (each framework brings its runtime), cross-framework state coordination is non-trivial, and the pattern is slower than Module Federation for single-framework work. Single-SPA is the stronger choice specifically when multi-framework composition is the binding requirement.

Route-Based Split (Edge Router or Reverse Proxy)

The simplest composition pattern is not runtime composition at all. It is route-based split at an edge router (a CDN worker, a reverse proxy, or a path-based URL split) where distinct applications own distinct URL paths. The mechanics are: a single hostname with an edge router that dispatches requests by path to distinct application origins, a shared authentication layer (cookie or token) that works across origins, and a shared navigation component (either a thin loader or a server-included header) that provides visual continuity.
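The path-to-origin dispatch an edge router performs can be sketched as a small resolver. The prefixes and internal origins below are illustrative, not from any specific deployment.

```javascript
// Path-prefix dispatch table, as a CDN worker or reverse proxy would apply it.
const routes = [
  { prefix: '/billing', origin: 'https://billing-app.internal.example.com' },
  { prefix: '/analytics', origin: 'https://analytics-app.internal.example.com' },
];
const defaultOrigin = 'https://core-app.internal.example.com';

function resolveOrigin(pathname) {
  // Match an exact prefix or a path under it; fall through to the core app.
  const match = routes.find(
    (r) => pathname === r.prefix || pathname.startsWith(r.prefix + '/')
  );
  return match ? match.origin : defaultOrigin;
}
```

The `prefix + '/'` check matters: without it, `/billingx` would incorrectly route to the billing origin.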

The strength of route-based split is simplicity. There is no runtime coupling, no shared dependencies contract, no cross-origin observability complication, and deploy independence is true independence. The cost is that the user experience is composed at page-transition boundaries, not within a single page. If the user-experience requirement tolerates page transitions between sections, this is the first-best choice. StackAuthority's analysis of published micro-frontend case studies suggests that a material share of teams that adopted Module Federation in 2023 and 2024 would have been better served by route-based split; the symptom is teams that paid the Module Federation operational cost and did not use the within-page composition capability it provides.

iframe Isolation

iframe isolation loads each micro-frontend as an iframe on the parent page. The mechanics are: the parent page provides a layout, each region is an iframe pointed at a distinct application origin, and cross-frame communication uses postMessage. The strengths are strong security and runtime isolation (a buggy iframe cannot crash the parent). The costs are substantial: no shared dependencies (each iframe brings its full framework runtime), poor performance, hard-to-coordinate navigation and focus, and accessibility complications. iframe isolation is appropriate as a fallback when a specific subsection cannot be trusted (third-party widgets, legacy sandboxed components) but is not a primary micro-frontend strategy.
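Because the parent cannot trust a sandboxed frame, every postMessage handler should validate the sender's origin before reading the payload. A minimal sketch, with an illustrative trusted-origin list:

```javascript
// Guard for postMessage traffic from sandboxed iframes.
const TRUSTED_ORIGINS = new Set(['https://widgets.example.com']);

function handleFrameMessage(event) {
  // Always check event.origin before trusting cross-frame data.
  if (!TRUSTED_ORIGINS.has(event.origin)) return null;
  return event.data;
}
```

In a browser this would be wired up as `window.addEventListener('message', handleFrameMessage)`, and outbound messages should likewise pass an explicit target origin rather than `'*'`.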

Four Patterns Side by Side

| Dimension | Module Federation | Single-SPA | Route-Based Split | iframe Isolation |
|---|---|---|---|---|
| Runtime coupling | High (shared runtime) | Medium (shared router, separate runtimes) | None (distinct origins) | None (distinct origins, sandboxed) |
| Framework constraint | Single framework strongly preferred | Multi-framework supported | Any; one per route | Any; one per iframe |
| Deploy independence | True | True | True | True |
| Version-skew risk | High (primary failure mode) | Medium | None | None |
| Cross-origin observability | Needed (source maps across remotes) | Needed | Not needed | Not needed |
| Within-page composition | Yes | Yes | No | Yes (but with cost) |
| Typical onboarding time | 8 to 14 weeks | 6 to 10 weeks | 2 to 4 weeks | 1 to 2 weeks |
| Recommended when | 4+ teams, shared framework, within-page composition required | Multi-framework migration | Page-boundary composition tolerated | Security or sandbox requirement |
| Primary failure mode | Version skew | Framework interaction bugs | None beyond standard web | Performance, a11y coordination |

Pattern Selection Decision Tree

Answer in order. The first answer that matches a binding constraint dictates the pattern.

  1. Does the composition need to happen within a single page, with shared state or tightly-coordinated navigation across regions?
    • No: Route-Based Split is the first-best choice.
    • Yes: continue.
  2. Are there multiple primary frameworks on the surface today (for example, React and Vue, or AngularJS and React) that need to compose during a migration?
    • Yes: Single-SPA.
    • No: continue.
  3. Is there a security or sandbox requirement for a specific subsection that the platform team cannot guarantee otherwise?
    • Yes: use iframe Isolation for that subsection only, not as the primary pattern.
    • No: continue.
  4. Does the platform team have the operational maturity for runtime composition (observability, shared dependencies governance, runtime fallback)?
    • Yes: Module Federation.
    • No: fall back to Route-Based Split and revisit after platform maturity is established.
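The decision tree above can be encoded directly; the sandbox question is per-subsection rather than a primary-pattern choice, so it is handled outside this function. A sketch:

```javascript
// Pattern selection from the four decision-tree questions. The sandbox case
// (iframe isolation) applies only to individual subsections and is decided
// separately from the primary pattern.
function selectPattern({ withinPageComposition, multiFramework, runtimeMaturity }) {
  if (!withinPageComposition) return 'route-based split';
  if (multiFramework) return 'single-spa';
  if (!runtimeMaturity) return 'route-based split'; // revisit after maturity
  return 'module federation';
}
```

Note that route-based split appears twice: it is both the first-best answer when page-boundary composition suffices and the fallback when platform maturity is not yet in place.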

Reference Architecture for Module Federation Migration

The reference architecture described below is for Module Federation as the most common composition pattern at enterprise scale. The architecture has five layers: host shell, remote applications, shared dependencies contract, deploy pipeline, and runtime operations layer.

Host shell. The host shell is a small application that owns the shared top-level navigation, authentication, and the runtime that mounts remote applications into designated regions. It is deliberately thin; product logic lives in remotes. The shell owns the manifest of which remotes are active for a given user segment (typically via a configuration service such as LaunchDarkly or a built-in flag system), the fallback behavior when a remote fails to load, and the shared observability instrumentation that attributes errors to the correct remote.

Remote applications. Each remote is an independently-deployed application that exposes a manifest of entry modules through the bundler's Module Federation configuration. Remotes should not call each other directly; cross-remote communication goes through URL state, a shared event bus published by the shell, or explicitly-shared modules declared in the contract. Each remote ships its own observability identifier, version manifest, and health check endpoint.

Shared dependencies contract. The contract declares which libraries are shared across host and remotes (React, React DOM, the design system, the state library, the authentication client, the analytics client), which version policy applies to each (singleton with strict equality, singleton with semver matching, or independent-per-remote), and what the resolution order is when mismatched versions are detected. This contract is the single most important operational artifact; the majority of production incidents in Module Federation deployments originate here.
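One hedged sketch of what the contract artifact and its build-time enforcement might look like; the library names, policy labels, and shape are illustrative, not a standard format.

```javascript
// Hypothetical shared-dependencies contract: each entry names a policy and,
// for strict singletons, the one version every host and remote must use.
const sharedContract = {
  react: { policy: 'singleton-strict', version: '18.2.0' },
  '@acme/design-system': { policy: 'singleton-strict', version: '4.1.0' },
  lodash: { policy: 'independent' }, // per-remote versions acceptable
};

// Returns the names of strict-singleton libraries a remote pins to the wrong
// version; a CI step would fail the build when this list is non-empty.
function contractViolations(remoteDeps) {
  return Object.entries(sharedContract)
    .filter(
      ([name, rule]) =>
        rule.policy === 'singleton-strict' &&
        name in remoteDeps &&
        remoteDeps[name] !== rule.version
    )
    .map(([name]) => name);
}
```

The same contract data would also feed each application's bundler `shared` configuration, so the build-time check and the runtime resolution rules come from one source of truth.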

Deploy pipeline. Each remote has an independent deploy pipeline that publishes the bundled manifest and asset files to a CDN origin. The host shell's runtime manifest is updated separately (typically through a configuration service or a small deploy of the shell itself) to point at the new remote version. This structure is what provides deploy independence; remotes can ship without the shell rebuilding.
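The runtime manifest the shell reads from the configuration service can be sketched as follows; the version scheme and URLs are illustrative. Pointing a remote at a prior version is a rollback with no shell redeploy.

```javascript
// Illustrative runtime manifest: one entry URL per remote, versioned paths.
const runtimeManifest = {
  billing: {
    version: '2026.02.1',
    entry: 'https://cdn.example.com/billing/2026.02.1/remoteEntry.js',
  },
};

// Roll one remote back to a prior version; returns a new manifest so the
// change can be published atomically through the configuration service.
function rollbackRemote(manifest, remoteName, priorVersion) {
  const current = manifest[remoteName];
  return {
    ...manifest,
    [remoteName]: {
      version: priorVersion,
      entry: current.entry.replace(current.version, priorVersion),
    },
  };
}
```

Because the prior version's assets are still on the CDN, this rollback is a pointer swap rather than a redeploy, which is what makes the sub-10-minute rollback target realistic.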

Runtime operations layer. The operations layer includes cross-origin observability (error aggregation, attribution to the correct remote version), runtime health checks (the shell tests remote availability before mounting), fallback behavior when a remote fails to load (skeleton UI, cached version, or full fallback page), and rollback mechanics (the configuration service can roll a remote back to a prior version without redeploying the shell).
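The fallback behavior can be sketched as a load wrapper the shell uses when mounting any remote. The `loadRemote`, `renderFallback`, and `report` callbacks are caller-supplied placeholders, not a specific library's API.

```javascript
// Shell-side wrapper: attempt the remote load, and on failure report the
// error for attribution and render the configured fallback instead.
async function loadRemoteWithFallback(loadRemote, renderFallback, report) {
  try {
    return await loadRemote();
  } catch (err) {
    if (report) report(err); // attribute the failure to this remote in observability
    return renderFallback(err);
  }
}
```

The key property is that a remote load failure degrades one region of the page rather than crashing the shell; without a wrapper like this, Failure Mode 2 below becomes a full-page outage.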

30-60-90 Day Delivery Plan

Days 1 to 30: Foundation

Deliverables in the first 30 days are prerequisites for runtime composition; the migration itself does not begin here. The goals are to establish the platform foundation and prove deploy independence on a small surface before committing to the full migration.

  • Shared dependencies contract defined and reviewed by each team. Owner: platform engineering.
  • Host shell application scaffolded with authentication, navigation, and a configuration service integration. Owner: platform engineering.
  • One pilot remote carved out from the monolith. This is the smallest meaningful surface that can be deployed as a remote (a settings page, a non-critical feature). Owner: the team that owns that surface.
  • Deploy pipelines for host and pilot remote established with independent release paths. Owner: platform engineering with the pilot team.
  • Runtime observability configured: error aggregation across origins, source map correlation, release-version attribution. Owner: platform engineering.
  • Runtime fallback behavior defined for the pilot remote: what does the user see if the remote fails to load? Owner: product owner for that surface.
  • Go-live criteria for the pilot remote: independent deploy demonstrated, observability captures a release event end-to-end, runtime fallback tested, rollback tested.

Days 31 to 60: First Remote in Production

The second phase takes the pilot remote to production and validates the operational model at real traffic before committing additional teams to the pattern.

  • Pilot remote launched to a small traffic segment (5 to 10 percent) behind a feature flag. Owner: platform and product team.
  • Cross-origin observability produces clean data for the pilot segment (no lost events, version attribution works, errors are actionable). Owner: platform.
  • Release cadence test: the pilot team performs three independent deploys over two weeks without the host shell rebuilding. Owner: platform.
  • Version-skew drill: introduce an intentionally mismatched React version in the pilot remote in a staging environment and verify the shared dependencies contract resolves or alerts correctly. Owner: platform.
  • Documentation complete: shell-remote contract, deploy pipeline setup, incident response playbook for remote load failures. Owner: platform.
  • Go-live criteria for phase three: pilot remote in production at 100 percent traffic for two consecutive weeks with no remote-composition incidents that exceed the production SLO for the surface.

Days 61 to 90: Second and Third Remotes, Ownership Transfer

The third phase adds additional remotes and transitions operational ownership patterns from platform-led to federated with platform guardrails.

  • Two additional remotes carved out, following the same pilot pattern. Each new remote passes the same go-live criteria as the pilot. Owner: platform and each product team.
  • Shared-dependencies version governance operating as a standing process: the platform team reviews dependency upgrade proposals weekly, coordinated upgrades executed monthly. Owner: platform.
  • Product teams trained on the runtime composition model: what they can change independently, what requires shell coordination, how to handle cross-remote dependencies. Owner: platform.
  • Runbook and on-call model in place: which errors the platform team owns, which remote teams own, escalation path for cross-remote incidents. Owner: platform with each team.
  • Exit-ramp review: has the migration delivered the deploy-independence benefit the charter predicted, or should phase four be paused and reconsidered?

Operational Controls and Governance

Ownership of the operational layer is split between the platform team (which owns the composition infrastructure) and the product teams (which own the application logic inside each remote). The contract between them is the shared dependencies agreement and the release governance process. Unlike a conventional single-application governance model (where deploy governance is centralized), micro-frontend governance distributes release decisions while centralizing infrastructure decisions.

The shared dependencies contract should specify: libraries shared across host and remotes (with minimum version, maximum tolerated drift, and singleton policy), libraries shared across remotes but not the host (design system, utility libraries), and libraries explicitly not shared (where per-remote versions are acceptable because the libraries have no shared state). The contract should be enforced at two points: at build time (the bundler's shared dependencies configuration fails the build if the contract is violated) and at runtime (the shell logs a warning or prevents mount when a version mismatch is detected).
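The runtime half of that enforcement can be sketched as a pre-mount check in the shell; the assumption (not from the source) is that each remote publishes a small version manifest the shell can read before mounting.

```javascript
// Runtime-side enforcement: compare the remote's reported singleton versions
// against the host's before mounting. Manifest shape is an assumption.
function checkSingletons(hostVersions, remoteVersions, singletonNames) {
  const mismatches = singletonNames.filter(
    (name) => name in remoteVersions && remoteVersions[name] !== hostVersions[name]
  );
  return { ok: mismatches.length === 0, mismatches };
}
```

On a mismatch, the shell can log a warning and mount anyway (semver-tolerant policies) or refuse the mount and serve the fallback UI (strict policies); either way the mismatch is detected before it surfaces as a React-internals error in production.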

Release governance should require: a remote cannot ship a major version of a shared dependency without platform team review, coordinated shared-dependency upgrades happen on a scheduled cadence (monthly is typical) with each remote tested against the new version before the upgrade ships, and the platform team owns the shell's release schedule and coordinates shell updates with any breaking-interface changes.

Runbook and Ownership Checklist

| Control Area | Primary Owner | Secondary / Escalation |
|---|---|---|
| Host shell codebase | Platform engineering | SRE |
| Shared dependencies contract | Platform engineering | All product teams |
| Runtime observability | Platform engineering | SRE |
| Remote code and deploys | Each product team | Platform engineering |
| Runtime fallback UI | Product owner for affected remote | Platform engineering |
| Cross-origin incident triage | Platform on-call | Product team on-call |
| Rollback execution | Platform on-call | Team that owns the failing remote |
| Release cadence governance | Platform engineering (shared) | Each team (local) |
| Accessibility compliance | Each product team | Design system team |
| Performance budget enforcement | Platform engineering (CI) | Each product team |

Two standing gates apply in addition to this ownership split: an explicit change gate for any shared-dependency upgrade, and explicit rollback criteria for any new remote's go-live.

Common Failure Modes

Failure Mode 1: Version Skew Between Host and Remote

What it looks like. A remote loads in production and produces a runtime error that does not occur in the remote's standalone environment. Typical signature: React hook rule violations, duplicate React instances, or context provider mismatches.

Why it happens. The host's shared dependency policy was not enforced at the remote's build time, or a semver policy allowed drift that was not compatible at runtime (for example, React 18 and React 19 do not compose as singletons).

Detection. Production error rate spike attributable to a specific remote version. Error messages referencing React internals are a strong signal.

Rollback. The configuration service rolls the affected remote back to the prior version. The incident is not fixable forward without first resolving the version mismatch; do not attempt to ship a fix in the failing remote.

Failure Mode 2: Remote Load Failure at Runtime

What it looks like. The host shell attempts to load a remote and fails (network error, CDN outage, asset mismatch). The user sees either a broken region or a fallback UI depending on the shell's configuration.

Why it happens. CDN propagation lag after a deploy, a mismatch between the asset manifest the host cached and the files actually available on the CDN, or network failures from the user's region.

Detection. Shell-reported remote load failure events in observability, increased fallback UI exposure.

Rollback. If the cause is a bad deploy, roll the remote back. If the cause is CDN or network, verify the fallback UI is being served and escalate to infrastructure. The shell should always have a defined fallback; if it does not, this failure mode is worse than it needs to be.

Failure Mode 3: Cross-Remote Observability Gap

What it looks like. A user-reported issue cannot be reconstructed because the error data is split across multiple observability tenants or is missing source maps.

Why it happens. Each remote was configured to ship errors to its own observability project without cross-correlation, or source maps were not uploaded to a common location.

Detection. Incident response takes longer than the surface's SLO because data is scattered.

Recovery. This is a phase-one problem that was deferred. Fix by centralizing error aggregation and ensuring every remote's source maps are available to the same error aggregation service.

Failure Mode 4: Shared Dependencies Drift Between Teams

What it looks like. Three remotes are on React 18.2, one is on React 18.3. A shared context provider stops working for that remote.

Why it happens. No coordinated shared-dependency upgrade process. Each team upgraded React on their own schedule.

Detection. A remote that was working stops working after a shell or other-remote update.

Recovery. Coordinated rollback of the drifted remote's React version, and a standing monthly cadence for shared-dependency upgrades going forward.

Failure Mode 5: Unplanned Coupling Through State

What it looks like. Two remotes have begun depending on a shared global state library instance that was not declared in the shared dependencies contract. A change in one remote breaks the other.

Why it happens. A developer in one remote discovered the other remote's state and reached into it, treating the composition model as a shared memory model.

Detection. Cross-remote breakage after a change that should have been isolated.

Recovery. Add the state library to the shared dependencies contract, formalize the dependency, or (preferably) refactor the dependency out by using URL state or explicit event-bus communication. This failure mode is cultural as much as technical; the platform team should review runtime state sharing during code review.

Metrics and Acceptance Criteria

Define acceptance metrics before migration start and track them through phase three and beyond.

  • Deploy independence: number of remote deploys per week that did not require the shell to redeploy. Target: 95 percent or higher after phase three.
  • Cross-remote incident rate: count of production incidents caused by remote-composition (version skew, load failure, state coupling) per month. Target: under 2 per month steady-state.
  • Initial page load LCP (Largest Contentful Paint): field-measured LCP at the shell's home route. Target: regression of under 10 percent versus pre-migration baseline, return to baseline by phase three.
  • Chunk load failure rate: percentage of remote-load attempts that fail in production. Target: under 0.5 percent.
  • Time to rollback a remote: time from rollback decision to traffic fully served by prior version. Target: under 10 minutes.
  • Shared-dependency upgrade lead time: time from platform team announcement of an upgrade to the last remote adopting. Target: under 4 weeks.

A migration that has held these metrics for two consecutive quarters has validated the pattern. A migration that is missing two or more of these metrics at the 90-day mark should trigger an exit-ramp review.

Scenario: Tessera Cloud Migrates a 4-Year-Old Next.js Monolith

Tessera Cloud, a fictional multi-tenant SaaS platform with roughly 260 engineers, went into 2025 with a four-year-old Next.js application that had grown to 11 product surfaces owned by 6 distinct teams. Release trains had grown to a fortnightly cadence, and two of the six teams were consistently delayed, causing all other teams to hold changes. Coordination cost was visible in pull-request-age metrics, which had grown from a two-day median in 2022 to a nine-day median in 2024.

The platform team's initial instinct was Module Federation as the highest-visibility option. A capability audit surfaced that the platform team did not have meaningful edge-routing or runtime-composition experience, and the shared-dependencies contract would require governance maturity that was not yet in place. The audit recommended a phased approach: route-based split as the first step, with Module Federation as a later option for within-page composition if the route-split did not solve enough of the coordination problem.

Phase one (route-based split) was completed in 11 weeks. The edge router was configured to dispatch three team-owned URL paths to independently-deployed applications, with a shared navigation header served from the shell and authentication via a shared cookie. Deploy independence was achieved immediately; the three teams began shipping at their native cadences within three weeks of cutover. Pull-request-age median dropped from nine days to three days over the following quarter.

Phase two (Module Federation for specific within-page regions) was evaluated at month 6. Two of the three product surfaces had clear within-page coordination needs (a shared navigation with team-owned sub-navigation; a dashboard with team-owned widgets). Module Federation was adopted for those two surfaces; the other nine stayed on route-based split. The full migration was complete by month 14, with four teams on Module Federation and two teams on route-based split. The pattern selection was not uniform because the coordination problem was not uniform; a single-pattern migration would have paid Module Federation's cost on surfaces that did not need it.

The failure mode the platform team avoided was adopting Module Federation uniformly before building the platform maturity to operate it. The second failure mode avoided was treating the migration as one-way; two teams ran for three quarters on route-based split and had no reason to adopt runtime composition, which would have been a cost without benefit for their work.

Common Misconceptions About Micro-Frontends

"Micro-frontends make the application faster." The pattern does not make anything faster. Runtime composition adds overhead. Route-based split is approximately neutral. Performance wins from micro-frontend adoption are almost always attributable to the refactoring work that accompanied the migration, not to the pattern itself. Selecting the pattern for performance reasons will disappoint.

"Module Federation is the default micro-frontend pattern." It is the highest-visibility pattern because Webpack 5 introduced it with strong marketing, but it is not the default. Route-based split is the default when it meets the coordination requirement, and it meets the requirement more often than Module Federation marketing would suggest. Treat Module Federation as a specific-need tool, not a first choice.

"We need micro-frontends because our team is large." Team size alone is not the signal. The signal is team count on a shared surface with divergent release cadence. A 200-engineer frontend organization distributed across 15 products with distinct URL owners does not need micro-frontends. A 40-engineer frontend organization on one shared surface with four product squads might.

"Micro-frontend migrations are one-way." They are not. An exit ramp should be part of the charter. Organizations that treat the pattern as irreversible invest more in composition infrastructure than the benefit justifies when the pattern turns out to be wrong.

"Each remote can use any framework it wants." Technically true with Single-SPA, costly with Module Federation, and almost never correct as an operating policy. Framework diversity multiplies shared-dependency cost and cross-team integration complexity. The pattern is about deploy and team independence, not framework independence.

Rollback Criteria and Exit Ramp

The migration should have explicit rollback criteria that trigger a reconsolidation review rather than more investment. Write these before starting the migration and commit them in the charter.

  • Deploy independence is not materially better at month 6 than at month 1. Evidence: pull-request-age or release-train delay metrics unchanged.
  • Cross-remote incident rate exceeds 4 per month at steady state and is not declining.
  • Initial page load LCP has not returned to baseline by month 6 despite performance engineering investment.
  • A shared dependency upgrade (React major version, design system major version) takes more than 6 weeks to propagate across remotes, twice in a row.
  • Team-level feedback across all participating teams reports that the composition model is creating more coordination cost than it removed.

If two or more criteria are breached, run an exit-ramp review. The review considers three paths: continue with targeted fixes, consolidate back to a single application (the exit ramp), or adopt a different composition pattern (typically route-based split from Module Federation). The exit ramp is a legitimate outcome; it is not a failure. Organizations that reserve the right to exit are the organizations that make the best decisions about when to continue.

Limitations

This blueprint addresses micro-frontend migration patterns at enterprise scale. It does not cover component library migration, headless-CMS integration patterns, or single-framework architecture refactoring. Bundler and framework versions referenced are stable as of early 2026 and will be revised on the 90-day cycle. Reference architecture is adaptable to bundlers other than Webpack 5 (Rspack, Rsbuild, Vite with plugins), and the decision tree should be revisited when a specific stack's Module Federation support is immature.


About the Author

Mira Voss is a Research Analyst at StackAuthority with 11 years of experience in platform architecture strategy and engineering decision support. She earned an MBA from the University of Chicago Booth School of Business and covers category-level tradeoffs across platform investments, operating models, and governance design. Her off-hours are split between urban sketching sessions and weekend sourdough baking.

Reviewed by: StackAuthority Editorial Team
Review cadence: Quarterly (90-day refresh cycle)
