March 28, 2025
Last updated: March 28, 2025

Building a Micro-Frontend Architecture for Enterprise Aviation

How we designed and implemented a micro-frontend architecture for an enterprise aviation platform, enabling independent team deployments and reducing release cycles from weeks to hours.

Tags

Micro-Frontends · Enterprise · Architecture · React
9 min read


TL;DR

Micro-frontends with Module Federation enabled five independent teams to deploy their aviation platform modules on separate release cycles, cutting deployment time from two weeks to under two hours. We built a shell application that dynamically loaded remote modules at runtime, shared a design system and authentication context across all micro-frontends, and established contracts that let those teams ship without stepping on each other.

The Challenge

Universal Weather's aviation platform had grown into a monolithic React application over several years. The frontend served flight planning, crew scheduling, weather briefings, fuel management, and customer portal modules — all bundled into a single deployable artifact. Five teams worked on this codebase, and the pain was real.

A single change in the crew scheduling module meant the entire application went through regression testing. Release trains ran on two-week cycles, and teams spent more time coordinating merges than writing features. A breaking CSS change in the weather module once took down the fuel management dashboard for eight hours before anyone noticed.

The business was clear about what they needed: teams should deploy independently, failures should be isolated, and the end user should still experience a seamless single application. We evaluated several approaches — iframe-based composition, build-time package integration, and runtime module federation — before landing on Webpack 5 Module Federation as the foundation.

The constraints made this more interesting than a greenfield micro-frontend project. We had an existing application with shared state, a custom design system, and authentication flows that touched every module. We couldn't do a big-bang rewrite. We needed an incremental migration path that let teams peel off from the monolith one module at a time.

The Architecture

The Shell Application

The shell (or "host") application owned the layout chrome: the top navigation bar, the sidebar, authentication context, and global error boundaries. It was deliberately thin — around 2,000 lines of code — and its job was to bootstrap the application, load remote modules, and provide shared context.

tsx
// shell/src/App.tsx
import React, { Suspense } from 'react';
import { ErrorBoundary } from './components/ErrorBoundary';
import { AuthProvider } from './contexts/AuthContext';
import { NavigationShell } from './components/NavigationShell';
import { RemoteModuleLoader } from './components/RemoteModuleLoader';
import { ModuleErrorState } from './components/ModuleErrorState';
import { ModuleLoadingSkeleton } from './components/ModuleLoadingSkeleton';
import { reportModuleFailure } from './utils/reportModuleFailure';
 
const moduleRegistry = {
  flightPlanning: {
    remote: 'flight_planning',
    module: './FlightPlanningApp',
    fallback: () => import('./fallbacks/FlightPlanningFallback'),
  },
  crewScheduling: {
    remote: 'crew_scheduling',
    module: './CrewSchedulingApp',
    fallback: () => import('./fallbacks/CrewSchedulingFallback'),
  },
  weatherBriefing: {
    remote: 'weather_briefing',
    module: './WeatherBriefingApp',
    fallback: () => import('./fallbacks/WeatherBriefingFallback'),
  },
  fuelManagement: {
    remote: 'fuel_management',
    module: './FuelManagementApp',
    fallback: () => import('./fallbacks/FuelManagementFallback'),
  },
};
 
export function App() {
  return (
    <AuthProvider>
      <NavigationShell>
        <ErrorBoundary
          fallback={<ModuleErrorState />}
          onError={reportModuleFailure}
        >
          <Suspense fallback={<ModuleLoadingSkeleton />}>
            <RemoteModuleLoader registry={moduleRegistry} />
          </Suspense>
        </ErrorBoundary>
      </NavigationShell>
    </AuthProvider>
  );
}

The RemoteModuleLoader component matched the current route to a registry entry, dynamically imported the remote module, and rendered it within the shell's layout. If a remote module failed to load — maybe that team's CDN had an issue — the error boundary caught the failure and rendered a static fallback without taking down the rest of the application.
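The route-matching half of that loader is easy to sketch. This is a hypothetical reconstruction, not the production component: the kebab-case-path-to-camelCase-registry-key convention and the function names are assumptions, and the dynamic import step is omitted.

```typescript
// Hypothetical sketch of the route-matching half of RemoteModuleLoader.
// Assumed convention: "/flight-planning/…" maps to registry key
// "flightPlanning". The actual remote-loading step is omitted.
interface RegistryEntry {
  remote: string;
  module: string;
}

type Registry = Record<string, RegistryEntry>;

// "flight-planning" -> "flightPlanning"
function segmentToKey(segment: string): string {
  return segment.replace(/-([a-z])/g, (_, c: string) => c.toUpperCase());
}

// Resolve the active registry entry for a pathname, or null when the
// route doesn't map to a known micro-frontend.
function matchRoute(pathname: string, registry: Registry): RegistryEntry | null {
  const [segment] = pathname.split('/').filter(Boolean);
  if (!segment) return null;
  return registry[segmentToKey(segment)] ?? null;
}
```

A pathname like `/flight-planning/new` resolves to the `flight_planning` remote; anything unmatched falls through to the shell's not-found view rather than triggering a remote load.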

Module Federation Configuration

Each micro-frontend had its own Webpack configuration that exposed its root component and declared shared dependencies. The critical piece was the shared configuration — getting this wrong meant users would download a separate copy of React for every module they visited.

js
// flight-planning/webpack.config.js
const { ModuleFederationPlugin } = require('webpack').container;
 
module.exports = {
  plugins: [
    new ModuleFederationPlugin({
      name: 'flight_planning',
      filename: 'remoteEntry.js',
      exposes: {
        './FlightPlanningApp': './src/FlightPlanningApp',
      },
      shared: {
        react: {
          singleton: true,
          requiredVersion: '^18.2.0',
          eager: false,
        },
        'react-dom': {
          singleton: true,
          requiredVersion: '^18.2.0',
          eager: false,
        },
        '@universal/design-system': {
          singleton: true,
          requiredVersion: '^4.0.0',
        },
        '@universal/auth-context': {
          singleton: true,
          requiredVersion: '^2.0.0',
        },
      },
    }),
  ],
};

Setting singleton: true on React ensured that only one copy of React existed at runtime, regardless of how many micro-frontends loaded. The requiredVersion field acted as a safety net — if a team accidentally bumped to React 19 before everyone else was ready, Module Federation would log a version-mismatch warning in the browser console rather than silently running mismatched React copies side by side.
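For reference, the host side of this contract declares the same shared dependencies plus a remotes map. The sketch below shows the simplest static form — in the production setup the shell resolved remote URLs at runtime through a configuration service, so the hard-coded URL here is an illustrative placeholder.

```javascript
// shell/webpack.config.js — hypothetical sketch of the host side.
// The production shell resolved remote URLs at runtime, so the
// hard-coded CDN URL below is a placeholder.
const { ModuleFederationPlugin } = require('webpack').container;

module.exports = {
  plugins: [
    new ModuleFederationPlugin({
      name: 'shell',
      remotes: {
        // "<global name>@<remoteEntry URL>" — the global must match the
        // `name` field in the remote's own ModuleFederationPlugin config.
        flight_planning:
          'flight_planning@https://cdn.example.com/mfe/flight-planning/remoteEntry.js',
      },
      shared: {
        // The host loads these eagerly so remotes can consume them from
        // the shared scope without bundling their own copies.
        react: { singleton: true, requiredVersion: '^18.2.0', eager: true },
        'react-dom': { singleton: true, requiredVersion: '^18.2.0', eager: true },
        '@universal/design-system': { singleton: true, requiredVersion: '^4.0.0' },
        '@universal/auth-context': { singleton: true, requiredVersion: '^2.0.0' },
      },
    }),
  ],
};
```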

Shared State and Communication

We established three tiers of state sharing:

Tier 1: Authentication and User Context — Provided by the shell through React context. Every micro-frontend received the authenticated user, permissions, and a logout function. This was the only truly global state.

Tier 2: Cross-Module Communication — When flight planning needed to tell crew scheduling about a new flight, we used a custom event bus: a small typed publish/subscribe singleton shared across modules through window. No shared Redux store, no global state management library.

ts
// packages/event-bus/src/index.ts
type EventPayload = Record<string, unknown>;
 
interface TypedEvent<T extends EventPayload = EventPayload> {
  type: string;
  payload: T;
  source: string;
  timestamp: number;
}
 
class MicroFrontendEventBus {
  private listeners = new Map<string, Set<(event: TypedEvent) => void>>();
 
  emit<T extends EventPayload>(
    type: string,
    payload: T,
    source: string
  ): void {
    const event: TypedEvent<T> = {
      type,
      payload,
      source,
      timestamp: Date.now(),
    };
 
    const handlers = this.listeners.get(type);
    if (handlers) {
      handlers.forEach((handler) => {
        try {
          handler(event);
        } catch (error) {
          console.error(
            `Event handler error in ${type}:`,
            error
          );
        }
      });
    }
  }
 
  on<T extends EventPayload>(
    type: string,
    handler: (event: TypedEvent<T>) => void
  ): () => void {
    if (!this.listeners.has(type)) {
      this.listeners.set(type, new Set());
    }
    this.listeners.get(type)!.add(handler as (event: TypedEvent) => void);
 
    // Return unsubscribe function
    return () => {
      this.listeners.get(type)?.delete(handler as (event: TypedEvent) => void);
    };
  }
}
 
// Singleton instance attached to window for cross-module access
export const eventBus =
  (window as any).__MFE_EVENT_BUS__ ||
  ((window as any).__MFE_EVENT_BUS__ = new MicroFrontendEventBus());
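Usage on both sides of a contract looked roughly like this. The sketch below re-declares a minimal stand-in for the bus so it is self-contained, and the FLIGHT_CREATED payload shape is an assumed example, not the real contract.

```typescript
// Minimal self-contained stand-in for MicroFrontendEventBus, just
// enough to show the emit/on contract. The FLIGHT_CREATED payload
// (flightId) is an assumed example shape.
interface BusEvent {
  type: string;
  payload: unknown;
  source: string;
}

type Handler = (event: BusEvent) => void;

const listeners = new Map<string, Set<Handler>>();

// Subscribe; returns an unsubscribe function, mirroring on() above.
function on(type: string, handler: Handler): () => void {
  if (!listeners.has(type)) listeners.set(type, new Set());
  listeners.get(type)!.add(handler);
  return () => {
    listeners.get(type)?.delete(handler);
  };
}

// Publish to all current subscribers of the event type.
function emit(type: string, payload: unknown, source: string): void {
  listeners.get(type)?.forEach((handler) => handler({ type, payload, source }));
}

// Crew scheduling subscribes when its view mounts...
const received: string[] = [];
const unsubscribe = on('FLIGHT_CREATED', (event) => {
  received.push((event.payload as { flightId: string }).flightId);
});

// ...flight planning emits when a dispatcher creates a flight...
emit('FLIGHT_CREATED', { flightId: 'UA-1042' }, 'flight_planning');

// ...and the subscriber cleans up on unmount.
unsubscribe();
```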

Tier 3: Module-Internal State — Each micro-frontend managed its own state however it wanted. Flight planning used Zustand, crew scheduling used Redux Toolkit, and the weather module used plain React context. This autonomy was a feature, not a bug — teams picked tools that fit their problem domain.

Independent Deployment Pipeline

Each micro-frontend had its own CI/CD pipeline that built, tested, and deployed to a versioned path on our CDN. The shell application's module registry pointed to these CDN URLs, and we used a configuration service to manage which version of each micro-frontend was active in production.

json
{
  "modules": {
    "flight_planning": {
      "url": "https://cdn.universal.aero/mfe/flight-planning/v2.14.3/remoteEntry.js",
      "integrity": "sha384-abc123...",
      "enabled": true
    },
    "crew_scheduling": {
      "url": "https://cdn.universal.aero/mfe/crew-scheduling/v3.2.1/remoteEntry.js",
      "integrity": "sha384-def456...",
      "enabled": true
    }
  }
}

This gave us instant rollback capability. If crew scheduling v3.2.1 had a bug, we updated the configuration to point back to v3.2.0, and the next page load would pick up the older version. No redeployment of the shell or any other module required.
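The shell's resolution step against that configuration can be sketched as a small pure function. The types mirror the JSON shape above; resolveRemoteEntry is a hypothetical name, and the script-tag injection (with the integrity attribute) that follows in the real shell is omitted.

```typescript
// Hedged sketch of the shell resolving a module from the
// configuration service JSON. resolveRemoteEntry is a hypothetical
// name; fetching the config and injecting the script are omitted.
interface ModuleConfig {
  url: string;
  integrity: string;
  enabled: boolean;
}

interface PlatformConfig {
  modules: Record<string, ModuleConfig>;
}

// Return the CDN entry for an enabled module, or null so the shell
// can fall back to its static error state.
function resolveRemoteEntry(
  config: PlatformConfig,
  remoteName: string
): ModuleConfig | null {
  const mod = config.modules[remoteName];
  if (!mod || !mod.enabled) return null;
  return mod;
}
```

Disabling a module in the config (enabled: false) doubles as a kill switch: the shell skips the load entirely and renders the fallback instead.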

Key Decisions & Trade-offs

Module Federation over iframes. Iframes provide the strongest isolation but make shared state and consistent UX nearly impossible. Our users needed seamless navigation between modules, and we needed shared authentication. Module Federation gave us runtime composition with acceptable isolation boundaries.

Custom event bus over shared state management. We considered a shared Redux store but rejected it because it would have created tight coupling between teams. If the crew scheduling team changed their state shape, flight planning's selectors could break. The event bus enforced a message-passing contract — teams agreed on event types and payloads, and everything else was internal.

Versioned CDN paths over latest-tag deployments. We initially considered having each module deploy to a fixed URL (always the latest version). But this made rollbacks slow and debugging harder. Versioned paths meant we could have v2.14.2 and v2.14.3 coexisting on the CDN, and switching between them was a config change.

Monorepo for shared packages, polyrepo for micro-frontends. The design system, event bus, auth context, and TypeScript types lived in a shared monorepo. Each micro-frontend lived in its own repository. This gave teams full autonomy over their build pipelines and dependency versions while keeping shared contracts in one place.

CSS Modules over global stylesheets. With five teams writing CSS, class name collisions were inevitable with global styles. We mandated CSS Modules for all micro-frontends. The design system provided tokens as CSS custom properties, and each module scoped its styles through CSS Modules.

Results & Outcomes

The most immediate impact was on deployment cadence. Teams went from waiting for the biweekly release train to deploying whenever their changes were ready. The crew scheduling team, which had the most active feature development, went from two deployments per month to multiple deployments per week.

Failure isolation worked exactly as designed. During one incident, the weather briefing service's API went down. The weather micro-frontend showed an error state, but flight planning, crew scheduling, and fuel management continued operating normally. Under the old monolith, that API failure would have triggered cascading errors across the entire application.

Developer onboarding improved significantly. New engineers could start contributing to one micro-frontend without understanding the entire platform. The flight planning module's codebase was roughly one-fifth the size of the original monolith, which made it far more approachable.

The shared design system became more disciplined. Because changes to the design system affected all five micro-frontends, teams became more intentional about design system updates. We introduced a visual regression testing pipeline for the design system that caught breaking changes before they reached any micro-frontend.

Build times dropped considerably. Instead of building the entire monolith, each micro-frontend built only its own code plus shared packages. What had been a lengthy build-and-test cycle for the monolith became much shorter per-module builds.

What I'd Do Differently

Start with stricter API contracts. We defined event bus message types informally at first — a shared TypeScript interface in a Google Doc. This led to subtle bugs when teams interpreted the contract differently. I'd invest in a schema registry from day one, with automated contract testing in CI.
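A schema registry can start as small as a shared runtime validator that both the emitting and consuming modules exercise in their CI contract tests. The FLIGHT_CREATED fields below are an assumed example, not the real schema.

```typescript
// Sketch of the runtime contract validation we'd now set up from day
// one. The FLIGHT_CREATED payload fields are assumed examples.
interface FlightCreatedPayload {
  flightId: string;
  departureIcao: string;
  arrivalIcao: string;
}

// Narrowing type guard: true only when every required field is
// present with the right primitive type.
function isFlightCreatedPayload(value: unknown): value is FlightCreatedPayload {
  if (typeof value !== 'object' || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.flightId === 'string' &&
    typeof v.departureIcao === 'string' &&
    typeof v.arrivalIcao === 'string'
  );
}
```

With the guard living in the shared packages monorepo, a team that changes the payload shape breaks the contract test in every consumer's CI, instead of breaking the consumer in production.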

Invest in a better local development experience earlier. Running all five micro-frontends locally was resource-intensive and slow to start. We eventually built a "mock remote" system that let developers run their module against stubs of other modules, but we should have built this before the first micro-frontend migration, not after the third.

Consider server-side composition. Our approach was entirely client-side — the browser loaded the shell, which loaded remote modules. This meant the initial page load required downloading the shell, then the remote entry, then the module code. Server-side composition using something like Module Federation in Next.js or a reverse proxy stitching layer would have improved first-load performance.

Establish shared testing patterns. Each team developed their own testing approach. Some had extensive integration tests, others relied on unit tests. We should have established a shared contract testing framework and minimum coverage standards as part of the micro-frontend governance model.

FAQ

When should you use micro-frontends?

Micro-frontends make sense when multiple teams own distinct sections of a large application and need independent deployment cycles. If a single team maintains the entire frontend or the app is small, a well-structured monolith is simpler and more efficient. In our case at Universal Weather, we had five teams working on clearly separated domain modules — flight planning, crew scheduling, weather, fuel, and customer portal. The coordination overhead of the monolith was slowing everyone down. The key signals that pushed us toward micro-frontends were: merge conflicts across team boundaries happening weekly, a two-week release train that blocked urgent fixes, and one team's bug taking down modules they didn't even own. If you're not experiencing these pain points, micro-frontends add complexity you probably don't need.

How do micro-frontends communicate with each other?

Common patterns include custom events, a shared event bus, URL-based state via query parameters, or a lightweight shared store. The key is keeping coupling minimal so each micro-frontend can be developed and deployed independently. We chose a typed event bus — a small publish/subscribe singleton shared across modules through window. When a dispatcher creates a new flight in the flight planning module, it emits a FLIGHT_CREATED event with the flight details. The crew scheduling module listens for that event and refreshes its assignment view. The contract between modules is the event type and payload shape — nothing else. We enforced this contract through a shared TypeScript interface package that both the emitting and consuming modules depended on. This pattern kept teams decoupled while still enabling the cross-module workflows that users expected.

What are the performance implications of micro-frontends?

The main risks are duplicate dependencies and additional network requests. Module Federation's shared scope mitigates duplication by letting micro-frontends share React, common libraries, and design system packages at runtime rather than bundling them separately. In our architecture, React, React DOM, and the design system were loaded once by the shell and shared with all remote modules. The performance cost we couldn't fully eliminate was the sequential loading: the browser loads the shell, then fetches remoteEntry.js for the active module, then loads the module's chunks. We mitigated this with <link rel="preload"> hints in the shell's HTML for the most commonly visited modules and by keeping each module's initial bundle small through aggressive code splitting. We also set up performance budgets in CI — if a module's entry chunk exceeded 100 KB gzipped, the build failed and pointed the team at further code splitting.
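The budget gate itself is small. A sketch, assuming the CI step already has the gzipped size of the entry chunk (the 100 KB limit comes from the text; the bundler-stats plumbing is omitted):

```typescript
// Sketch of the CI performance-budget gate. The 100 KB gzipped limit
// matches the text; obtaining the entry-chunk size from bundler stats
// is omitted and assumed.
const BUDGET_BYTES = 100 * 1024;

// Throw to fail the CI step when a module's entry chunk is over budget.
function checkEntryChunkBudget(moduleName: string, gzippedBytes: number): void {
  if (gzippedBytes > BUDGET_BYTES) {
    throw new Error(
      `${moduleName}: entry chunk is ${gzippedBytes} bytes gzipped, ` +
        `over the ${BUDGET_BYTES}-byte budget — split it further`
    );
  }
}
```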

Article Author

Sadam Hussain

Senior Full Stack Developer

Senior Full Stack Developer with over 7 years of experience building React, Next.js, Node.js, TypeScript, and AI-powered web platforms.
