Next Frontiers in Nx Workspace: An Advanced Developer’s Guide
1. Introduction to Next Frontiers in Nx Workspace
Welcome to the “Next Frontiers in Nx Workspace” guide. This document is crafted for experienced Nx users who have already mastered the fundamentals and intermediate-to-advanced concepts of monorepo management with Nx. Our journey together will delve into the bleeding edge of Nx capabilities, equipping you with the knowledge and practical skills to tackle the most complex challenges in modern software development.
What are these “Next Frontiers” topics?
The “Next Frontiers” encompass advanced paradigms and tools that extend Nx Workspace beyond conventional monorepo management. We will explore:
- Nx and AI Integration: Leveraging artificial intelligence to create smarter, more autonomous development and CI/CD workflows.
- Advanced Module Federation Patterns: Deep diving into dynamic remote loading, robust versioning, and strategies for overcoming complex micro-frontend challenges.
- Complex Monorepo Refactoring: Mastering the art of migrating legacy projects into Nx and strategically decomposing monolithic applications within an Nx monorepo.
- Security in Monorepos: Implementing comprehensive security measures, from dependency vulnerability scanning to secrets management and access control within a monorepo context.
- Enterprise Nx Cloud Features: Optimizing CI/CD pipelines with advanced Distributed Task Execution (DTE), interpreting build analytics, and managing custom artifacts.
- Advanced Production Deployment: Crafting intelligent, granular deployment strategies for affected projects and coordinating complex cross-project releases.
- Infrastructure-as-Code (IaC) within Nx: Integrating and managing infrastructure definitions directly within your monorepo.
Why are these topics important for an Nx expert?
As an Nx expert, understanding these topics is crucial for:
- Driving Innovation: Implementing cutting-edge AI-powered tools to automate tedious tasks and accelerate development cycles.
- Scaling Micro-Frontends: Designing highly resilient, scalable, and independently deployable micro-frontend architectures for large enterprises.
- Managing Technical Debt: Strategically evolving existing systems by effectively refactoring and decomposing large codebases.
- Fortifying Security: Ensuring the integrity and security of your monorepo and its deployment pipelines, a non-negotiable in enterprise environments.
- Optimizing Performance & Reliability: Squeezing maximum efficiency out of your CI/CD processes and ensuring robust, coordinated deployments.
- Adopting Polyglot Architectures: Seamlessly integrating diverse technology stacks and infrastructure definitions within a single, unified workspace.
These skills are vital for architects, tech leads, and senior developers who are responsible for designing, implementing, and maintaining large-scale, complex monorepos in high-performance organizations.
Prerequisites for this document
This document assumes you have:
- A solid understanding of Nx Workspace fundamentals (project graph, executors, generators, caching).
- Experience with Nx in practical, non-trivial projects.
- Familiarity with modern web development concepts (e.g., React/Angular/Vue, Node.js, Webpack).
- Working knowledge of Git and common CI/CD concepts (e.g., GitHub Actions, GitLab CI).
- Basic understanding of cloud platforms (e.g., AWS, Azure, GCP) if you plan to follow the deployment examples.
Let’s embark on this advanced journey!
2. Nx and AI Integration: Smarter Development Workflows
The landscape of software development is being revolutionized by AI. Nx is at the forefront of this integration, leveraging AI to provide a deeper understanding of your monorepo and automate complex tasks, from code generation to self-healing CI pipelines.
What is it? Explanation of how Nx integrates with AI assistants and its impact on development.
Nx integrates with AI coding assistants primarily through its Model Context Protocol (MCP) server. MCP, an open standard, allows AI models to interact with your development environment; the Nx MCP server provides them with rich, structured metadata about your Nx workspace.
The impact on development is profound:
- Deep Workspace Understanding: AI assistants gain a comprehensive view of your monorepo’s architecture, project relationships, dependencies, and ownership. This allows them to reason beyond individual files and understand the “big picture.”
- Real-time Terminal Integration: AI can monitor your terminal output, identify errors, and combine this with codebase context to suggest fixes or explanations.
- Enhanced Code Generation & Refactoring: AI-powered generators can scaffold code that adheres to your team’s best practices, and AI can provide intelligent suggestions for refactoring and performance optimization.
- CI Pipeline Context: AI assistants can access CI/CD failure logs and context from Nx Cloud to diagnose issues and propose fixes, drastically reducing “time to green.”
- Cross-Project Impact Analysis: AI can help understand the implications of changes across your entire monorepo, crucial for large-scale refactorings.
In essence, Nx transforms generic AI code helpers into architecturally-aware collaborators that understand your specific workspace and can make intelligent, context-aware decisions.
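If your editor does not wire this up automatically via Nx Console, registering the MCP server is typically a small config entry. As a hedged sketch (exact keys vary by editor and `nx-mcp` version), a `.vscode/mcp.json` might look like:

```json
{
  "servers": {
    "nx-mcp": {
      "command": "npx",
      "args": ["nx-mcp@latest", "/path/to/your/workspace"]
    }
  }
}
```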
Self-Healing CI
Nx Cloud’s Self-Healing CI is an AI-powered system that automatically detects, analyzes, and proposes fixes for CI failures. This dramatically reduces the time developers spend debugging simple CI errors, improving “time to green” and keeping teams focused on feature development.
Detailed explanation of how Nx Cloud’s Self-Healing CI works.
When a PR is pushed and CI tasks fail, Nx Cloud’s Self-Healing CI initiates the following sequence:
- Failure Detection: Nx Cloud automatically identifies the failing tasks.
- AI Agent Analysis: An AI agent starts, examining error logs, leveraging Nx’s project graph for codebase structure and dependency context, and pinpointing the root cause.
- Fix Proposal: The AI generates a fix and presents it to the developer, typically via Nx Console notifications in the IDE or as a comment on the GitHub PR.
- Parallel Validation (Optional but recommended): Concurrently, the AI agent re-runs the originally failed tasks with the proposed changes to verify the fix.
- Human Review and Approval: The developer reviews the proposed fix (including a Git diff) and can approve or reject it.
- Automatic PR Update: Upon approval, the AI agent automatically commits the fix to the original PR as a new commit.
- Full CI Re-run: The complete CI pipeline runs again with the applied fix, aiming for a green status.
This “human-in-the-loop” approach ensures that AI doesn’t make autonomous changes but provides working fixes for review, maintaining developer control while automating tedious debugging.
Hands-on Example: Configure a GitHub Actions workflow (or similar CI) to enable nx fix-ci and demonstrate its operation with an intentional lint/test failure, showing the automated fix proposal. Include full ci.yml and expected CLI/GitHub outputs.
Prerequisites:
- An Nx Workspace connected to Nx Cloud. If not, run `npx nx@latest connect` and follow the prompts.
- Ensure AI features are enabled in your Nx Cloud dashboard (Organization Settings > AI features).
- Have Nx Console installed in your editor for in-editor notifications.
Step 1: Create a new Nx workspace and application.
# Create a new Nx workspace
npx create-nx-workspace@latest self-healing-demo --preset=react-standalone --no-nxcloud --no-install
cd self-healing-demo
npm install
# Generate a React application
npx nx g @nx/react:app my-app --directory=apps/my-app --unitTestRunner=jest --e2eTestRunner=cypress --style=css --bundler=webpack --projectNameAndRootFormat=as-provided
# Connect to Nx Cloud (if not already done)
# This will guide you through connecting to a new or existing Nx Cloud workspace.
# Make sure to enable AI features in your Nx Cloud organization settings.
npx nx@latest connect
Step 2: Introduce an intentional lint failure in apps/my-app/src/app/app.tsx.
Modify apps/my-app/src/app/app.tsx to include a lint error, for example, by using single quotes instead of double quotes if your lint rules enforce double quotes (or vice-versa), or by introducing an unused variable.
// apps/my-app/src/app/app.tsx
import styles from './app.module.css';
import NxWelcome from './nx-welcome';
export function App() {
// Intentionally introduce a lint error: using single quotes
const greeting = 'Hello Nx Expert!';
return (
<>
<NxWelcome title="my-app" />
<div />
</>
);
}
export default App;
Step 3: Create a GitHub Actions workflow (.github/workflows/ci.yml).
This workflow will run nx affected:lint and then nx-cloud fix-ci if there are failures.
```yaml
# .github/workflows/ci.yml
name: CI

on:
  push:
    branches:
      - main
      - master
  pull_request:
    types: [opened, synchronize, reopened, ready_for_review]

jobs:
  main:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          # This is important! The self-healing action needs write permissions.
          token: ${{ secrets.GITHUB_TOKEN }}

      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'

      - name: Install dependencies
        run: npm ci

      - name: Setup Nx Cloud
        run: npx nx-cloud start-ci-run

      - name: Run affected lint
        # This will fail due to the intentional lint error
        run: npx nx affected --target=lint --max-parallel=3 --configuration=ci
        continue-on-error: true # Allow CI to continue even if lint fails

      # Important: this step must run at the end with `if: always()`
      # to ensure it executes even when previous steps fail.
      - name: Nx Cloud Self-Healing CI
        run: npx nx-cloud fix-ci
        if: always()
        env:
          # Your Nx Cloud access token (read/write access for AI features)
          NX_CLOUD_ACCESS_TOKEN: ${{ secrets.NX_CLOUD_ACCESS_TOKEN }}
```
Step 4: Add Nx Cloud access token to GitHub Secrets.
- Go to your GitHub repository settings.
- Navigate to “Secrets and variables” -> “Actions”.
- Click “New repository secret”.
- Name the secret `NX_CLOUD_ACCESS_TOKEN`.
- Generate an Nx Cloud access token from your Nx Cloud workspace settings (Settings -> Tokens). Ensure it has read/write access for AI features.
- Paste the generated token into the secret value.
Step 5: Commit the changes and open a Pull Request.
git add .
git commit -m "feat: Introduce intentional lint error for self-healing CI demo"
git push origin <your-branch-name>
Now, create a Pull Request on GitHub from <your-branch-name> to main.
Expected Outputs:
- GitHub Actions Run:
  - The "Run affected lint" step will fail because of the intentional lint error.
  - The "Nx Cloud Self-Healing CI" step will run (due to `if: always()`).
  - Nx Cloud will detect the failure, an AI agent will analyze it, and propose a fix.
- Nx Console Notification (if installed and configured):
  - You will receive a notification in your IDE (VS Code, Cursor, IntelliJ) about the CI failure and the proposed fix. Clicking it will show the failed task log and the Git diff of the proposed fix.
- GitHub Pull Request:
  - A comment will appear on your Pull Request from the Nx Cloud bot, stating that CI failed and a fix has been generated. It will provide a link to review the fix in Nx Cloud's web UI.
  - Example: "CI failed. A fix has been generated. View Fix"
- Review and Apply:
  - Click the "View Fix" link. You'll see the proposed changes (e.g., `'Hello Nx Expert!'` changing to `"Hello Nx Expert!"`).
  - You'll have an option to "Approve & Apply" the fix.
  - Upon approval, the Nx Cloud bot will push a new commit to your PR with the fix (e.g., `fix(ci): Automated fix for lint failure`).
  - A new CI run will automatically trigger, and this time, the lint step should pass, making your PR green.
This hands-on demonstration showcases how Self-Healing CI proactively identifies and rectifies common CI failures, drastically reducing developer intervention and improving development velocity.
AI-Enhanced Code Generation & Refactoring
As of August 31, 2025, Nx offers robust, publicly available AI-driven capabilities that significantly enhance code generation and refactoring within a monorepo. These features primarily leverage the Nx Model Context Protocol (MCP) server, integrating with popular AI assistants.
Discuss current and emerging capabilities for AI assistance in Nx (e.g., smart generators, automated refactoring suggestions, performance optimization).
Current Capabilities (as of late 2025):
- Workspace-Aware Code Generation:
- Smart Generators: AI assistants, powered by the Nx MCP server, can now recommend and even pre-fill parameters for Nx generators. Instead of AI generating code from scratch, it intelligently uses existing, proven Nx generators, ensuring consistency with organizational best practices and architecture.
- Contextual Scaffolding: When asked to create a new feature (e.g., “Create a new React library for user profiles”), the AI can analyze your workspace (project graph, tags, existing libraries) to suggest the optimal location and generator options, then execute the generator and help integrate the new code.
- Architectural Understanding for Refactoring:
- Cross-Project Impact Analysis: AI can leverage Nx’s project graph to understand dependencies. When you propose a change to a shared library, the AI can identify all affected projects, visualize the impact, and suggest safer refactoring strategies.
- Automated Migration Assistance: While full automated refactoring of arbitrary code is still emerging, AI can assist with Nx migrations by providing enhanced context for updates to dependencies and APIs.
- Real-time Error Debugging & Explanation:
- Terminal Awareness: AI assistants can monitor terminal output during development, identify failing tests or build errors, and combine this with deep workspace context (code, project structure) to explain issues and suggest immediate fixes.
- Documentation-Aware Configuration: When configuring Nx features (e.g., Nx Release), AI can query up-to-date Nx documentation via MCP to provide accurate, hallucination-free configuration snippets.
Emerging Capabilities (early 2026 and beyond):
- Autonomous CI Optimization: AI agents learning from CI patterns to automatically scale agents and optimize build times on Nx Cloud.
- Cross-Repository Intelligence (Nx Polygraph): Extending AI context across multiple repositories in an organization, enabling system-wide refactoring and analysis, even for interconnected polyrepos.
- Agentic Refactorings: Autonomous AI agents performing large-scale migrations and tech debt cleanup across an entire organization, understanding cross-repo dependencies.
- Proactive Performance Optimization: AI identifying potential performance bottlenecks in code or build configurations based on usage patterns and historical data, and suggesting optimizations.
Hands-on Example: Demonstrating publicly available AI-driven Nx tooling, with a conceptual walkthrough where features are still emerging.
As of late 2025, the primary publicly available AI-driven Nx tools are the Nx Model Context Protocol (MCP) server integration with AI assistants (like GitHub Copilot, Cursor, Claude) and Nx Cloud’s Self-Healing CI. We demonstrated Self-Healing CI above. Here, we’ll focus on the AI-enhanced code generation aspect through the MCP.
Conceptual Example: AI-Enhanced Library Generation
Imagine you want to create a new feature library in your Nx monorepo for handling user settings.
Prerequisites:
- An Nx Workspace with `@nx/react` (or your preferred framework plugin) installed.
- Nx Console installed in your editor (e.g., VS Code).
- A compatible AI assistant (e.g., GitHub Copilot Chat, Cursor) configured with the Nx MCP server. (This is usually done automatically by Nx Console or can be set up manually in `.vscode/mcp.json`.)
Scenario: You want to add a new feature to manage user settings.
Developer Prompt to AI Assistant:
"Create a new React library in the `packages/user/feat-settings` folder, and call the library `user-settings`. After that, connect it to the main `admin-dashboard` application's routing."
AI Assistant’s Internal Process (powered by Nx MCP):
- Tool Call `nx_generators`: The AI assistant first queries the `nx_generators` tool to list available generators.
- Generator Selection: It identifies the `@nx/react:library` generator as the most appropriate.
- Schema Retrieval: It then uses `nx_generator_schema` to understand the available options for `@nx/react:library`.
- Contextual Parameter Pre-filling: Based on the prompt and the workspace context (e.g., the existing `admin-dashboard` app, common patterns for feature libraries), it determines the following parameters:
  - `name`: `user-settings`
  - `directory`: `packages/user/feat-settings`
  - `projectNameAndRootFormat`: `as-provided`
  - Potentially other options such as `unitTestRunner`, `linter`, `style`.
- Generator UI Invocation: The AI assistant then "opens" the Nx Console Generate UI with these pre-filled values. This allows the human developer to review and adjust before execution.
- Human Review & Execution: The developer sees the pre-filled form in Nx Console and clicks "Generate":

```
@nx/react:library
  - name: user-settings
  - directory: packages/user/feat-settings
  - ... (other pre-filled options)
```

- Integration & Follow-up: After the library is generated, the AI assistant might then suggest how to connect it to the `admin-dashboard`'s routing, potentially by:
  - Suggesting code to add a route in `apps/admin-dashboard/src/app/app.tsx` (or `app.routes.ts` for Angular).
  - Identifying necessary imports.
  - Offering to run `nx lint` to ensure the integration is correct.
Expected CLI/Nx Console Outputs:
CLI output after generator execution:
```
NX  Generating React Library "user-settings"

CREATE packages/user/feat-settings/project.json
CREATE packages/user/feat-settings/src/index.ts
CREATE packages/user/feat-settings/src/lib/user-settings.ts
CREATE packages/user/feat-settings/src/lib/user-settings.module.css
CREATE packages/user/feat-settings/src/lib/user-settings.spec.ts
UPDATE tsconfig.base.json

NX  Successfully ran generator @nx/react:library for user-settings
```

Nx Console follow-up (example, depending on AI assistant):

```
AI Assistant: "I've created the `user-settings` library. Now, let's integrate it
into your `admin-dashboard` application's routing. Would you like me to add a
route for `/settings` that lazily loads this new library?"

[Yes, add route]  [No, I'll do it manually]
```
This interaction demonstrates how Nx’s structured generators and project graph, combined with the MCP, allow AI assistants to perform complex, context-aware actions, moving beyond simple code snippets to truly understand and modify your monorepo architecture. The “human-in-the-loop” approach ensures quality and adherence to established patterns.
3. Advanced Module Federation Patterns
Module Federation, powered by Webpack, revolutionizes how micro-frontends (MFEs) are built and integrated, enabling true independent deployment and shared codebases at runtime. Nx provides first-class support for Module Federation, and this section dives into advanced patterns for highly scalable and resilient MFE architectures.
Recap: Briefly touch upon the basics of Module Federation.
At its core, Module Federation allows JavaScript applications to dynamically load code from another application (a “remote”) at runtime. The key concepts are:
- Host (Shell) Application: The main application that consumes one or more remote applications.
- Remote Application: An application that exposes (shares) modules for consumption by a host.
- Exposed Modules: Specific files or components within a remote application that are made available to hosts.
- Shared Modules: Libraries (e.g., React, Angular, lodash) that can be shared between hosts and remotes to reduce bundle size and avoid version conflicts.
The primary benefit is that remotes can be developed and deployed independently, yet function as part of a cohesive whole, making it ideal for micro-frontend architectures.
Dynamic Remote Loading & Micro-Frontend Orchestration
Dynamic remote loading takes Module Federation a step further by allowing the host application to determine which remotes to load, and from where, at runtime. This provides immense flexibility, enabling scenarios like A/B testing, feature toggles, and multi-tenant architectures where different users see different sets of micro-frontends without a host rebuild.
Explanation of dynamic loading vs. static, runtime discovery.
- Static Module Federation: The host application's `webpack.config.js` (or `module-federation.config.ts` in Nx) explicitly lists all remotes and their URLs at build time. If a remote's URL changes or a new remote is added, the host application must be rebuilt and redeployed.

```ts
// Example: static remote definition
const config: ModuleFederationConfig = {
  name: 'host',
  remotes: [
    ['remote1', 'http://localhost:4201/remoteEntry.js'],
    ['remote2', 'http://localhost:4202/remoteEntry.js'],
  ],
  // ...
};
```

- Dynamic Module Federation (Runtime Discovery): The host application does not know the URLs of its remotes at build time. Instead, it discovers them at runtime. This is achieved by:
  - Fetching a Configuration: The host fetches a JSON file, hits an API endpoint, or consults an in-memory registry to get the remote definitions (name and URL).
  - Runtime Initialization: Using `@module-federation/enhanced/runtime` (or a similar utility), the host dynamically initializes the remotes.
  - Loading: Once initialized, the host can load modules from these dynamically configured remotes.
This approach provides “build once, deploy everywhere” capability, as the host artifact remains the same across environments, with only the runtime configuration changing. It also allows for adding or removing micro-frontends without touching the host.
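To make "fetching a configuration" concrete, here is a minimal TypeScript sketch; the manifest URL and JSON shape are assumptions for illustration, not an Nx convention:

```ts
// Hypothetical runtime discovery: fetch remote definitions from a JSON manifest
// deployed alongside the host (swap the URL for your config service).
interface RemoteDefinition {
  name: string;
  entry: string; // URL to the remoteEntry.js
}

export async function fetchRemoteManifest(): Promise<RemoteDefinition[]> {
  const response = await fetch('/assets/module-federation.manifest.json');
  if (!response.ok) {
    throw new Error(`Failed to load remote manifest: ${response.status}`);
  }
  return (await response.json()) as RemoteDefinition[];
}
```

The hands-on example below uses an in-memory registry instead, but swapping in a fetch like this is the only change needed for true runtime discovery.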
Hands-on Example: Build an Nx host application that dynamically loads remotes based on a runtime configuration (e.g., JSON file fetched at runtime or a simple in-memory registry). Demonstrate how to add/remove remotes without rebuilding the host. Include full code, configuration, and execution steps.
We will use an in-memory registry for simplicity, demonstrating the core principle of dynamic loading. This can easily be extended to fetch from a JSON file or API.
Step 1: Create a new Nx workspace and a host application.
# Create a new Nx workspace
npx create-nx-workspace@latest dynamic-mf-demo --preset=react-standalone --no-nxcloud --no-install
cd dynamic-mf-demo
npm install
# Add the React plugin if the preset did not already install it
npx nx add @nx/react
# Generate a host application
npx nx g @nx/react:host shell --directory=apps/shell --projectNameAndRootFormat=as-provided --bundler=webpack --style=css
Step 2: Generate two remote applications.
npx nx g @nx/react:remote remote-alpha --directory=apps/remote-alpha --host=shell --projectNameAndRootFormat=as-provided --bundler=webpack --style=css
npx nx g @nx/react:remote remote-beta --directory=apps/remote-beta --host=shell --projectNameAndRootFormat=as-provided --bundler=webpack --style=css
Note: The --host=shell flag here adds static remote entries to apps/shell/module-federation.config.ts. We will remove these to set up dynamic loading manually.
Step 3: Modify apps/shell/module-federation.config.ts to remove static remotes.
Open apps/shell/module-federation.config.ts and ensure the remotes array is empty.
// apps/shell/module-federation.config.ts
import { ModuleFederationConfig } from '@nx/webpack/module-federation';
const config: ModuleFederationConfig = {
name: 'shell',
remotes: [], // Make sure this is empty for dynamic loading
};
export default config;
Step 4: Create a dynamic remote registry and loader in apps/shell/src/remotes-config.ts.
This file will simulate runtime discovery of remotes.
// apps/shell/src/remotes-config.ts
import { init, loadRemote as mfLoadRemote } from '@module-federation/enhanced/runtime';
// In a real application, this would be fetched from an API or a CDN-hosted JSON
interface RemoteDefinition {
name: string;
entry: string; // The URL to the remoteEntry.js
}
const ALL_REMOTES: RemoteDefinition[] = [
{ name: 'remote-alpha', entry: 'http://localhost:4201/remoteEntry.js' },
{ name: 'remote-beta', entry: 'http://localhost:4202/remoteEntry.js' },
];
export async function initializeDynamicRemotes(
activeRemoteNames: string[]
): Promise<void> {
const remotesToLoad = ALL_REMOTES.filter(remote =>
activeRemoteNames.includes(remote.name)
);
console.log('Dynamically initializing remotes:', remotesToLoad.map(r => r.name));
// Initialize the Module Federation runtime with dynamic remotes
init({
name: 'shell',
remotes: remotesToLoad.map(remote => ({
name: remote.name,
entry: remote.entry,
})),
// shared: [], // Define shared dependencies if needed
});
}
// Helper to load a remote module dynamically
export const loadRemote = async (remoteName: string, exposedModule: string) => {
  try {
    // The `init` call above registers these remotes with the enhanced runtime.
    // A template-literal `import()` is not statically analyzable by Webpack,
    // so we use the runtime's own loader (imported above as `mfLoadRemote`).
    return await mfLoadRemote<any>(`${remoteName}/${exposedModule}`);
  } catch (error) {
    console.error(`Failed to load remote module ${exposedModule} from ${remoteName}:`, error);
    throw error;
  }
};
Step 5: Modify apps/shell/src/app/app.tsx to use dynamic loading.
This will include conditional rendering and a basic UI to switch between loaded remotes.
// apps/shell/src/app/app.tsx
import { useEffect, useState, lazy, Suspense } from 'react';
import { initializeDynamicRemotes, loadRemote } from '../remotes-config';
// Define a type for our loaded remote components
type RemoteComponent = React.LazyExoticComponent<React.ComponentType<any>>;
export function App() {
const [activeRemote, setActiveRemote] = useState<string | null>(null);
const [loadedRemotes, setLoadedRemotes] = useState<Record<string, RemoteComponent>>({});
useEffect(() => {
// Initially load all available remotes (or a default set)
// In a real scenario, this 'activeRemoteNames' could come from user preferences, feature flags, etc.
const initialRemotes = ['remote-alpha', 'remote-beta'];
initializeDynamicRemotes(initialRemotes).then(() => {
// Once initialized, we can define our lazy components
const newLoadedRemotes: Record<string, RemoteComponent> = {};
initialRemotes.forEach(remoteName => {
// Assuming each remote exposes a './Module'
newLoadedRemotes[remoteName] = lazy(() => loadRemote(remoteName, 'Module').then(m => ({ default: m.default })));
});
setLoadedRemotes(newLoadedRemotes);
if (initialRemotes.length > 0) {
setActiveRemote(initialRemotes[0]); // Set a default active remote
}
});
}, []);
const RemoteComponentToRender = activeRemote ? loadedRemotes[activeRemote] : null;
const handleRemoteSwitch = (remoteName: string) => {
setActiveRemote(remoteName);
};
if (Object.keys(loadedRemotes).length === 0) {
return <div>Loading shell and remote configurations...</div>;
}
return (
<div>
<h1>Dynamic Module Federation Host (Shell)</h1>
<nav>
{Object.keys(loadedRemotes).map(remoteName => (
<button
key={remoteName}
onClick={() => handleRemoteSwitch(remoteName)}
style={{ fontWeight: activeRemote === remoteName ? 'bold' : 'normal', margin: '0 5px' }}
>
Load {remoteName}
</button>
))}
<button onClick={() => setActiveRemote(null)} style={{ margin: '0 5px' }}>
Unload All
</button>
</nav>
<div style={{ border: '2px solid blue', padding: '20px', marginTop: '20px' }}>
<h2>Currently Active Remote: {activeRemote || 'None'}</h2>
<Suspense fallback={<div>Loading remote content...</div>}>
{RemoteComponentToRender && <RemoteComponentToRender />}
</Suspense>
</div>
</div>
);
}
export default App;
Step 6: Modify apps/remote-alpha/src/app/app.tsx and apps/remote-beta/src/app/app.tsx to export a default component.
For apps/remote-alpha/src/app/app.tsx:
// apps/remote-alpha/src/app/app.tsx
import styles from './app.module.css';
export function App() {
return (
<div className={styles['container']}>
<h2>Hello from Remote Alpha!</h2>
<p>This content is dynamically loaded.</p>
</div>
);
}
export default App;
For apps/remote-beta/src/app/app.tsx:
// apps/remote-beta/src/app/app.tsx
import styles from './app.module.css';
export function App() {
return (
<div className={styles['container']}>
<h2>Greetings from Remote Beta!</h2>
<p>Another dynamically loaded module.</p>
</div>
);
}
export default App;
Step 7: Adjust apps/remote-alpha/src/main.ts and apps/remote-beta/src/main.ts to export the App component as a default (if not already).
The default React remote app generator usually exports App directly. Ensure apps/{remote-name}/src/main.ts looks something like this (the bootstrap file might already do this for Module Federation):
// apps/remote-alpha/src/main.ts (or bootstrap.ts)
import { App } from './app/app'; // Make sure to import the component
export default App; // Export the component as the default module
And similarly for remote-beta. If your remote-entry.ts already handles this, you might not need to change main.ts. For React Module Federation remotes, typically remote-entry.ts acts as the exposed module, and it should expose the root component.
Ensure apps/remote-alpha/module-federation.config.ts and apps/remote-beta/module-federation.config.ts expose a Module:
// apps/remote-alpha/module-federation.config.ts
import { ModuleFederationConfig } from '@nx/webpack/module-federation';
const config: ModuleFederationConfig = {
name: 'remote-alpha',
exposes: {
'./Module': './src/app/app.tsx', // Expose the root component
},
};
export default config;
And similarly for remote-beta.
Step 8: Run the applications.
Open three terminal windows:
- Terminal 1 (Remote Alpha): `npx nx serve remote-alpha --port=4201`
- Terminal 2 (Remote Beta): `npx nx serve remote-beta --port=4202`
- Terminal 3 (Shell/Host): `npx nx serve shell --port=4200`
Expected Outputs:
- Navigate your browser to `http://localhost:4200`.
- You should see the "Dynamic Module Federation Host (Shell)" title and two buttons: "Load remote-alpha" and "Load remote-beta".
- By default, "remote-alpha" should be loaded and you'll see "Hello from Remote Alpha! This content is dynamically loaded." within the blue border.
- Click the "Load remote-beta" button. The content within the blue border should switch to "Greetings from Remote Beta! Another dynamically loaded module." Crucially, the shell application itself was not rebuilt or redeployed. Only the JavaScript for `remote-beta` was fetched and rendered at runtime.
- Click "Unload All" to clear the remote content.
Demonstrating Adding/Removing Remotes Without Rebuilding the Host:
To demonstrate adding/removing remotes without rebuilding the host, you would:
1. Stop only the `shell` application. Keep `remote-alpha` and `remote-beta` running.
2. Modify `apps/shell/src/remotes-config.ts` to include a new, hypothetical remote (e.g., `remote-gamma`) or comment out one of the existing remotes in the `ALL_REMOTES` array. For this example, let's remove `remote-beta`.

```ts
// apps/shell/src/remotes-config.ts
// ...
const ALL_REMOTES: RemoteDefinition[] = [
  { name: 'remote-alpha', entry: 'http://localhost:4201/remoteEntry.js' },
  // { name: 'remote-beta', entry: 'http://localhost:4202/remoteEntry.js' }, // Commented out!
];
// ...
```

3. Restart only the `shell` application: `npx nx serve shell --port=4200`
4. Observe the result: When you navigate to `http://localhost:4200`, the "Load remote-beta" button will be gone, and you can only interact with "remote-alpha". The dev server did rebuild the shell on restart here, but no change to the host's Module Federation build configuration was required. In a real-world scenario, the `ALL_REMOTES` array would come from an external, mutable source (such as a database or a configuration service), allowing true dynamic updates without any code changes or restarts to the host application's artifact.
This example clearly illustrates the power of dynamic Module Federation in a micro-frontend architecture, allowing for flexible runtime orchestration without costly host redeployments.
Versioning Strategies for Remotes
Managing versions in a Module Federation setup, especially for shared libraries and components, is critical to avoid “DLL hell” scenarios and ensure compatibility between hosts and remotes that might be deployed independently.
Discussion of challenges with versioning shared modules and remotes.
- Dependency Duplication: If a host and a remote both depend on, say, React, but specify different versions or load them independently, the application could end up with two copies of React in the bundle, increasing size and potentially causing issues with React’s internal state management.
- Version Mismatch: A host might expect `lodash@4.0.0`, while a remote might expose a module that internally relies on `lodash@5.0.0`. This mismatch can lead to subtle bugs or runtime errors.
- Singleton Issues: Libraries that are designed to be singletons (e.g., a state management store like Redux or Zustand, or a UI theme provider) will break if multiple instances are loaded due to version conflicts.
- Breaking Changes: When a shared library introduces a breaking change, simply updating it in one remote might break other remotes or the host that still rely on the old API.
- Independent Deployment vs. Compatibility: The desire for independent deployment of micro-frontends conflicts with the need for strict compatibility for shared dependencies.
Nx’s Module Federation support, especially with @module-federation/enhanced, provides mechanisms to manage these challenges effectively through clever Webpack configurations. Key strategies involve:
- `shared` configuration: This Webpack Module Federation plugin option tells Webpack which modules to share and how to handle version conflicts.
  - `singleton`: Ensures only one instance of the shared module exists in the runtime, preferring the host's version.
  - `strictVersion`: Throws an error if versions don't match.
  - `requiredVersion`: Specifies a semantic version range that must be satisfied.
  - `eager`: Loads the shared module immediately, which can be useful for critical dependencies or singletons that need to be available early.
Hands-on Example: Demonstrate how to manage versioning of shared libraries/components between remotes and host, ensuring compatibility. Use a strategy (e.g., singleton, eager loading, explicit version control) and show how breaking changes in a shared library can be handled. Include code changes and manifest updates.
We will demonstrate sharing a utility library with singleton: true and strictVersion: true, then introduce a breaking change and observe the failure.
Prerequisites:
- The `dynamic-mf-demo` workspace from the previous example.
- Ensure `shell`, `remote-alpha`, and `remote-beta` are set up.
Step 1: Create a shared utility library.
npx nx g @nx/js:lib shared-utils --directory=libs/shared/utils --unitTestRunner=jest --compiler=tsc --projectNameAndRootFormat=as-provided
Step 2: Add a utility function to libs/shared/utils/src/lib/shared-utils.ts.
// libs/shared/utils/src/lib/shared-utils.ts
export function formatGreeting(name: string): string {
return `Hello, ${name}! Welcome to the federated world. (v1)`;
}
export function getCurrentTime(): string {
return new Date().toLocaleTimeString();
}
And export it in libs/shared/utils/src/index.ts:
// libs/shared/utils/src/index.ts
export * from './lib/shared-utils';
Step 3: Configure shared modules in apps/shell/module-federation.config.ts, apps/remote-alpha/module-federation.config.ts, and apps/remote-beta/module-federation.config.ts.
We’ll share react, react-dom, and our new shared-utils library. For shared-utils, we’ll use singleton: true and strictVersion: true to enforce a single instance and exact version matching.
Update apps/shell/module-federation.config.ts:
// apps/shell/module-federation.config.ts
import { ModuleFederationConfig } from '@nx/webpack/module-federation';
const config: ModuleFederationConfig = {
name: 'shell',
remotes: [], // Still dynamic
shared: {
react: { singleton: true, eager: true, requiredVersion: '^18.0.0' },
'react-dom': { singleton: true, eager: true, requiredVersion: '^18.0.0' },
'@dynamic-mf-demo/shared/utils': {
singleton: true,
strictVersion: true,
requiredVersion: '1.0.0', // Explicitly state version for strict compatibility
},
},
};
export default config;
Update apps/remote-alpha/module-federation.config.ts:
// apps/remote-alpha/module-federation.config.ts
import { ModuleFederationConfig } from '@nx/webpack/module-federation';
const config: ModuleFederationConfig = {
name: 'remote-alpha',
exposes: {
'./Module': './src/app/app.tsx',
},
shared: {
react: { singleton: true, eager: true, requiredVersion: '^18.0.0' },
'react-dom': { singleton: true, eager: true, requiredVersion: '^18.0.0' },
'@dynamic-mf-demo/shared/utils': {
singleton: true,
strictVersion: true,
requiredVersion: '1.0.0',
},
},
};
export default config;
Update apps/remote-beta/module-federation.config.ts:
// apps/remote-beta/module-federation.config.ts
import { ModuleFederationConfig } from '@nx/webpack/module-federation';
const config: ModuleFederationConfig = {
name: 'remote-beta',
exposes: {
'./Module': './src/app/app.tsx',
},
shared: {
react: { singleton: true, eager: true, requiredVersion: '^18.0.0' },
'react-dom': { singleton: true, eager: true, requiredVersion: '^18.0.0' },
'@dynamic-mf-demo/shared/utils': {
singleton: true,
strictVersion: true,
requiredVersion: '1.0.0',
},
},
};
export default config;
Step 4: Use the shared utility in apps/remote-alpha/src/app/app.tsx.
// apps/remote-alpha/src/app/app.tsx
import styles from './app.module.css';
import { formatGreeting, getCurrentTime } from '@dynamic-mf-demo/shared/utils';
import { useState, useEffect } from 'react';
export function App() {
const [time, setTime] = useState(getCurrentTime());
useEffect(() => {
const interval = setInterval(() => {
setTime(getCurrentTime());
}, 1000);
return () => clearInterval(interval);
}, []);
return (
<div className={styles['container']}>
<h2>{formatGreeting('Remote Alpha')}</h2>
<p>This content is dynamically loaded.</p>
<p>Current Time (from shared-utils): {time}</p>
</div>
);
}
export default App;
Step 5: Install dependencies in the workspace root, as shared-utils is now a dependency.
npm install
Step 6: Run the applications and verify shared utility works.
- Terminal 1 (Remote Alpha): `npx nx serve remote-alpha --port=4201`
- Terminal 2 (Remote Beta): `npx nx serve remote-beta --port=4202`
- Terminal 3 (Shell/Host): `npx nx serve shell --port=4200`
Navigate to http://localhost:4200 and load “remote-alpha”. You should see the greeting and the updating time, demonstrating that shared-utils is being correctly consumed and shared.
Step 7: Introduce a breaking change in shared-utils (simulating v2.0.0) and observe the strictVersion failure.
Modify libs/shared/utils/src/lib/shared-utils.ts with a breaking change, and also update its package.json to version 2.0.0.
// libs/shared/utils/src/lib/shared-utils.ts
// This is now v2.0.0 - a breaking change: the function signature changed.
export function formatGreeting(firstName: string, lastName: string): string {
return `Greetings, ${firstName} ${lastName}! You are in the federated v2 world.`;
}
export function getCurrentTime(): string {
return new Date().toLocaleTimeString();
}
Now, update libs/shared/utils/package.json:
// libs/shared/utils/package.json
{
"name": "@dynamic-mf-demo/shared/utils",
"version": "2.0.0", // <-- Updated version
"main": "./src/index.js",
"types": "./src/index.d.ts",
"dependencies": {},
"private": true
}
And update the usage in apps/remote-alpha/src/app/app.tsx to match the new signature (this would normally be a type error you’d fix, but here we fix it to show the strictVersion issue):
// apps/remote-alpha/src/app/app.tsx
// ...
export function App() {
// ...
return (
<div className={styles['container']}>
{/* Update to match the new signature */}
<h2>{formatGreeting('Remote', 'Alpha')}</h2>
{/* ... */}
</div>
);
}
// ...
Step 8: Rebuild and restart only remote-alpha.
# First, rebuild the shared-utils library for the new version to take effect
npx nx build shared-utils
# Then rebuild/serve remote-alpha
npx nx serve remote-alpha --port=4201 --build-libs-from-source=false # Ensure it picks up the new shared-utils build
Keep shell running with its original module-federation.config.ts (expecting shared-utils@1.0.0).
Expected Output (Failure):
When you reload http://localhost:4200 and try to load remote-alpha, you will likely encounter a runtime error in the browser console. Webpack Module Federation, with strictVersion: true, will detect that the shell host expects shared-utils@1.0.0 but remote-alpha (after its update) is attempting to load or is built against shared-utils@2.0.0. This will lead to a Module Federation error indicating a version mismatch for @dynamic-mf-demo/shared/utils.
The error message might look something like this in the browser console (exact message varies by Webpack/Module Federation version):
Uncaught Error: Shared module "@dynamic-mf-demo/shared/utils" could not be loaded because of a version mismatch.
Host requires 1.0.0, but remote-alpha exposes 2.0.0.
Resolution (Conceptual):
To resolve this, you would either:
- Update the host's `module-federation.config.ts` to also expect `^2.0.0` for `@dynamic-mf-demo/shared/utils` and rebuild the host (if a breaking change requires it).
- Create an adapter layer in `shared-utils` to handle `v1` and `v2` clients, or provide a facade (see the sketch below).
- Roll back `remote-alpha` to use `shared-utils@1.0.0` if `shell` cannot be updated immediately.
This example clearly shows how strictVersion: true acts as a guardrail, preventing unintended runtime issues by explicitly failing when version mismatches occur, which is crucial for maintaining stability in independently deployed micro-frontends.
Overcoming Common MFE Challenges
Building robust micro-frontends involves more than just loading modules. It requires careful consideration of how these independent parts interact and coexist.
Strategies for shared state management across micro-frontends.
When micro-frontends need to share state, direct coupling should be avoided. Instead, consider these patterns:
Event Bus / Pub-Sub Pattern:
- Mechanism: MFEs dispatch and subscribe to global custom events. A simple event bus (e.g., `mitt`, `event-emitter`, or even the native `CustomEvent` API) acts as a central communication channel.
- Pros: Loose coupling, easy to implement, suitable for ephemeral or notification-based state changes.
- Cons: Can be hard to track state flow, potential for "event spaghetti" in complex scenarios, no inherent state persistence.
- Nx Relevance: Define the event bus as a shared library and expose it.
- Example: One MFE dispatches a `userLoggedIn` event, and another subscribes to update its UI (see the sketch after this list).
Global State Library / Micro-Frontend Orchestrator:
- Mechanism: A dedicated, lightweight state management library (e.g., Zustand, Jotai, Recoil, or even a custom singleton store) is explicitly shared across all MFEs. This store holds the truly global state.
- Pros: Centralized and predictable state, often provides reactivity.
- Cons: Can become a bottleneck if too much state is shared, requires careful API design to avoid tight coupling.
- Nx Relevance: Publish this shared state library as a buildable/publishable Nx library, and configure it with `singleton: true` in Module Federation settings. Ensure it has a well-defined API.
URL Parameters & Browser History:
- Mechanism: Critical shared state (e.g., product ID, current tab) is encoded directly in the URL. MFEs read from and write to the URL.
- Pros: Simple, naturally supports deep linking, shareable URLs, handles browser refresh and back/forward navigation.
- Cons: Only works for string-serializable state, not suitable for complex objects or large data.
- Nx Relevance: Integrate with a shared routing library that provides utilities for URL manipulation.
Web Workers & Shared Workers:
- Mechanism: A dedicated Web Worker or Shared Worker can host a state management store or even a small API proxy, keeping the state logic entirely separate from the UI threads of the MFEs.
- Pros: Completely isolated state logic, can perform heavy computations off the main thread, Shared Workers allow communication between multiple browser tabs.
- Cons: More complex setup, IPC (Inter-Process Communication) overhead.
- Nx Relevance: Create a dedicated Nx library for the worker and its communication API.
Local Storage / Session Storage / IndexedDB:
- Mechanism: Persist state in browser storage. MFEs read and write to this storage.
- Pros: Simple for persistent state, accessible across MFEs and browser sessions/tabs.
- Cons: Only primitive data types (string for LS/SS), asynchronous nature for IndexedDB, can lead to stale state if not properly observed.
- Nx Relevance: Use a shared utility library that provides a consistent API for interacting with browser storage.
Recommendation: Favor explicit, minimal sharing. Start with an event bus for notifications, graduate to a shared global state library for truly critical shared application state, and leverage URL parameters for navigation-related state.
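As a minimal illustration of the event-bus option (the first item above), here is a sketch built on the native `CustomEvent` API; the `userLoggedIn` event name and payload shape are illustrative only:

```ts
// A minimal event-bus sketch using the native CustomEvent API.
type UserLoggedInDetail = { userId: string };

// Publisher MFE: announce that a user has logged in.
export function announceLogin(userId: string): void {
  window.dispatchEvent(
    new CustomEvent<UserLoggedInDetail>('userLoggedIn', { detail: { userId } })
  );
}

// Subscriber MFE: react to logins; returns an unsubscribe function
// so the MFE can clean up on unmount.
export function onLogin(
  handler: (detail: UserLoggedInDetail) => void
): () => void {
  const listener = (event: Event) =>
    handler((event as CustomEvent<UserLoggedInDetail>).detail);
  window.addEventListener('userLoggedIn', listener);
  return () => window.removeEventListener('userLoggedIn', listener);
}
```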
Global styling conflicts and resolution.
Micro-frontends, when integrated, can lead to CSS conflicts if not managed carefully.
- CSS-in-JS (e.g., Styled Components, Emotion):
- Mechanism: Styles are scoped to components by default, generating unique class names.
- Pros: Strong isolation, no global conflicts.
- Cons: Runtime overhead, learning curve.
- CSS Modules:
- Mechanism: Unique class names are generated at build time by Webpack for local CSS files.
- Pros: Build-time scoping, good performance.
- Cons: Requires specific tooling setup.
- Nx Relevance: Nx’s React and Angular plugins often support CSS Modules out-of-the-box.
- Shadow DOM (Web Components):
- Mechanism: Web Components, with their encapsulated Shadow DOM, provide truly isolated styles that do not leak in or out.
- Pros: Strongest encapsulation, native browser feature.
- Cons: Can be complex to work with, interoperability challenges with existing frameworks.
- CSS Variables / Design Tokens:
- Mechanism: Define global theme variables (colors, fonts, spacing) at the host level, which remotes consume. Remotes then use these variables in their scoped styles.
- Pros: Centralized theming, consistent look and feel without direct style sharing.
- Cons: Still requires care in remote styling to respect variables.
- Nx Relevance: Create a shared design system library that exposes CSS variables or design tokens.
- Utility-First CSS (e.g., Tailwind CSS):
- Mechanism: Utility classes are highly granular and designed to be used directly in markup. Conflicts are less common as styles are atomic. JIT mode helps keep bundles small.
- Pros: Fast development, consistent design, smaller CSS output if purged.
- Cons: Can lead to verbose HTML, opinionated approach.
- Nx Relevance: Configure TailwindCSS as a shared dependency and ensure consistency across remotes.
- Prefixing / Namespacing:
- Mechanism: Manually prefix all CSS classes and IDs within a micro-frontend (e.g., `mfe-remote-alpha-button`).
- Pros: Simple, works with any CSS preprocessor.
- Cons: Manual effort, prone to human error, not suitable for large teams.
Recommendation: For new MFEs, a CSS-in-JS solution or CSS Modules offer excellent scoping. For existing codebases, design tokens via CSS variables provide a good balance for consistent theming.
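As a small sketch of the design-token approach above (token names are illustrative): the host defines the variables once, and each remote consumes them inside its own scoped styles rather than redefining global rules.

```css
/* Host-level stylesheet: define the shared design tokens */
:root {
  --brand-primary: #2f54eb;
  --spacing-md: 16px;
}

/* Inside a remote's scoped CSS (e.g., a CSS Module): consume, don't redefine */
.container {
  border: 1px solid var(--brand-primary);
  padding: var(--spacing-md);
}
```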
Lazy loading, error boundaries, and robust fault tolerance.
These are critical for creating resilient MFE applications.
Lazy Loading:
- Mechanism: Module Federation inherently supports lazy loading. Remotes are not downloaded until they are needed (e.g., when a user navigates to a route that requires a specific remote). In React, `React.lazy()` and `Suspense` are used; in Angular, `loadChildren` with dynamic imports.
- Benefits: Faster initial load times for the host, reduced bundle size for initial download.
- Nx Relevance: Nx's Module Federation generators configure lazy loading by default for route-based remotes.
Error Boundaries:
- Mechanism: React’s Error Boundaries (or similar concepts in other frameworks) are components that catch JavaScript errors anywhere in their child component tree, log them, and display a fallback UI instead of crashing the entire application.
- Benefits: Prevents a failure in one micro-frontend from taking down the entire host application.
- Nx Relevance: Implement robust error boundaries in your host application around each dynamically loaded remote, and potentially within remotes for internal component failures.
- Hands-on Tip: Wrap your `Suspense` components with Error Boundaries (a minimal `ErrorBoundary` implementation sketch appears after the fault-tolerance list below).

```tsx
// Example in a React host
import { ErrorBoundary } from './error-boundary'; // Your custom ErrorBoundary component

// ... inside your App component's render method.
// Assign the lazy component to a capitalized variable so JSX can render it.
const RemoteAlpha = loadedRemotes['remote-alpha'];

<ErrorBoundary fallback={<p>Failed to load Remote Alpha!</p>}>
  <Suspense fallback={<div>Loading Remote Alpha...</div>}>
    {activeRemote === 'remote-alpha' && RemoteAlpha && <RemoteAlpha />}
  </Suspense>
</ErrorBoundary>
```

Robust Fault Tolerance:
- Network Fallbacks: When a remote fails to load (e.g., 404, network error), display a user-friendly message. Implement retry mechanisms or gracefully degrade functionality.
- Timeout Mechanisms: Use timeouts for remote loading to prevent indefinite loading states.
- Monitoring & Alerting: Integrate with APM (Application Performance Monitoring) tools to track MFE errors and performance issues. Nx Cloud’s build metrics can also help identify build-time issues.
- Version Pinning & Rollbacks: As discussed in the next section, having a clear versioning strategy and the ability to roll back individual remotes (or the host) is crucial.
- Resilience through API Gateways/BFFs: For backend microservices powering MFEs, use API Gateways or Backend-for-Frontends (BFFs) to abstract services, handle retries, circuit breaking, and aggregate data, improving the MFE’s resilience.
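For completeness, here is a minimal sketch of the `ErrorBoundary` component assumed in the example above, plus a timeout guard matching the "Timeout Mechanisms" bullet. Both are illustrative implementations, not Nx APIs; adapt logging and fallbacks to your own tooling.

```tsx
// error-boundary.tsx — a minimal React error boundary (sketch)
import { Component, ReactNode } from 'react';

interface ErrorBoundaryProps {
  fallback: ReactNode;
  children?: ReactNode;
}

interface ErrorBoundaryState {
  hasError: boolean;
}

export class ErrorBoundary extends Component<ErrorBoundaryProps, ErrorBoundaryState> {
  state: ErrorBoundaryState = { hasError: false };

  static getDerivedStateFromError(): ErrorBoundaryState {
    return { hasError: true };
  }

  componentDidCatch(error: Error) {
    // Report the remote's failure without crashing the host shell.
    console.error('Micro-frontend crashed:', error);
  }

  render() {
    return this.state.hasError ? this.props.fallback : this.props.children;
  }
}

// A timeout guard so a slow or unreachable remote never hangs the host.
// `loadRemote` refers to the helper from the earlier remotes-config.ts example.
export function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  const timeout = new Promise<never>((_, reject) =>
    setTimeout(() => reject(new Error(`Remote load timed out after ${ms}ms`)), ms)
  );
  return Promise.race([promise, timeout]);
}

// Usage: await withTimeout(loadRemote('remote-alpha', 'Module'), 5000);
```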
By combining these strategies, you can build a micro-frontend architecture that is not only flexible and scalable but also highly resilient to failures.
4. Complex Monorepo Refactoring: Migrating & Decomposing
Refactoring within an Nx monorepo involves more than just moving files; it’s about strategically evolving your codebase to maintain its health, performance, and scalability. This section covers two critical scenarios: bringing external legacy projects into Nx and breaking down monolithic applications already within an Nx workspace.
Strategies for Migrating a Legacy Project into Nx
Migrating a legacy project into an Nx monorepo can seem daunting, but a phased, incremental approach can minimize risk and disruption. The goal is to gradually integrate the legacy codebase while leveraging Nx’s benefits.
Step-by-step guide: Identifying modules, creating Nx libs, setting up path aliases, incremental migration.
Phase 1: Preparation and Planning
- Analyze the Legacy Project:
  - Identify Boundaries: Look for natural boundaries: distinct features, shared utilities, data access layers, UI components. These will become your Nx libraries.
  - Dependency Mapping: Understand internal and external dependencies. Use tools like `madge`, `dependency-cruiser`, or even manual analysis to map out the call graph (see the sketch at the end of this phase).
  - Technology Stack: Note the frameworks, languages, and build tools used. This will inform which Nx plugins you'll need.
  - Test Coverage: Assess existing test coverage. It's crucial for confidence during migration.
- Setup the Target Nx Workspace:
  - Create a new Nx workspace if you don't have one, or choose an existing one.
  - Install necessary Nx plugins (e.g., `@nx/react`, `@nx/node`, `@nx/next`, `@nx/express`) that match your legacy project's stack.
  - Configure ESLint and Prettier for the new workspace.
- Define Migration Strategy:
  - Big Bang vs. Incremental: Almost always choose incremental.
  - Migration Order: Start with foundational, stable parts (e.g., design system, utility libraries) that have few internal dependencies. Then move to data access, then feature slices, and finally the main application.
  - Coexistence: Plan how the legacy project will coexist with the Nx workspace during the migration. You might run both build systems in parallel initially.
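As a quick illustration of the dependency-mapping step above, `madge` can render a legacy project's import graph as an image. The path and extension list below are assumptions for this example; check madge's CLI docs for your version (image output also requires Graphviz):

```sh
# Visualize the legacy project's internal dependency graph as an SVG
npx madge --extensions js,jsx,ts,tsx --image legacy-graph.svg ../legacy-project/src
```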
Phase 2: Initial Integration (Wrapping the Legacy Project)
- Import Legacy Code: Copy the entire legacy project into a new folder within your Nx workspace, typically under `apps/` or a dedicated `legacy/` directory.

```sh
# Assuming your Nx workspace is at `my-nx-monorepo`
# and your legacy project is `../legacy-project`
mkdir apps/legacy-app
cp -R ../legacy-project/* apps/legacy-app/
```

- Create an Nx Application Wrapper: Use an Nx generator to create an application that points to your legacy code's entry point.
  - For a frontend: Create a new React/Angular/Vue application (`nx g @nx/react:app legacy-frontend-wrapper`) and adjust its `project.json` and `index.html`/`main.ts` to reference the legacy entry point.
  - For a backend: Create a new Node/Express application (`nx g @nx/node:app legacy-backend-wrapper`) and point its `main` file to the legacy entry point.
  - The goal here is to make Nx aware of the application, even if it's still building/running the legacy way internally.
- Setup Build/Serve Targets: Configure the `project.json` for your wrapper app to use the legacy project's build commands. This allows Nx to run the legacy build as a "task".

```json
// apps/legacy-app/project.json
{
  "name": "legacy-app",
  // ...
  "targets": {
    "build": {
      "executor": "nx:run-commands",
      "options": {
        "command": "npm run build",
        "cwd": "apps/legacy-app"
      }
    },
    "serve": {
      "executor": "nx:run-commands",
      "options": {
        "command": "npm start",
        "cwd": "apps/legacy-app"
      }
    }
    // ... add lint, test targets if they can be run in the legacy context
  }
}
```

- Initial Nx Build/Test: Verify that Nx can now build and test the legacy project by executing `nx build legacy-app` and `nx test legacy-app` (if applicable).
Phase 3: Incremental Extraction and Nxification
- Extract Shared Utilities/Components:
  - Identify stable, isolated utility functions or UI components.
  - Create a new Nx library: `nx g @nx/js:lib shared-ui --directory=libs/shared/ui`
  - Move the code from the legacy project into this new library.
  - Update imports in the legacy project to point to the new Nx library. This is where path aliases become critical.
  - Configure Path Aliases: Add the library's path to `tsconfig.base.json` (for TypeScript) or modify your build tool's configuration (e.g., Webpack `resolve.alias`, Jest `moduleNameMapper`).

```json
// tsconfig.base.json
{
  "compilerOptions": {
    "paths": {
      "@my-nx-monorepo/shared/ui": ["libs/shared/ui/src/index.ts"]
    }
  }
}
```

  - Incrementally replace direct imports with path alias imports.
  - Repeat this for other isolated parts (e.g., data models, API clients).
- Extract Features into Build/Testable Libraries:
  - Identify logical feature modules within the legacy project.
  - Create new buildable/testable Nx libraries for these features: `nx g @nx/react:lib feature-x --directory=libs/feature-x --buildable`
  - Move the feature's code into the new library.
  - Update imports in the `legacy-app` to use the new Nx feature library's path alias.
  - Configure `project.json` for the feature library with proper build/test targets.
  - Modify the `legacy-app`'s `project.json` to depend on these new feature libraries.
- Migrate Configuration Files:
  - Gradually replace legacy `webpack.config.js`, `rollup.config.js`, `jest.config.js`, `eslint.json`, etc., with Nx-managed configurations (`project.json` and root-level config files).
  - Use Nx generators to streamline this (e.g., `nx g @nx/react:setup-tailwind my-react-lib`).
- Enable Nx Caching and Affected Commands: As more code is "Nxified" and organized into libraries, Nx's caching and `affected` commands will automatically start providing benefits.
- Remove Legacy Build Steps: Once a significant portion is migrated and built by Nx, remove the corresponding legacy build steps from the `legacy-app` wrapper's `project.json`.
Phase 4: Completion and Cleanup
- Delete Legacy Wrapper: When the entire project is successfully migrated into Nx applications and libraries, delete the `apps/legacy-app` folder.
- Clean Up Workspace: Remove any leftover configuration files, redundant dependencies, and legacy tooling.
- Review and Optimize: Review the project graph (`nx graph`), ensure all dependencies are correct, and optimize build times.
Hands-on Example: Take a simplified non-Nx React project and integrate it into an existing Nx monorepo as new apps and libraries, demonstrating how to isolate dependencies. The original project structure and the transformation steps are shown below.
Original Legacy Project Structure (Simplified React App):
Let’s assume a simple React application (legacy-react-app) with a component and a utility function.
/legacy-react-app
├── public/
│ └── index.html
├── src/
│ ├── index.js
│ ├── App.js
│ ├── components/
│ │ └── GreetingDisplay.js
│ └── utils/
│ └── string-utils.js
├── package.json
└── webpack.config.js
legacy-react-app/src/utils/string-utils.js:
// Function to be extracted into a shared library
export const capitalize = (str) => {
if (!str) return '';
return str.charAt(0).toUpperCase() + str.slice(1);
};
legacy-react-app/src/components/GreetingDisplay.js:
// Component to be extracted into a feature library
import React from 'react';
const GreetingDisplay = ({ name }) => {
return <h2>Hello, {name}!</h2>;
};
export default GreetingDisplay;
legacy-react-app/src/App.js:
// Main App component
import React from 'react';
import { capitalize } from './utils/string-utils';
import GreetingDisplay from './components/GreetingDisplay';
function App() {
const userName = 'world';
const capitalizedName = capitalize(userName);
return (
<div>
<h1>Legacy React App</h1>
<GreetingDisplay name={capitalizedName} />
<p>This is a legacy application being migrated.</p>
</div>
);
}
export default App;
Step 1: Create a new Nx Workspace.
npx create-nx-workspace@latest legacy-migration-workspace --preset=react-standalone --no-nxcloud --no-install
cd legacy-migration-workspace
npm install
npx nx add @nx/react
Step 2: Copy the legacy-react-app into the Nx workspace.
Assume you have the legacy-react-app folder one level up.
mkdir apps/legacy-react-app
cp -R ../legacy-react-app/* apps/legacy-react-app/
Step 3: Create an Nx application to wrap the legacy project.
We will create a new React application named legacy-wrapper-app and then point its build configuration to the existing legacy structure.
npx nx g @nx/react:app legacy-wrapper-app --directory=apps/legacy-wrapper-app --bundler=webpack --style=css --projectNameAndRootFormat=as-provided
Now, modify apps/legacy-wrapper-app/project.json to leverage the existing legacy-react-app structure. For a quick initial setup, you might temporarily adjust the sourceRoot or point to the legacy webpack.config.js. However, the long-term goal is to replace legacy-react-app’s build system with Nx’s.
For simplicity and to illustrate the migration, we will remove apps/legacy-wrapper-app’s default generated files and integrate legacy-react-app’s files directly into the legacy-wrapper-app project root, then convert its project.json to match a standard Nx React app, replacing webpack.config.js with Nx’s default Webpack.
First, remove generated files from apps/legacy-wrapper-app:
rm -rf apps/legacy-wrapper-app/src
rm apps/legacy-wrapper-app/webpack.config.js
rm apps/legacy-wrapper-app/postcss.config.js
rm apps/legacy-wrapper-app/index.html # We will use the legacy one
rm apps/legacy-wrapper-app/public/index.html
Then, move the contents of apps/legacy-react-app into apps/legacy-wrapper-app:
mv apps/legacy-react-app/public apps/legacy-wrapper-app/
mv apps/legacy-react-app/src apps/legacy-wrapper-app/
mv apps/legacy-react-app/package.json apps/legacy-wrapper-app/
mv apps/legacy-react-app/webpack.config.js apps/legacy-wrapper-app/ # This will be replaced
# Remove the now empty directory
rmdir apps/legacy-react-app
Now, let’s update apps/legacy-wrapper-app/project.json to resemble a standard Nx React app, effectively “Nxifying” the build process. We’ll leverage Nx’s @nx/webpack:webpack executor.
apps/legacy-wrapper-app/project.json (modified):
{
"name": "legacy-wrapper-app",
"$schema": "../../node_modules/nx/schemas/project-schema.json",
"sourceRoot": "apps/legacy-wrapper-app/src", // Points to the copied legacy src
"projectType": "application",
"targets": {
"build": {
"executor": "@nx/webpack:webpack",
"outputs": ["{options.outputPath}"],
"defaultConfiguration": "production",
"options": {
"compiler": "babel",
"outputPath": "dist/apps/legacy-wrapper-app",
"index": "apps/legacy-wrapper-app/public/index.html", // Use legacy index.html
"baseHref": "/",
"main": "apps/legacy-wrapper-app/src/index.js", // Use legacy entry point
"tsConfig": "apps/legacy-wrapper-app/tsconfig.app.json",
"assets": [
"apps/legacy-wrapper-app/public",
{
"glob": "**/!(*.module.css)",
"input": "apps/legacy-wrapper-app/src",
"output": ["./src", "./"]
}
],
"styles": [], // Or add legacy CSS files
"scripts": [],
"webpackConfig": "apps/legacy-wrapper-app/webpack.config.js" // We will delete this soon
},
"configurations": {
"development": {
"extractLicenses": false,
"optimization": false,
"sourceMap": true,
"vendorChunk": true
},
"production": {
"fileReplacements": [
{
"replace": "apps/legacy-wrapper-app/src/environments/environment.ts",
"with": "apps/legacy-wrapper-app/src/environments/environment.prod.ts"
}
],
"optimization": true,
"outputHashing": "all",
"sourceMap": false,
"namedChunks": false,
"extractLicenses": true,
"vendorChunk": false
}
}
},
"serve": {
"executor": "@nx/react:dev-server",
"defaultConfiguration": "development",
"options": {
"buildTarget": "legacy-wrapper-app:build"
},
"configurations": {
"development": {
"buildTarget": "legacy-wrapper-app:build:development"
},
"production": {
"buildTarget": "legacy-wrapper-app:build:production",
"hmr": false
}
}
},
"lint": {
"executor": "@nx/eslint:lint",
"outputs": ["{options.outputFile}"],
"options": {
"lintFilePatterns": ["apps/legacy-wrapper-app/**/*.{ts,tsx,js,jsx}"]
}
},
"test": {
"executor": "@nx/jest:jest",
"outputs": ["{workspaceRoot}/coverage/{projectRoot}"],
"options": {
"jestConfig": "apps/legacy-wrapper-app/jest.config.ts",
"passWithNoTests": true
},
"configurations": {
"ci": {
"ci": true,
"codeCoverage": true
}
}
}
},
"tags": []
}
You’ll also need a basic apps/legacy-wrapper-app/tsconfig.app.json:
// apps/legacy-wrapper-app/tsconfig.app.json
{
"extends": "../../tsconfig.base.json",
"compilerOptions": {
"jsx": "react-jsx",
"allowJs": true,
"esModuleInterop": true,
"allowSyntheticDefaultImports": true,
"types": ["node", "jest"]
},
"files": [],
"include": [
"src/**/*.ts",
"src/**/*.tsx",
"src/**/*.js",
"src/**/*.jsx"
],
"exclude": ["jest.config.ts"]
}
And update apps/legacy-wrapper-app/src/index.js to correctly import App (legacy might use require or different import styles):
// apps/legacy-wrapper-app/src/index.js
import React from 'react';
import ReactDOM from 'react-dom/client';
import App from './App'; // Ensure this matches your App.js export
const root = ReactDOM.createRoot(document.getElementById('root'));
root.render(
<React.StrictMode>
<App />
</React.StrictMode>
);
Finally, npm install again to pick up any new dependencies (e.g., react-dom/client).
Now, try to serve the wrapper app: npx nx serve legacy-wrapper-app. It should run the legacy React app within the Nx context.
Step 4: Extract string-utils into a shared Nx library.
- Generate Nx Library: npx nx g @nx/js:lib shared-string-utils --directory=libs/shared/string-utils --compiler=tsc --projectNameAndRootFormat=as-provided
- Move Code: Copy apps/legacy-wrapper-app/src/utils/string-utils.js to libs/shared/string-utils/src/lib/string-utils.ts (renaming to .ts and adding types is a good practice).

// libs/shared/string-utils/src/lib/string-utils.ts
export const capitalize = (str: string): string => {
  if (!str) return '';
  return str.charAt(0).toUpperCase() + str.slice(1);
};

- Update libs/shared/string-utils/src/index.ts:

// libs/shared/string-utils/src/index.ts
export * from './lib/string-utils';

- Configure Path Alias: Nx usually adds this automatically when generating libraries, but double-check tsconfig.base.json:

// tsconfig.base.json
{
  "compilerOptions": {
    "paths": {
      "@legacy-migration-workspace/shared/string-utils": ["libs/shared/string-utils/src/index.ts"]
    }
  }
}

- Update Imports in apps/legacy-wrapper-app/src/App.js:

// apps/legacy-wrapper-app/src/App.js
import React from 'react';
// Update this import to use the Nx library path alias
import { capitalize } from '@legacy-migration-workspace/shared/string-utils';
import GreetingDisplay from './components/GreetingDisplay';
function App() {
  const userName = 'world';
  const capitalizedName = capitalize(userName);
  return (
    <div>
      <h1>Legacy React App</h1>
      <GreetingDisplay name={capitalizedName} />
      <p>This is a legacy application being migrated.</p>
    </div>
  );
}
export default App;

- Run npm install in the root to ensure new library dependencies are recognized.
- Verify: Run npx nx serve legacy-wrapper-app. The app should still function correctly, now importing capitalize from the new Nx library.
Step 5: Extract GreetingDisplay into a React UI library.
- Generate Nx React UI Library: npx nx g @nx/react:lib ui-greeting --directory=libs/ui/greeting --compiler=babel --projectNameAndRootFormat=as-provided
- Move Code: Copy apps/legacy-wrapper-app/src/components/GreetingDisplay.js to libs/ui/greeting/src/lib/greeting-display.tsx (renaming and adding types).

// libs/ui/greeting/src/lib/greeting-display.tsx
import React from 'react';
/* eslint-disable-next-line */
export interface GreetingDisplayProps {
  name: string;
}
export function GreetingDisplay({ name }: GreetingDisplayProps) {
  return <h2>Hello, {name}!</h2>;
}
export default GreetingDisplay;

- Update libs/ui/greeting/src/index.ts:

// libs/ui/greeting/src/index.ts
export * from './lib/greeting-display';

- Update imports in apps/legacy-wrapper-app/src/App.js:

// apps/legacy-wrapper-app/src/App.js
import React from 'react';
import { capitalize } from '@legacy-migration-workspace/shared/string-utils';
// Update this import to use the Nx UI library path alias
import { GreetingDisplay } from '@legacy-migration-workspace/ui/greeting';
function App() {
  const userName = 'world';
  const capitalizedName = capitalize(userName);
  return (
    <div>
      <h1>Legacy React App</h1>
      <GreetingDisplay name={capitalizedName} />
      <p>This is a legacy application being migrated.</p>
    </div>
  );
}
export default App;

- Run npm install in the root.
- Verify: Run npx nx serve legacy-wrapper-app. The application should still render correctly.
By following these steps, you've incrementally migrated parts of a legacy React application into well-defined Nx libraries, leveraging path aliases to manage dependencies within the monorepo. The legacy-wrapper-app now depends on these new Nx libraries, and its original src folder is much leaner. This process can be repeated until the entire legacy project is decomposed into Nx applications and libraries, unlocking Nx's caching, affected commands, and build optimization benefits.
Breaking Down Monoliths within Nx
Even within an Nx monorepo, applications can grow into monoliths if features are not properly encapsulated. Breaking down these “Nx monoliths” into smaller, more manageable libraries and micro-frontends is crucial for continued scalability, team autonomy, and maintainability.
Identifying “vertical slices” or bounded contexts.
- Vertical Slices: Think of features that cut across UI, API, and data layers, forming a complete, independently deployable or runnable unit. Examples:
Order Management,User Profiles,Product Catalog. - Bounded Contexts (Domain-Driven Design): Identify conceptual boundaries where specific domain models and business rules apply. Each bounded context should be responsible for its own data and logic. This aligns well with microservices and micro-frontends.
Indicators of an “Nx Monolith”:
- A single large application project (apps/my-big-app) with deeply nested, unshared components and services.
- Interdependencies between seemingly unrelated features within that single app's src folder.
- Difficulty for multiple teams to work on the app concurrently without merge conflicts.
- UI components or business logic that could be reused but are duplicated or tightly coupled within the main app.
Strategies for extracting features into isolated libraries or micro-frontends.
Feature Libraries (Internal to Monorepo):
- Purpose: Encapsulate a complete feature or a reusable piece of business logic/UI. They are consumed by applications within the same monorepo.
- Types:
  - data-access libs: Handle API calls, data fetching, and state management related to a specific domain.
  - ui libs: Contain presentational components.
  - feature libs: Orchestrate data-access and ui libs to deliver a specific user story.
  - util libs: Pure utility functions.
- Strategy: Start by extracting data-access and ui components that are stable and widely used. Then move to feature libraries, which compose these lower-level libs.
- Access Control (module boundaries): Use project.json tags together with the @nx/enforce-module-boundaries lint rule to prevent unintended imports (e.g., type:ui should not import type:api); a tagging sketch follows this list.
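As a sketch, tags are plain metadata in each library's project.json; the boundary rules elsewhere refer to them. The library and tag names below are illustrative:

// libs/orders/feature-checkout/project.json (fragment, hypothetical library)
{
  "name": "orders-feature-checkout",
  "projectType": "library",
  "tags": ["scope:orders", "type:feature"]
}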
Buildable/Publishable Libraries:
- Purpose: These are libraries that can be independently built and published to an NPM registry (internal or public). Useful for sharing code across multiple monorepos or traditional projects, or for very stable foundational components.
- Strategy: Identify highly stable, widely used, and rarely changing libraries (e.g., a core design system, a global authentication client). Mark them as buildable in project.json and configure nx release for them.
Micro-Frontends (Remotes):
- Purpose: Truly independently deployable and runnable applications that are composed at runtime (using Module Federation). Ideal for distinct teams owning distinct parts of a user experience.
- Strategy: Identify large, complex vertical slices that are owned by dedicated teams and have distinct deployment cycles. Convert these into Nx Module Federation remotes. This is the most significant step in decomposition.
Hands-on Example: Starting from a monolithic Nx application (e.g., a large React app), extract a "feature" section into a separate buildable/publishable library (or even a new micro-frontend), updating imports and project.json files along the way.
Let’s assume we have a large Nx React application called admin-dashboard which has grown to include user management features directly within its src folder. We will extract this user-management feature into its own Nx feature library.
Prerequisites:
- An Nx Workspace.
- A React application named admin-dashboard within apps/admin-dashboard.
- A simulated monolithic structure where user management components live directly in apps/admin-dashboard/src/app/user-management.
Step 1: Create a mock admin-dashboard app with an embedded feature.
If you don’t have one, create it:
npx create-nx-workspace@latest monolith-to-libs --preset=react-standalone --no-nxcloud --no-install
cd monolith-to-libs
npm install
npx nx add @nx/react
npx nx g @nx/react:app admin-dashboard --directory=apps/admin-dashboard --bundler=webpack --style=css --projectNameAndRootFormat=as-provided
Now, simulate the “monolithic” user management feature directly within apps/admin-dashboard/src/app/.
apps/admin-dashboard/src/app/user-management/user-list.tsx:
// apps/admin-dashboard/src/app/user-management/user-list.tsx
import React, { useState, useEffect } from 'react';
interface User {
id: string;
name: string;
email: string;
}
// Mock API call
const fetchUsers = async (): Promise<User[]> => {
return new Promise(resolve => {
setTimeout(() => {
resolve([
{ id: '1', name: 'Alice', email: 'alice@example.com' },
{ id: '2', name: 'Bob', email: 'bob@example.com' },
]);
}, 500);
});
};
export function UserList() {
const [users, setUsers] = useState<User[]>([]);
const [loading, setLoading] = useState(true);
useEffect(() => {
fetchUsers().then(data => {
setUsers(data);
setLoading(false);
});
}, []);
if (loading) {
return <p>Loading users...</p>;
}
return (
<div>
<h3>User List (Embedded in Dashboard)</h3>
<ul>
{users.map(user => (
<li key={user.id}>
{user.name} ({user.email})
</li>
))}
</ul>
</div>
);
}
export default UserList;
Modify apps/admin-dashboard/src/app/app.tsx to use this embedded component:
// apps/admin-dashboard/src/app/app.tsx
import styles from './app.module.css';
import NxWelcome from './nx-welcome';
import { UserList } from './user-management/user-list'; // Import embedded feature
export function App() {
return (
<>
<NxWelcome title="admin-dashboard" />
<div className={styles['container']}>
<h2>Admin Dashboard Main Content</h2>
{/* Render the embedded user management feature */}
<UserList />
</div>
</>
);
}
export default App;
Run npx nx serve admin-dashboard to verify the setup. You should see “User List (Embedded in Dashboard)” on the page.
Step 2: Extract user-management into a new Nx React feature library.
- Generate a new React library for the feature: npx nx g @nx/react:lib feat-user-management --directory=libs/admin/feat-user-management --bundler=webpack --style=css --projectNameAndRootFormat=as-provided
- Move the user-list.tsx component and related files:
  - Copy apps/admin-dashboard/src/app/user-management/user-list.tsx to libs/admin/feat-user-management/src/lib/user-list.tsx.
  - Delete the original folder: rm -rf apps/admin-dashboard/src/app/user-management/
- Update the new library's entry point (libs/admin/feat-user-management/src/index.ts):

// libs/admin/feat-user-management/src/index.ts
export * from './lib/user-list'; // Expose the UserList component

- Update apps/admin-dashboard/src/app/app.tsx to import from the new library:

// apps/admin-dashboard/src/app/app.tsx
import styles from './app.module.css';
import NxWelcome from './nx-welcome';
// Import from the new feature library
import { UserList } from '@monolith-to-libs/admin/feat-user-management';
export function App() {
  return (
    <>
      <NxWelcome title="admin-dashboard" />
      <div className={styles['container']}>
        <h2>Admin Dashboard Main Content</h2>
        <UserList />
      </div>
    </>
  );
}
export default App;

- Run npm install in the workspace root to ensure all dependency graphs are updated.
- Verify: Run npx nx serve admin-dashboard. The application should still function correctly, but now the UserList component is imported from the dedicated feat-user-management library.
Step 3: Make the feat-user-management library buildable (and conceptually publishable).
By default, newly generated Nx libraries are not buildable. To make a library buildable (meaning it can be compiled independently and its artifacts published), it needs its own build target with a suitable executor; the --buildable or --publishable generator flags set this up for you.
- Update libs/admin/feat-user-management/project.json: Modify the project.json to include a build target if it doesn't already have one, and ensure the executor is suitable for a buildable library (e.g., @nx/js:tsc for TypeScript libraries, or @nx/webpack:webpack for more complex bundles). For a React component library, the @nx/react:library generator would typically configure this during generation. If not, add it:

// libs/admin/feat-user-management/project.json
{
  "name": "admin-feat-user-management",
  "$schema": "../../../node_modules/nx/schemas/project-schema.json",
  "sourceRoot": "libs/admin/feat-user-management/src",
  "projectType": "library",
  "targets": {
    "build": {
      "executor": "@nx/webpack:webpack", // Or @nx/js:tsc
      "outputs": ["{options.outputPath}"],
      "options": {
        "outputPath": "dist/libs/admin/feat-user-management",
        "tsConfig": "libs/admin/feat-user-management/tsconfig.lib.json",
        "main": "libs/admin/feat-user-management/src/index.ts",
        "webpackConfig": "libs/admin/feat-user-management/webpack.config.js", // If you have one
        "compiler": "babel", // For React
        "assets": [
          {
            "glob": "libs/admin/feat-user-management/README.md",
            "input": ".",
            "output": "."
          }
        ]
      }
    },
    "lint": {
      "executor": "@nx/eslint:lint",
      "outputs": ["{options.outputFile}"],
      "options": {
        "lintFilePatterns": ["libs/admin/feat-user-management/**/*.{ts,tsx,js,jsx}"]
      }
    },
    "test": {
      "executor": "@nx/jest:jest",
      "outputs": ["{workspaceRoot}/coverage/{projectRoot}"],
      "options": {
        "jestConfig": "libs/admin/feat-user-management/jest.config.ts",
        "passWithNoTests": true
      }
    }
  },
  "tags": ["scope:admin", "type:feature"] // Add meaningful tags
}

Note: The @nx/react:lib generator typically sets up these build configurations automatically if you pass --buildable or --publishable during generation.

- Add a package.json to the buildable library (if it doesn't have one): For a publishable library, it's essential to have a package.json that defines its metadata and entry points. Note that an npm package name may contain only one slash (after the scope), so the alias-style name is flattened here:

// libs/admin/feat-user-management/package.json
{
  "name": "@monolith-to-libs/admin-feat-user-management",
  "version": "0.0.1",
  "dependencies": {
    "react": "^18.2.0",
    "react-dom": "^18.2.0"
  },
  "type": "commonjs",
  "main": "./src/index.js",
  "typings": "./src/index.d.ts",
  "exports": {
    ".": {
      "import": "./src/index.mjs",
      "require": "./src/index.js"
    }
  }
}

- Build the library: npx nx build admin-feat-user-management
  This command will compile the library and place its artifacts in dist/libs/admin/feat-user-management.
Now, the admin-feat-user-management library is independently buildable. You could now use npm publish (after setting up a registry) from within dist/libs/admin/feat-user-management to publish this library as an independent package.
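A publish step might look like the following sketch (the registry URL is a placeholder, and authentication setup is assumed):

# Build first, then publish the compiled output rather than the source folder
npx nx build admin-feat-user-management
cd dist/libs/admin/feat-user-management
npm publish --access restricted --registry=https://registry.example.com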
This hands-on example demonstrates how to decompose a feature from a monolithic Nx application into a dedicated, buildable Nx library, setting the stage for more granular development, testing, and even independent publishing or Module Federation.
5. Security in Monorepos
Securing a monorepo is a multi-faceted challenge, requiring attention to dependency vulnerabilities, secrets management, and access control. Nx provides tools and mechanisms to integrate security best practices directly into your development and CI/CD workflows.
Dependency Vulnerability Scanning
Supply chain attacks, where malicious code is injected into widely used open-source packages, are a significant threat. Proactive dependency scanning is essential to protect your monorepo.
Discussion of tools (e.g., Snyk, npm audit) and how to integrate them into Nx CI/CD.
npm audit / yarn audit / pnpm audit:
- What it is: Built-in commands for npm, Yarn, and pnpm that scan your project's dependencies for known vulnerabilities by consulting public vulnerability databases.
- Pros: Easy to use, no external services needed, can automatically fix some vulnerabilities (npm audit fix).
- Cons: Only covers direct and transitive dependencies (not code logic), can produce many false positives/negatives, less feature-rich than dedicated tools.
- Nx Integration: Run npm audit for affected projects in CI. Use npx nx affected --target=audit (if you define an audit target).
Snyk:
- What it is: A comprehensive security platform that scans dependencies, code, containers, and infrastructure as code for vulnerabilities. Integrates with Git repositories, IDEs, and CI/CD pipelines.
- Pros: More accurate and detailed vulnerability reports, suggests remediation steps, monitors for new vulnerabilities, supports various languages/ecosystems, can enforce policies.
- Cons: Commercial product (though a free tier is available), adds an external dependency.
- Nx Integration: Integrate Snyk as a CI step using its CLI. You can focus scanning on affected projects, using npx nx show projects --affected --json to get the list of changed projects.
OWASP Dependency-Check:
- What it is: An open-source tool that attempts to detect publicly disclosed vulnerabilities contained within a project’s dependencies.
- Pros: Open source, flexible, supports many languages.
- Cons: Requires a Java runtime, can be more complex to configure than npm audit.
- Nx Integration: Similar to Snyk, integrate its CLI into CI.
Hands-on Example: Configure a CI step that runs npm audit (or an equivalent tool) for affected projects, demonstrating how to enforce vulnerability checks and how to handle different severities.
We’ll extend the ci.yml from the Self-Healing CI section to include an npm audit step that only runs for affected projects.
Prerequisites:
- An Nx Workspace with a ci.yml (from Section 2.2).
- An npm audit vulnerability (you can introduce a package with a known vulnerability for testing, e.g., an old version of lodash).
Step 1: Introduce an intentional vulnerability for testing.
Add an old version of a package with known vulnerabilities to a project (e.g., apps/my-app/package.json).
// apps/my-app/package.json
{
"name": "my-app",
"version": "0.0.1",
"private": true,
"dependencies": {
"react": "18.2.0",
"react-dom": "18.2.0",
"@nx/react": "latest",
"lodash": "3.10.1" // Intentionally old version with vulnerabilities
},
"devDependencies": {
// ...
}
}
Run npm install in the workspace root to ensure this dependency is installed.
Step 2: Define a custom audit target in affected project’s project.json (optional but recommended for nx affected).
For better integration with Nx’s affected command, you can define an audit target in apps/my-app/project.json.
// apps/my-app/project.json
{
"name": "my-app",
// ...
"targets": {
// ... existing targets
"audit": {
"executor": "nx:run-commands",
"options": {
"command": "npm audit",
"cwd": "{projectRoot}",
"args": "--json --audit-level=moderate" // Customize audit level
}
}
}
}
Now nx affected --target=audit would work (see the usage sketch below). For a plain npm audit you don't strictly need a custom target; you can run npm audit directly in CI, which is what the workflow below does for simplicity.
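With the custom target in place, the audit can be scoped to changed projects only (the base branch name is illustrative):

# Audit only the projects affected since main
npx nx affected --target=audit --base=main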
Step 3: Update apps/my-app/src/app/app.tsx to make my-app affected.
Make a small, harmless change to apps/my-app/src/app/app.tsx so that my-app is considered “affected” when we push our changes.
// apps/my-app/src/app/app.tsx
// ...
export function App() {
const greeting = 'Hello Nx Expert!'; // This line already exists, just adding a comment to make it affected
// Added comment to make app affected
// ...
}
// ...
Step 4: Update the GitHub Actions workflow (.github/workflows/ci.yml) to include vulnerability scanning.
We’ll add a step to run npm audit for all affected projects. For demonstration, we’ll allow npm audit to fail for now (remove continue-on-error: true later if you want to enforce strict failure).
# .github/workflows/ci.yml
name: CI
on:
push:
branches:
- main
- master
pull_request:
types: [opened, synchronize, reopened, ready_for_review]
permissions:
contents: write # Needed for nx fix-ci to push fixes
actions: read # Needed for default permissions
jobs:
main:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
token: ${{ secrets.GITHUB_TOKEN }}
fetch-depth: 0 # Needed for nx affected commands to compare history
- uses: actions/setup-node@v4
with:
node-version: '20'
cache: 'npm'
- name: Install dependencies
run: npm ci
- name: Setup Nx Cloud
run: npx nx-cloud start-ci-run
- name: Run affected lint
run: npx nx affected --target=lint --max-parallel=3 --configuration=ci
continue-on-error: true
- name: Run affected tests
run: npx nx affected --target=test --max-parallel=3 --configuration=ci
continue-on-error: true
# New step: Run vulnerability audit for affected projects
- name: Run npm audit for affected projects
id: npm_audit
run: |
AFFECTED_PROJECTS=$(npx nx show projects --affected --type=app,lib --json)
if [ -z "$AFFECTED_PROJECTS" ] || [ "$AFFECTED_PROJECTS" = "[]" ]; then
echo "No affected projects to audit."
else
echo "Running audit for affected projects:"
echo "$AFFECTED_PROJECTS" | jq -r '.[]' | while read PROJECT_NAME; do
PROJECT_ROOT=$(npx nx show project "$PROJECT_NAME" --json | jq -r '.root')
echo "Auditing project: $PROJECT_NAME in $PROJECT_ROOT"
# Running npm audit in the project's root directory
# We specify --audit-level=moderate to fail only for moderate or higher issues
# Add --json to get JSON output, useful for programmatic parsing
# npm audit exits with a non-zero code when issues at or above the audit level are found
npm audit --prefix "$PROJECT_ROOT" --audit-level=moderate --json || true # Allow step to succeed for demo
done
fi
# Remove `|| true` to make this step fail the CI if vulnerabilities are found
# continue-on-error: true # Keep for demo, remove to enforce failure
- name: Nx Cloud Self-Healing CI
run: npx nx-cloud fix-ci
if: always()
env:
NX_CLOUD_ACCESS_TOKEN: ${{ secrets.NX_CLOUD_ACCESS_TOKEN }}
Explanation of the npm audit step:
- npx nx show projects --affected --type=app,lib --json: This command gets a JSON array of all affected applications and libraries.
- jq -r '.[]' | while read PROJECT_NAME; do ... done: This parses the JSON array and iterates over each affected project name.
- npm audit --prefix "$PROJECT_ROOT" --audit-level=moderate --json || true: This runs npm audit in the context of each affected project.
  - --prefix "$PROJECT_ROOT": Ensures npm audit runs against the correct package.json for that project.
  - --audit-level=moderate: Configures the minimum vulnerability level to consider a failure (e.g., info, low, moderate, high, critical).
  - --json: Provides machine-readable JSON output.
  - || true: This makes the shell command exit with a 0 status code even if npm audit finds vulnerabilities. For real enforcement, remove || true and continue-on-error: true from the step.
Step 5: Commit changes and open a Pull Request.
git add .
git commit -m "feat: Add npm audit to CI and introduce vulnerability"
git push origin <your-branch-name>
Create a PR on GitHub.
Expected Outputs:
- GitHub Actions Run: The CI workflow will run.
- The "Run npm audit for affected projects" step will execute.
- Since my-app is affected and contains an old lodash version with vulnerabilities, the npm audit command for my-app will output vulnerability warnings/errors to the console.
- Because of || true (or continue-on-error: true), the CI step itself will pass, but the console output will clearly show the audit results.
- Enforcing Failure: If you remove || true and continue-on-error: true from the npm audit step, the CI job will fail. This is the desired behavior for enforcing vulnerability checks.
This example demonstrates how to integrate dependency vulnerability scanning into your Nx CI pipeline, focusing only on the projects affected by a change, which significantly speeds up the feedback loop compared to auditing the entire monorepo every time.
Secrets Management
Handling sensitive information like API keys, database credentials, and access tokens requires robust strategies to prevent exposure, especially in a monorepo shared by many developers and automated systems.
Best practices for handling API keys, database credentials, etc., in development and production (e.g., .env.local, cloud secret managers, CI/CD secrets).
- Never Commit Secrets to Git: This is the golden rule. Any sensitive data must be excluded from version control.
- Environment Variables (.env files):
  - Development: Use .env files (e.g., .env.development, .env.local) for local development. These files must be git-ignored (.gitignore). Provide a .env.example file so new developers know which variables are needed.
  - Production/CI: Environment variables should be injected directly into the runtime environment or CI/CD pipeline. Never deploy .env files to production.
- Cloud Secret Managers:
  - Mechanism: Services like AWS Secrets Manager, Google Secret Manager, Azure Key Vault, and HashiCorp Vault securely store, manage, and retrieve secrets. Applications access them at runtime via SDKs or environment injection.
  - Pros: Centralized, highly secure, versioned secrets, audit logs, fine-grained access control (IAM roles), automatic rotation.
  - Cons: Adds complexity, costs, and vendor lock-in.
  - Best Practice: Prefer this for production environments.
- CI/CD Pipeline Secrets:
  - Mechanism: CI/CD platforms (GitHub Actions, GitLab CI, Azure Pipelines, Jenkins) provide secure mechanisms to store secrets as environment variables, which are then injected into pipeline runs.
  - Pros: Secure for automated builds/deployments, isolated from the codebase.
  - Cons: Still requires manual management or integration with secret managers.
  - Best Practice: Use these for API keys needed during CI (e.g., for deployment to the cloud, reporting to external services).
- Dotenv-Expand and Configuration Management:
  - Use dotenv or dotenv-cli for Node.js projects to load .env files.
  - For more complex configurations, consider a dedicated configuration management library (e.g., config-schema, rc, nconf) that can combine environment variables, command-line arguments, and config files, with clear precedence rules. A minimal loader sketch follows this list.
Hands-on Example: Demonstrate a basic approach using environment variables and .env files with an Nx application, explaining the security implications and how to inject secrets in CI for deployment.
We’ll create a simple API key usage in a React application and demonstrate how to manage it with .env locally and GitHub Actions secrets in CI.
Prerequisites:
- An Nx Workspace with a React application (e.g.,
my-appfrom Section 2).
Step 1: Create a .env.local file in apps/my-app and add to .gitignore.
- Create apps/my-app/.env.local:

# apps/my-app/.env.local
REACT_APP_API_KEY=local_dev_api_key_123

Note: Which variables are exposed to client-side code depends on the bundler configuration: Create React App exposes REACT_APP_-prefixed variables, while Nx's Webpack plugin exposes NX_-prefixed variables by default. We use the REACT_APP_ prefix here and assume the build is configured to inline it.

- Add apps/my-app/.env.local to your global .gitignore or the one in apps/my-app/:

# .gitignore
# ...
# Nx specific
.nx/
dist/
tmp/
# Local .env files
.env*.local
Step 2: Use the environment variable in apps/my-app/src/app/app.tsx.
// apps/my-app/src/app/app.tsx
import styles from './app.module.css';
import NxWelcome from './nx-welcome';
export function App() {
const apiKey = process.env.REACT_APP_API_KEY; // Access the environment variable
return (
<>
<NxWelcome title="my-app" />
<div className={styles['container']}>
<h2>My App Content</h2>
{apiKey && <p>Local API Key: `{apiKey}`</p>}
{!apiKey && <p>No API Key found. Is it configured?</p>}
</div>
</>
);
}
export default App;
Step 3: Test locally.
npx nx serve my-app
Navigate to http://localhost:4200. You should see “Local API Key: local_dev_api_key_123”. This confirms the .env.local file is being loaded.
Step 4: Configure GitHub Actions to inject a production API key.
- GitHub Repository Secrets: In your GitHub repository, go to "Settings" -> "Secrets and variables" -> "Actions" and add a new repository secret named PROD_API_KEY with a value like prod_ci_api_key_ABC.
- Update the CI/CD Workflow (.github/workflows/ci.yml): We'll add a conceptual "deploy" job. For client-side React apps, environment variables are typically "baked in" during the build step; for Node.js backends, they are often injected at runtime. Here, we inject the secret during the build in CI, assuming a client-side app where process.env.REACT_APP_API_KEY is replaced at build time.

# .github/workflows/ci.yml (partial)
name: CI
on:
  push:
    branches:
      - main
      - master
jobs:
  # ... (existing main job with lint, test, audit)
  deploy:
    runs-on: ubuntu-latest
    needs: main # Ensure CI passes before deployment
    environment: production # Use a GitHub Environment for production deployments
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Build my-app for Production
        run: npx nx build my-app --configuration=production
        env:
          # Inject the secret from GitHub Actions into the build environment
          REACT_APP_API_KEY: ${{ secrets.PROD_API_KEY }}
      - name: Deploy my-app (Conceptual)
        # In a real scenario, this would involve commands to deploy to S3, Netlify, Vercel, etc.
        run: |
          echo "Simulating deployment of my-app with PROD_API_KEY..."
          echo "Build artifacts are in: dist/apps/my-app"
          # Example: Deploy to S3
          # aws s3 sync dist/apps/my-app s3://my-prod-bucket --delete
          # ... additional deployment steps
        env:
          # Secrets can be passed again if the deployment script also needs them
          # AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          # AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          REACT_APP_API_KEY: ${{ secrets.PROD_API_KEY }} # Passed to show it's available in deploy step
Step 5: Commit changes to main (or a branch that triggers the deploy job).
git add .
git commit -m "feat: Implement secrets management for API key"
git push origin main
Expected CI/CD Outputs:
- When the deploy job runs, the "Build my-app for Production" step will execute.
- The REACT_APP_API_KEY environment variable will be securely injected into the build process by GitHub Actions.
- The resulting production build of my-app (in dist/apps/my-app) will contain prod_ci_api_key_ABC baked into its JavaScript bundle (for client-side React apps).
- If you were to inspect the deployed application (conceptually), it would display "Local API Key: prod_ci_api_key_ABC".
Security Implications:
- local_dev_api_key_123 exists only on developers' local machines and is never committed.
- prod_ci_api_key_ABC exists only in GitHub Secrets and is injected at build/deploy time, never residing in the codebase.
- Using GitHub Environments with required reviewers or other protections can add an extra layer of security for production deployments.
This demonstrates a fundamental and secure approach to managing secrets, crucial for any enterprise-grade monorepo.
Access Control and Permissions (Monorepo context)
In large monorepos with multiple teams, controlling who can do what to which parts of the codebase is paramount. Nx, in conjunction with Git features and CI/CD tools, provides robust mechanisms for access control.
Discuss how Nx’s module boundaries, CODEOWNERS files, and CI/CD pipeline permissions contribute to access control and security in large teams.
Nx Module Boundaries (the @nx/enforce-module-boundaries lint rule):
- Mechanism: Nx allows you to define strict rules about how libraries can import each other, based on their tags in project.json. The rules are declared as depConstraints on the @nx/enforce-module-boundaries ESLint rule and enforced by the Nx Linter.
- Contribution to Security:
  - Preventing Accidental Access: You can define a tag like scope:sensitive-data for libraries handling PII or security-critical logic. Then you can create a boundary rule that prevents type:ui projects (user-facing) from importing scope:sensitive-data directly. This prevents frontends from accidentally exposing or processing sensitive data inappropriately.
  - Architectural Enforcement: Ensures architectural integrity, preventing developers from bypassing intended layers or introducing unwanted dependencies.
- Example Rule: {"sourceTag": "type:ui", "onlyDependOnLibsWithTags": ["type:ui", "type:shared"]} would prevent UI libs from importing from type:data-access or type:api.
CODEOWNERS Files (Git platforms like GitHub, GitLab):
- Mechanism: A CODEOWNERS file (typically in .github/CODEOWNERS or .gitlab/CODEOWNERS) specifies which teams or individuals are responsible for code in specific paths or directories.
- Contribution to Security:
  - Mandatory Reviewers: Git platforms can be configured to require approval from code owners for any PRs affecting their designated code paths. This is vital for security, as it ensures that changes to critical infrastructure, sensitive libraries, or deployment scripts are reviewed by the responsible experts.
  - Auditing and Accountability: Clearly defines ownership and accountability for changes.
- Nx Relevance: With a well-structured Nx monorepo (e.g., libs/billing/api, libs/auth/data-access), CODEOWNERS files can be very granular, assigning specific teams to specific Nx libraries or applications. A sample file appears after this section's summary.
CI/CD Pipeline Permissions:
- Mechanism: CI/CD platforms allow you to define fine-grained permissions for pipeline jobs, including which secrets they can access, which cloud roles they can assume, and what actions they can perform (e.g., deploying to production, merging to main).
- Contribution to Security:
  - Least Privilege: Ensure CI jobs only have the minimum permissions necessary to perform their tasks. A build job doesn't need production deployment credentials.
  - Separation of Concerns: Separate jobs for different stages (build, test, deploy) and assign distinct permissions to each.
  - Protected Branches & Environments: Use protected branches (e.g., a main branch that only allows merges via PRs and requires code owner reviews) and protected environments (e.g., GitHub Environments) to gate critical operations.
  - Credential Rotation: Integrate with secret managers to rotate credentials periodically. A workflow sketch appears after the summary below.
Combining these three layers provides a robust security posture in a monorepo, enforcing architectural constraints, requiring expert review for critical changes, and limiting the blast radius of compromised credentials in CI/CD.
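For illustration, a granular CODEOWNERS file for an Nx workspace might look like the following sketch (team handles are placeholders; on GitHub, later matches take precedence):

# .github/CODEOWNERS
# Fallback owner for anything not matched below
*                        @my-org/architecture-guild
/libs/billing/**         @my-org/billing-team
/libs/auth/**            @my-org/security-team
/.github/workflows/**    @my-org/platform-team

And here is a least-privilege sketch of CI/CD permissions in GitHub Actions terms (job names and environment protections are illustrative):

# Hypothetical workflow fragment: the build job gets read-only access;
# only the deploy job can enter the protected environment and use its secrets.
jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
    steps:
      - uses: actions/checkout@v4
      - run: npx nx affected -t build
  deploy:
    needs: build
    runs-on: ubuntu-latest
    environment: production # protected; may require reviewer approval
    permissions:
      contents: read
      id-token: write # short-lived OIDC cloud credentials instead of static keys
    steps:
      - run: echo "deploy using environment-scoped secrets"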
Hands-on Example: Add more granular module boundary rules to the workspace lint configuration, specifically preventing sensitive data-access libraries from being imported into public-facing UI, and demonstrate the resulting lint failure.
We will create a sensitive data library and a public UI library, then enforce a boundary rule to prevent the UI from importing from the sensitive library.
Prerequisites:
- An Nx Workspace (e.g., monolith-to-libs from the previous section).
- The @nx/eslint plugin installed (npm install -D @nx/eslint).
Step 1: Create a data-access library for sensitive user data.
npx nx g @nx/js:lib data-access-sensitive-user --directory=libs/data-access/sensitive-user --compiler=tsc --projectNameAndRootFormat=as-provided
Add a tag to its project.json indicating its sensitive nature:
// libs/data-access/sensitive-user/project.json
{
"name": "data-access-sensitive-user",
"$schema": "../../../node_modules/nx/schemas/project-schema.json",
"sourceRoot": "libs/data-access/sensitive-user/src",
"projectType": "library",
"targets": {
// ...
},
"tags": ["scope:data-access", "scope:sensitive"] // <--- Add this tag
}
Add some mock sensitive data function to libs/data-access/sensitive-user/src/lib/data-access-sensitive-user.ts:
// libs/data-access/sensitive-user/src/lib/data-access-sensitive-user.ts
export function getSensitiveUserDetails(userId: string): { userId: string; ssn: string; creditCardLast4: string } {
// In a real app, this would fetch from a secure backend
return {
userId: userId,
ssn: '***-**-1234',
creditCardLast4: '4321',
};
}
And export it in libs/data-access/sensitive-user/src/index.ts.
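For completeness, the barrel file simply re-exports the function:

// libs/data-access/sensitive-user/src/index.ts
export * from './lib/data-access-sensitive-user';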
Step 2: Create a ui library for a public-facing component.
npx nx g @nx/react:lib ui-public-display --directory=libs/ui/public-display --bundler=webpack --style=css --projectNameAndRootFormat=as-provided
Add a tag to its project.json indicating it’s a public UI:
// libs/ui/public-display/project.json
{
"name": "ui-public-display",
"$schema": "../../../node_modules/nx/schemas/project-schema.json",
"sourceRoot": "libs/ui/public-display/src",
"projectType": "library",
"targets": {
// ...
},
"tags": ["scope:ui", "scope:public"] // <--- Add this tag
}
Add a simple public display component to libs/ui/public-display/src/lib/public-display.tsx:
// libs/ui/public-display/src/lib/public-display.tsx
import React from 'react';
export interface PublicDisplayProps {
displayName: string;
}
export function PublicDisplay({ displayName }: PublicDisplayProps) {
return (
<div>
<p>Hello, {displayName}! This is a public display component.</p>
</div>
);
}
export default PublicDisplay;
And export it in libs/ui/public-display/src/index.ts.
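Again, the entry point just re-exports the component:

// libs/ui/public-display/src/index.ts
export * from './lib/public-display';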
Step 3: Define a module boundary rule for the workspace.
We want to prevent scope:public libraries from importing anything tagged scope:sensitive. The depConstraints of the @nx/enforce-module-boundaries rule live in the root ESLint configuration (e.g., .eslintrc.json), not in nx.json:
// .eslintrc.json (workspace root, abbreviated)
{
  "root": true,
  "plugins": ["@nx"],
  "overrides": [
    {
      "files": ["*.ts", "*.tsx", "*.js", "*.jsx"],
      "rules": {
        "@nx/enforce-module-boundaries": [
          "error",
          {
            "enforceBuildableLibDependency": true,
            "allow": [],
            "depConstraints": [
              {
                "sourceTag": "scope:public",
                "onlyDependOnLibsWithTags": ["scope:public", "scope:shared"]
              },
              {
                "sourceTag": "scope:sensitive",
                "onlyDependOnLibsWithTags": ["scope:sensitive", "scope:data-access", "scope:shared"]
              }
            ]
          }
        ]
      }
    }
  ]
}
Note: The depConstraints array of the @nx/enforce-module-boundaries rule is the key part. Two rules are defined here: the first says public UIs can only depend on other public or shared libs; the second says sensitive data libs can only depend on other sensitive, data-access, or shared libs. We are particularly interested in the first rule for this example.
Step 4: Intentionally break the rule by importing data-access-sensitive-user into ui-public-display.
Modify libs/ui/public-display/src/lib/public-display.tsx:
// libs/ui/public-display/src/lib/public-display.tsx
import React from 'react';
// Intentionally importing a sensitive data access library
import { getSensitiveUserDetails } from '@monolith-to-libs/data-access/sensitive-user';
export interface PublicDisplayProps {
displayName: string;
}
export function PublicDisplay({ displayName }: PublicDisplayProps) {
// Even if not used, the import violates the boundary
const sensitiveInfo = getSensitiveUserDetails('some-user-id');
console.log('Attempted sensitive access:', sensitiveInfo);
return (
<div>
<p>Hello, {displayName}! This is a public display component.</p>
<p>This UI should NOT access sensitive data directly!</p>
</div>
);
}
export default PublicDisplay;
Step 5: Run the Nx Linter for the affected projects.
npx nx lint ui-public-display
Expected Output (Lint Failure):
The nx lint command will fail, reporting a module boundary violation. The output will be similar to this:
NX Linter ran for 1 project.
Error:
/home/user/monolith-to-libs/libs/ui/public-display/src/lib/public-display.tsx:3:1
Module @monolith-to-libs/ui/public-display is not allowed to depend on @monolith-to-libs/data-access/sensitive-user.
Neither @monolith-to-libs/data-access/sensitive-user nor its tags [scope:data-access, scope:sensitive] are listed in the 'onlyDependOnLibsWithTags' for 'scope:public'.
1 | import React from 'react';
2 | // Intentionally importing a sensitive data access library
> 3 | import { getSensitiveUserDetails } from '@monolith-to-libs/data-access/sensitive-user';
| ^
4 |
5 | export interface PublicDisplayProps {
6 | displayName: string;
✖ 1 problem (1 error, 0 warnings)
Linting failed.
This demonstrates how Nx’s module boundaries, enforced by the linter, provide a powerful compile-time (or lint-time) access control mechanism. It prevents developers from creating unintended dependencies between different layers or sensitive parts of your monorepo, thereby enhancing security and maintaining architectural integrity. To fix this, you would remove the problematic import from libs/ui/public-display/src/lib/public-display.tsx.
6. Enterprise Nx Cloud Features
Nx Cloud significantly enhances Nx’s capabilities for large teams and enterprise environments, offering advanced features for distributed task execution, build analytics, and custom artifact management.
Advanced Distributed Task Execution (DTE) Configurations
Distributed Task Execution (DTE) in Nx Cloud allows your CI pipeline to run tasks across multiple machines in parallel, dramatically reducing overall CI times, especially for large monorepos.
Deep dive into optimizing agent distribution (--distribute-on), agent scaling, and workload balancing.
The --distribute-on parameter:
- Purpose: This flag, used with npx nx-cloud start-ci-run, tells Nx Cloud how many and what type of agents to provision for the current CI run. It is the primary mechanism for scaling your CI.
- Syntax: --distribute-on="<number> <agent-template-name>"
  - <number>: The desired number of agents.
  - <agent-template-name>: Refers to a pre-defined Nx Cloud launch template that specifies the agent's machine type, operating system, and possibly pre-installed software.
- Optimization (see the command sketch at the end of this subsection):
  - Fixed Scaling: Start with a fixed number of agents (e.g., 3 linux-medium-js) and observe.
  - Dynamic Scaling: Nx Cloud can dynamically allocate agents based on the size of the PR. This is crucial for cost optimization, as small changes might only need a few agents while large refactorings benefit from more (e.g., npx nx-cloud start-ci-run --distribute-on="dynamic-agents").
  - Mixed Agent Types: For polyglot monorepos, you might need different agent types (e.g., linux-medium-js for Node/React, windows-large-dotnet for .NET projects). You can specify multiple distributions: --distribute-on="2 linux-medium-js, 1 windows-large-dotnet". Nx Cloud will intelligently assign tasks to appropriate agents.
Agent Scaling (Dynamic Agents):
- Nx Cloud offers features for automatically adjusting the number of agents based on the detected workload of a PR. This is an enterprise-level feature that helps optimize cost by only spinning up the necessary resources.
- Mechanism: Nx Cloud analyzes the affected projects, their historical task durations, and dependencies to estimate the total work, and then requests an optimal number of agents from your cloud provider (e.g., AWS EC2, GCP Compute Engine).
Workload Balancing (Task-Centric Distribution):
- Mechanism: Unlike traditional CI systems that use VM-centric approaches (where specific tasks are hardcoded to specific machines), Nx Agents use a task-centric approach. Nx Cloud builds a complete task graph (based on your project.json targets and dependencies) and then dynamically assigns individual tasks to available agents.
- Historical Data: Nx Cloud uses historical run times of tasks to predict duration and prioritize scheduling.
- Task Dependencies: The Nx task graph ensures tasks are executed in the correct order, even across different agents.
- Resource Utilization: Agents are kept busy, reducing idle time. If an agent fails, its tasks can be reassigned.
- --stop-agents-after: This parameter tells Nx Cloud when to shut down idle agents. For example, --stop-agents-after="build" will keep agents active until all build tasks are completed and then terminate them, saving costs if subsequent tasks (like e2e tests) are not distributed or are run on fewer agents.
Optimizing DTE involves finding the right balance between cost, speed, and the complexity of your monorepo. Start with reasonable defaults, monitor your CI runs in Nx Cloud analytics, and then fine-tune your --distribute-on settings and agent templates.
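Putting these flags together, typical invocations look like the following sketch (the template names are illustrative and must exist as launch templates in your Nx Cloud workspace):

# Fixed scaling: three agents from one template
npx nx-cloud start-ci-run --distribute-on="3 linux-medium-js"

# Mixed agent types for a polyglot workspace
npx nx-cloud start-ci-run --distribute-on="2 linux-medium-js, 1 windows-large-dotnet"

# Shut agents down once all build tasks have completed
npx nx-cloud start-ci-run --distribute-on="3 linux-medium-js" --stop-agents-after="build"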
Hands-on Example: Expand a GitHub Actions ci.yml (from Section 2) to use more sophisticated DTE settings, including custom agent labels/types, and demonstrate npx nx-cloud start-ci-run with these options.
Let’s enhance our ci.yml to use DTE. We will define a custom agent template (conceptually, as actual agent provisioning varies by cloud) and use it.
Prerequisites:
- An Nx Workspace with Nx Cloud connected.
- The
ci.ymlfrom Section 2 (or a similar basic CI setup).
Step 1: Define a conceptual custom agent template in Nx Cloud.
In a real-world scenario, you would define custom launch templates in your Nx Cloud workspace settings (e.g., “Settings” -> “Launch Templates”). For this hands-on, we’ll assume a template named my-custom-js-agent exists, which provides a Linux machine with Node.js pre-installed.
Step 2: Update .github/workflows/ci.yml to use DTE with custom agent types and stop-agents-after.
We’ll modify the npx nx-cloud start-ci-run command and the subsequent nx affected command to leverage DTE. We’ll also add a separate job for agents.
# .github/workflows/ci.yml
name: CI
on:
push:
branches:
- main
- master
pull_request:
types: [opened, synchronize, reopened, ready_for_review]
permissions:
contents: write
actions: read
env:
# Enable DTE for the entire workflow
NX_CLOUD_DISTRIBUTED_EXECUTION: 'true'
NX_CLOUD_ACCESS_TOKEN: ${{ secrets.NX_CLOUD_ACCESS_TOKEN }}
# Set a base for affected commands
NX_BASE: ${{ github.event.pull_request.base.sha || github.sha }}
NX_HEAD: ${{ github.event.pull_request.head.sha || github.sha }}
jobs:
# This is the main job that orchestrates DTE
main:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
token: ${{ secrets.GITHUB_TOKEN }}
fetch-depth: 0 # Important for Nx affected commands
- uses: actions/setup-node@v4
with:
node-version: '20'
cache: 'npm'
- name: Install dependencies
run: npm ci
# Initialize Nx Cloud CI run and distribute tasks
# Use a conceptual custom agent type and stop agents after 'build' tasks
- name: Start Nx Cloud CI Run with DTE
run: npx nx-cloud start-ci-run --distribute-on="3 my-custom-js-agent" --stop-agents-after="build"
- name: Run affected lint, test, build
# Nx will automatically distribute these tasks to the agents
run: npx nx affected --targets=lint,test,build --max-parallel=3 --configuration=ci
continue-on-error: true # Allow main job to proceed even if tasks fail, so self-healing can run
- name: Nx Cloud Self-Healing CI
run: npx nx-cloud fix-ci
if: always()
env:
NX_CLOUD_ACCESS_TOKEN: ${{ secrets.NX_CLOUD_ACCESS_TOKEN }}
# This job defines the agent machines
agents:
runs-on: ubuntu-latest # Or your custom agent's OS
strategy:
matrix:
# Define how many agent instances to run. This needs to match the number in --distribute-on
agent: [1, 2, 3]
name: Nx Agent ${{ matrix.agent }}
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- uses: actions/setup-node@v4
with:
node-version: '20'
cache: 'npm'
- name: Install dependencies
run: npm ci
# Start the Nx Agent process
- name: Start Nx Agent ${{ matrix.agent }}
run: npx nx-cloud start-agent
env:
NX_AGENT_NAME: ${{ matrix.agent }} # Assign a unique name for logging
NX_CLOUD_ACCESS_TOKEN: ${{ secrets.NX_CLOUD_ACCESS_TOKEN }}
Explanation:
- NX_CLOUD_DISTRIBUTED_EXECUTION: 'true': This environment variable, when set, tells Nx to automatically distribute all subsequent Nx commands within the workflow.
- --distribute-on="3 my-custom-js-agent":
  - 3: Requests 3 agents.
  - my-custom-js-agent: This is a placeholder for a custom launch template you would configure in Nx Cloud. It might specify a machine type, Docker image, or other settings specific to your environment.
--stop-agents-after="build": Agents will automatically shut down after allbuildtargets are completed by any distributed task. This is useful if subsequent tasks (likee2etests) are run on different agents or not distributed.agentsjob: This new job is responsible for starting the Nx agents. Thestrategy.matrixensures three instances of this job run, each executingnpx nx-cloud start-agent. These agents then connect to Nx Cloud and wait for tasks to be assigned from themainjob.
Step 3: Commit and push the changes, then create a Pull Request.
git add .
git commit -m "chore: Configure advanced DTE in CI"
git push origin <your-branch-name>
Create a PR to trigger the workflow.
Expected Outputs:
- GitHub Actions Run: You will see multiple jobs running concurrently:
  - main: This job starts first and triggers npx nx-cloud start-ci-run.
  - Nx Agent 1, Nx Agent 2, Nx Agent 3: These jobs start concurrently. Each runs npx nx-cloud start-agent and waits for tasks.
- The npx nx affected --targets=lint,test,build command in the main job will not run tasks locally. Instead, Nx registers these tasks with Nx Cloud, which then assigns them to the waiting agents (Nx Agent 1, Nx Agent 2, Nx Agent 3).
- The tasks (lint, test, build for affected projects) are executed on the agent machines.
- Once all build tasks are completed, Nx Cloud signals agents to shut down (due to --stop-agents-after="build"), potentially before other tasks finish if they were not intended for these agents.
- The logs from the individual tasks are streamed back to the main job's console and are visible in the Nx Cloud UI, providing a consolidated view of the distributed run.
This hands-on example demonstrates how to configure advanced DTE with custom agent types and granular control over agent lifecycle, crucial for optimizing CI performance and cost in large-scale Nx monorepos.
Build Metrics and Analytics
Understanding the performance of your CI/CD pipeline is critical for continuous optimization. Nx Cloud provides rich build metrics and analytics to help identify bottlenecks and improve efficiency.
Explanation of how Nx Cloud provides insights into build times, cache hits/misses, and bottlenecks.
Nx Cloud offers a comprehensive dashboard and detailed reports for every CI run, providing insights into:
- Overall Build Time: The total time taken for a CI run, broken down into setup, execution, and teardown phases. This helps track the primary metric for CI performance.
- Task Durations: Detailed breakdown of how long each individual Nx task (e.g., build, test, lint) took. This highlights the slowest parts of your pipeline.
- Cache Hit/Miss Ratio (Nx Replay):
- Hits: Indicates how many tasks were replayed from the cache (either local or remote), saving computation time. A high hit ratio is desirable.
- Misses: Indicates tasks that had to be run from scratch because no cached output was available.
- Insights: A low cache hit ratio points to potential issues in your caching setup (e.g., non-deterministic tasks, missing cache inputs, incorrect targetDefaults in nx.json). High misses on frequently changed projects might indicate poor library architecture.
- Distributed Task Execution Metrics:
- Agent Utilization: How effectively agents were used, including idle time, active time, and task distribution across agents.
- Task Distribution Visualizations: Graphs showing how tasks were parallelized across agents, helping to identify sequential bottlenecks that prevent full parallelism.
- Critical Path Analysis: Nx Cloud can highlight the “critical path” of tasks – the longest sequence of dependent tasks that determines the overall CI run time. Optimizing tasks on this path yields the greatest improvements.
- Comparison Views: Compare the current CI run against previous runs (e.g., the main branch or previous PR runs) to understand performance regressions or improvements.
- Resource Usage (CPU/Memory): (Depending on agent integration) Metrics on CPU and memory usage during task execution can help identify resource-intensive tasks or agents that are under/over-provisioned.
- Flakiness Detection: Nx Cloud can identify flaky tests or tasks that sporadically fail, which consume valuable CI time and erode developer trust.
By analyzing these metrics, teams can make data-driven decisions to:
- Optimize inputs in nx.json for better caching (see the sketch after this list).
- Refactor long-running tasks or projects into smaller, more parallelizable units.
- Adjust DTE settings (number of agents, stop-agents-after).
- Identify and fix flaky tests.
- Justify infrastructure upgrades or agent template changes.
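The first of these optimizations is worth illustrating. Below is a minimal nx.json sketch showing namedInputs and targetDefaults tuned so that, for example, docs-only or test-only changes do not invalidate build caches; the globs and target names are illustrative and should be adapted to your workspace:

{
  "namedInputs": {
    "default": ["{projectRoot}/**/*", "sharedGlobals"],
    "sharedGlobals": ["{workspaceRoot}/tsconfig.base.json"],
    "production": [
      "default",
      "!{projectRoot}/**/*.spec.ts",
      "!{projectRoot}/**/*.md"
    ]
  },
  "targetDefaults": {
    "build": {
      "cache": true,
      "inputs": ["production", "^production"],
      "outputs": ["{workspaceRoot}/dist/{projectRoot}"]
    },
    "test": {
      "cache": true,
      "inputs": ["default", "^production"]
    }
  }
}

With this setup, editing a project’s README or spec files leaves the production input set, and therefore the build cache, untouched.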
Hands-on Example: (Conceptual, as this is often UI-driven) Show how to access and interpret Nx Cloud build reports via the web UI, highlighting key metrics for performance optimization.
Since this is primarily a UI-driven experience, the “hands-on” part will be a guided walkthrough of what you would do in the Nx Cloud dashboard.
Step 1: Trigger a CI run with your Nx Workspace. Push a new commit or open a PR in your GitHub repository where Nx Cloud is configured (as demonstrated in Section 2).
Step 2: Access the Nx Cloud Dashboard.
- Go to https://nx.app.
- Log in to your Nx Cloud account.
- Navigate to your workspace.
- You will see a list of recent CI runs. Click on the latest run that you just triggered.
Step 3: Interpret the Build Report Overview.
- Summary Card: At the top, you’ll see a summary with:
- Total Duration: The total time taken for the CI run.
- Total Saved: Time saved by caching and DTE.
- Cache Hits/Misses: A percentage and count. High cache hits are good. If this is low for tasks that haven’t changed, investigate your inputs in nx.json.
- Agent Count: How many agents were used.
- Run Details:
- Graph Visualization: The project graph (nx graph) shows which projects were affected and which tasks ran. You can often see the critical path highlighted.
- Tasks Table: A table listing all tasks that ran, their duration, status (hit/miss), and the agent they ran on. Sort by duration to identify slow tasks.
Step 4: Deep Dive into a Slow Task (e.g., a build task with high duration).
- Click on a specific slow build task in the “Tasks” table.
- Task Details:
- Logs: Review the full terminal logs for the task. Look for warnings or patterns indicating inefficiencies (e.g., repetitive steps, slow third-party tools).
- Inputs/Outputs: See what files and environment variables were considered inputs to this task. Ensure all relevant factors are included for accurate caching.
- Dependencies: Understand which other tasks this task depends on. This helps with critical path analysis.
- Performance Insights:
- If a build task has a low cache hit ratio despite minimal code changes, it might indicate non-deterministic outputs or missing inputs in its project.json definition.
- If a test task is consistently slow, consider splitting it (e.g., nx affected --target=test --base=main --exclude=e2e, and then run e2e in a separate job with DTE’s split-e2e-tasks feature).
Step 5: Analyze DTE and Agent Performance.
- If you used DTE (as configured in the previous section), navigate to the “Agents” tab or look for DTE-specific graphs.
- Agent Timeline: Observe how tasks were distributed across agents over time. Look for:
- Idle Gaps: Periods where agents were idle, indicating either a lack of parallelizable tasks or too many agents provisioned for the workload.
- Uneven Distribution: If one agent finishes much earlier than others, it suggests an imbalance.
- Recommendation: If you see significant idle time or uneven distribution, consider adjusting the number of agents (--distribute-on) or evaluating whether more tasks can be parallelized. Using dynamic agents can help optimize this automatically.
Step 6: Use Comparison Features.
- Select “Compare Run” from the report. Compare your current PR’s run against the latest main branch run or a previous PR run.
- Spot Regressions/Improvements: This helps quickly identify if your changes improved or worsened CI performance, cache hit rates, etc.
By regularly reviewing these metrics and reports in Nx Cloud, you can continuously refine your Nx Workspace and CI/CD strategy to achieve faster feedback loops and more efficient resource utilization.
Custom Build Artifacts & Storage
Nx Cloud provides intelligent remote caching, but sometimes you need to manage custom artifacts or influence how the cache behaves.
Discuss custom cache configuration and artifact management within Nx Cloud.
Remote Cache Configuration:
- targetDefaults and inputs/outputs in nx.json: This is the primary way to configure what goes into the cache.
  - inputs: Define which files, environment variables, and other projects’ outputs influence a task’s hash. Changes to these inputs result in a cache miss. Crucial for deterministic caching.
  - outputs: Define which files/directories a task produces. These are the files that get cached.
- Named inputs: For complex scenarios, you can define namedInputs in nx.json to group related inputs for specific types of tasks, improving cache precision.
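Individual projects can also declare precise inputs and outputs for a cacheable target; in this sketch the environment-variable input and the paths are illustrative:

// apps/my-frontend/project.json (illustrative excerpt)
{
  "targets": {
    "build": {
      "inputs": ["production", "^production", { "env": "API_URL" }],
      "outputs": ["{workspaceRoot}/dist/apps/my-frontend"]
    }
  }
}

Declaring the environment variable as an input means a change to API_URL correctly invalidates the cached build instead of replaying a stale artifact.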
Custom Artifacts (Beyond Task Outputs):
- While Nx Cloud primarily caches the outputs of your Nx tasks, you can also store arbitrary artifacts in your CI/CD pipeline using your CI provider’s native artifact storage (e.g., GitHub Actions Artifacts).
- Use Cases:
- Storing code coverage reports for later analysis.
- Archiving build logs not captured by Nx Cloud.
- Persisting E2E test screenshots/videos.
- Storing compliance reports or security scan results.
- Integration: In your ci.yml, after an Nx task runs, you can use actions/upload-artifact to upload specific files from the dist/ or coverage/ directories. For example:

  # Example: Uploading a code coverage report
  - name: Run tests with coverage
    run: npx nx affected --target=test --coverage
  - name: Upload code coverage report
    uses: actions/upload-artifact@v4
    with:
      name: coverage-report
      path: coverage/

Cache Security (nx.dev/ci/concepts/cache-security):
- Read-Only vs. Read-Write Tokens: Use read-only Nx Cloud tokens in nx.json for general access, and read-write tokens only in trusted CI environments (e.g., main branch builds) via GitHub Secrets. This prevents malicious actors from poisoning your remote cache.
- End-to-End Encryption: Nx Cloud offers end-to-end encryption for cached artifacts, ensuring your data is encrypted at rest and in transit, with the encryption key managed by your workspace.
- CVE-2025-36852 (CREEP Vulnerability): Nx Cloud is designed to prevent cache poisoning vulnerabilities like CREEP (Cache Race-condition Exploit Enables Poisoning) by enforcing clear trust boundaries and hierarchical caching. DIY remote caches are often vulnerable.
Key Takeaway: While Nx Cloud’s remote caching is powerful, understanding how to fine-tune inputs and outputs in nx.json is crucial for maximizing its effectiveness and ensuring cache integrity. For artifacts that are not direct task outputs but are important for later stages, leverage your CI provider’s artifact storage.
7. Advanced Production Deployment (Monorepo-Specific Challenges & Solutions)
Deploying applications from a monorepo introduces unique complexities, especially when dealing with multiple independent applications, differing release cycles, and shared infrastructure. Nx provides powerful primitives that, when combined with CI/CD tools, enable sophisticated and efficient deployment strategies.
Granular Deployment of Affected Projects
In a monorepo, you rarely want to deploy everything on every change. Nx’s affected command is the cornerstone for enabling granular, intelligent deployments that only act on projects that have truly changed.
Detailed explanation of how to build upon nx affected to create deployment jobs that only run for specific applications.
The nx affected Foundation:
- npx nx affected --target=<target-name>: Runs a specific target (e.g., build, test, deploy) only for projects affected by the current changes.
- npx nx affected --target=<target-name> --output-style=json: Provides machine-readable JSON output of the affected projects and their associated task configurations. This is critical for programmatic parsing in CI.
- npx nx show projects --affected --type=app --json: A more direct way to get affected applications. (The older npx nx print-affected command has been deprecated in recent Nx versions in favor of nx show projects.)
Identifying Affected Applications Programmatically:
- The core idea is to get a list of affected applications, iterate through them, and for each, trigger a specific deployment logic.
- Bash scripting combined with jq (a JSON processor) is a common pattern in CI for this.
Deployment Orchestration:
- Separate CI Jobs: For distinct deployment processes (e.g., frontend to S3, backend to Kubernetes), create separate CI jobs.
- Conditional Execution: Use if conditions in your CI workflow to run deployment jobs only when specific applications are affected.
- App-Specific Variables: Each application might require unique deployment parameters (e.g., S3 bucket name, Docker image repository, Kubernetes namespace). These can be stored as:
  - Environment Variables: In CI secrets.
  - targets.deploy.options in project.json: Custom options defined directly in the project’s project.json.
  - Custom Config Files: JSON/YAML files within the app’s directory.
- nx:run-commands with Dynamic Arguments: The nx:run-commands executor is incredibly flexible for custom deployment scripts, and you can pass dynamic arguments to these scripts, as shown in the sketch below.
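For example, here is a minimal sketch of a run-commands target that interpolates a caller-supplied argument; the target name and bucket value are illustrative, and nx:run-commands substitutes custom options and CLI arguments via its {args.*} syntax:

// apps/my-app/project.json (illustrative excerpt)
{
  "targets": {
    "announce-deploy": {
      "executor": "nx:run-commands",
      "options": {
        "command": "echo Deploying to bucket {args.bucket}"
      }
    }
  }
}

Invoking npx nx announce-deploy my-app --bucket=my-staging-bucket substitutes {args.bucket} before the command runs, which is how app-specific variables flow into custom deployment scripts.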
Hands-on Example: Refine a GitHub Actions workflow (e.g., the AWS S3 frontend deployment from the previous guide) to intelligently detect which specific app is affected and trigger its unique deployment process, demonstrating how to pass app-specific variables (e.g., bucket names, image tags). Use nx show projects --affected --json for programmatic parsing.
We’ll assume two React applications (my-frontend-admin and my-frontend-public), each needing to deploy to a different S3 bucket.
Prerequisites:
- An Nx Workspace.
- Two React applications: my-frontend-admin and my-frontend-public. Create them if they don’t exist:
  npx nx g @nx/react:app my-frontend-admin --directory=apps/my-frontend-admin --bundler=webpack --style=css --projectNameAndRootFormat=as-provided
  npx nx g @nx/react:app my-frontend-public --directory=apps/my-frontend-public --bundler=webpack --style=css --projectNameAndRootFormat=as-provided
- AWS CLI installed on the CI runner (GitHub Actions has this by default on ubuntu-latest).
- AWS credentials configured in GitHub Secrets: AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
- Two S3 buckets created: my-admin-frontend-bucket and my-public-frontend-bucket.
Step 1: Define deployment-specific options in project.json for each application.
We’ll add a custom deploy-s3 target with an s3Bucket option for each frontend.
apps/my-frontend-admin/project.json (partial):
{
"name": "my-frontend-admin",
// ...
"targets": {
"build": { /* ... existing build config ... */ },
"deploy-s3": {
"executor": "nx:run-commands",
"options": {
"command": "aws s3 sync {options.outputPath} s3://{options.s3Bucket} --delete --exclude '*-manifest.json'",
"outputPath": "dist/apps/my-frontend-admin",
"s3Bucket": "my-admin-frontend-bucket" // Application-specific variable
}
},
// ...
},
"tags": ["type:app", "scope:admin"]
}
apps/my-frontend-public/project.json (partial):
{
"name": "my-frontend-public",
// ...
"targets": {
"build": { /* ... existing build config ... */ },
"deploy-s3": {
"executor": "nx:run-commands",
"options": {
"command": "aws s3 sync {options.outputPath} s3://{options.s3Bucket} --delete --exclude '*-manifest.json'",
"outputPath": "dist/apps/my-frontend-public",
"s3Bucket": "my-public-frontend-bucket" // Application-specific variable
}
},
// ...
},
"tags": ["type:app", "scope:public"]
}
Step 2: Create a GitHub Actions workflow (.github/workflows/deploy.yml).
This workflow will have a single deploy-affected job that dynamically identifies affected applications and runs their deploy-s3 target.
# .github/workflows/deploy.yml
name: Deploy Affected Frontend Applications
on:
push:
branches:
- main
- master
permissions:
contents: read # Only read access needed for checkout and Nx affected calculations
jobs:
deploy-affected-frontends:
runs-on: ubuntu-latest
env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
AWS_REGION: us-east-1 # Configure your AWS region
# For pushes, compare against the previous commit; for PRs, against the base branch.
# For more robust SHA resolution (e.g., force pushes), consider the nrwl/nx-set-shas action.
NX_BASE: ${{ github.event.pull_request.base.sha || github.event.before }}
NX_HEAD: ${{ github.event.pull_request.head.sha || github.sha }}
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0 # Needed for nx affected commands
- uses: actions/setup-node@v4
with:
node-version: '20'
cache: 'npm'
- name: Install dependencies
run: npm ci
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v4
with:
aws-access-key-id: ${{ env.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ env.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ env.AWS_REGION }}
- name: Find and Build Affected Frontend Applications
id: build_affected
run: |
echo "Finding affected frontend applications..."
# Get all affected React applications. Filters by type:app for applications
# and then by framework or tags if more granular filtering is needed.
# For React apps specifically, you could add project.json contains '@nx/react'
AFFECTED_APPS=$(npx nx show projects --affected --type=app --json | jq -r '.[]')
if [ -z "$AFFECTED_APPS" ]; then
echo "No affected frontend applications to build. Skipping."
echo "::set-output name=has_affected_apps::false"
else
echo "Affected frontend applications: $AFFECTED_APPS"
echo "::set-output name=has_affected_apps::true"
for APP_NAME in $AFFECTED_APPS; do
echo "Building application: $APP_NAME"
npx nx build "$APP_NAME" --configuration=production # Build only affected apps
done
fi
- name: Deploy Affected Frontend Applications to S3
if: steps.build_affected.outputs.has_affected_apps == 'true'
run: |
echo "Finding affected frontend applications for deployment..."
AFFECTED_APPS=$(npx nx show projects --affected --type=app --json | jq -r '.[]')
for APP_NAME in $AFFECTED_APPS; do
# Get the project.json configuration for the current app
APP_CONFIG=$(npx nx show project "$APP_NAME" --json)
# Extract s3Bucket and outputPath from the project.json's deploy-s3 target options
S3_BUCKET=$(echo "$APP_CONFIG" | jq -r '.targets."deploy-s3".options.s3Bucket')
OUTPUT_PATH=$(echo "$APP_CONFIG" | jq -r '.targets."deploy-s3".options.outputPath')
if [ -n "$S3_BUCKET" ] && [ -n "$OUTPUT_PATH" ]; then
echo "Deploying $APP_NAME to s3://$S3_BUCKET from $OUTPUT_PATH"
# Execute the custom deploy-s3 target for the specific app
npx nx deploy-s3 "$APP_NAME"
echo "$APP_NAME deployed successfully."
else
echo "Skipping deployment for $APP_NAME: s3Bucket or outputPath not found in deploy-s3 target."
fi
done
Step 3: Make a change to apps/my-frontend-admin/src/app/app.tsx.
// apps/my-frontend-admin/src/app/app.tsx
// ...
export function App() {
// Adding a new line to make this app 'affected'
const adminFeature = "Admin Panel";
return (
// ...
);
}
// ...
Step 4: Commit and push to main (or create a PR to main).
git add .
git commit -m "feat: Update admin frontend for deployment demo"
git push origin main
Expected Outputs:
- GitHub Actions Run:
  - The deploy-affected-frontends job will start.
  - The Find and Build Affected Frontend Applications step will correctly identify my-frontend-admin as affected. It will then run npx nx build my-frontend-admin --configuration=production.
  - The Deploy Affected Frontend Applications to S3 step will then:
    - Identify my-frontend-admin again.
    - Extract my-admin-frontend-bucket as the s3Bucket from my-frontend-admin’s project.json.
    - Execute npx nx deploy-s3 my-frontend-admin, which runs the aws s3 sync command targeting s3://my-admin-frontend-bucket.
  - my-frontend-public will not be built or deployed because it was not affected by the changes.
This example showcases how nx show projects --affected --json can be programmatically parsed (using jq) to create intelligent CI/CD pipelines that deploy only the applications that truly need it, using their specific configurations. This is critical for efficient, fast, and safe deployments in large monorepos.
Version Pinning & Rollbacks in a Monorepo
Managing versions and enabling rollbacks in a monorepo, especially with micro-frontends and interconnected services, is crucial for maintaining stability and rapidly responding to incidents.
Strategies for rolling back individual micro-frontends or backend services without affecting the entire system.
Immutable Deployments & Versioned Artifacts:
- Mechanism: Every deployment produces a new, unique, immutable artifact (e.g., Docker image with a unique tag, S3 bucket versioned objects, a new CDN deployment). Old artifacts are retained.
- Rollback Strategy: To rollback, simply point your infrastructure (e.g., Kubernetes deployment, CDN distribution, load balancer) to a previous, known-good artifact version. This is fast and low-risk.
- Nx Relevance: nx build outputs should be considered immutable. Use Git SHAs or semantic versions as Docker image tags or S3 path prefixes (see the sketch below).
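As a minimal sketch of this pattern in a CI script (the registry, image name, and Dockerfile path are illustrative):

# Tag the artifact with the commit SHA so every deployment is uniquely addressable
IMAGE_TAG=$(git rev-parse --short HEAD)
npx nx build my-api --configuration=production
docker build -t registry.example.com/my-api:"$IMAGE_TAG" -f apps/my-api/Dockerfile .
docker push registry.example.com/my-api:"$IMAGE_TAG"
# Rolling back later means re-pointing the deployment at a previous tag, not rebuilding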
Feature Flags / Kill Switches:
- Mechanism: Wrap new or potentially risky features in feature flags. These can be toggled on/off at runtime without redeployment.
- Rollback Strategy: If a new feature causes issues, simply disable its feature flag.
- Nx Relevance: Feature flags themselves can be managed in a shared Nx library, and configured per application.
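A minimal sketch of such a shared flag library follows; the flag name, environment variable, and library path are assumptions for illustration:

// libs/shared/feature-flags/src/index.ts (illustrative sketch)
const flags: Record<string, boolean> = {
  // Toggled via configuration rather than a redeploy
  newCheckout: process.env['NX_PUBLIC_NEW_CHECKOUT'] === 'true',
};

export function isEnabled(flag: string): boolean {
  return flags[flag] ?? false;
}

An application then guards the risky code path with isEnabled('newCheckout'); note that a true runtime kill switch would source flag values from a remote configuration service rather than a build-time environment variable.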
Blue/Green or Canary Deployments:
- Mechanism:
- Blue/Green: Deploy a new version (Green) alongside the old (Blue). Route all traffic to Green once it’s validated. If issues arise, switch back to Blue instantly.
- Canary: Gradually roll out a new version to a small subset of users (Canary group). Monitor for errors, and if stable, expand to all users. If issues, rollback the Canary group.
- Rollback Strategy: Fast traffic switching for Blue/Green; stopping the Canary deployment and routing traffic back for Canary.
- Nx Relevance: Your Nx applications and services are the units being deployed. The CI/CD pipeline (orchestrated with Nx commands) handles the actual blue/green/canary logic using cloud provider tools (e.g., AWS CodeDeploy, Kubernetes Ingress/Service meshes).
Database Migrations:
- Challenge: Database schema changes are often difficult to roll back.
- Strategy: Design migrations to be backward-compatible (e.g., add new columns, don’t remove old ones immediately). Use versioned database migration tools (e.g., Flyway, Liquibase, TypeORM migrations).
- Nx Relevance: Database migration scripts can live in an Nx data-access or utility library. A dedicated Nx executor can run these migrations in CI/CD.
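For example, here is a hedged sketch of such a migration target wrapping TypeORM’s CLI via nx:run-commands; the library, target name, and data-source path are assumptions:

// libs/data-access/project.json (illustrative excerpt)
{
  "targets": {
    "db-migrate": {
      "executor": "nx:run-commands",
      "options": {
        "command": "npx typeorm migration:run -d dist/libs/data-access/src/data-source.js"
      }
    }
  }
}

A pipeline can invoke it with npx nx run data-access:db-migrate before rolling out services that depend on the new schema; using the run project:target form also avoids colliding with the built-in nx migrate command.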
How to handle version compatibility between a host and its dynamically loaded remotes during a rollback.
This is where Module Federation’s shared dependency management becomes critical, as discussed in Section 3.2.
- Strict Versioning (strictVersion: true):
  - Mechanism: Ensures that the host and remote use exactly the same version of a shared dependency. If a remote is rolled back to an older version of a shared library and the host has moved forward, strictVersion: true will cause a runtime error, preventing potentially subtle bugs from version mismatches.
  - Rollback Impact: Forces either the host or the remote to also roll back its shared dependency version, or to update the other component to match. This can make rollbacks more coordinated but prevents “silent” failures.
- Semantic Versioning (requiredVersion: '^x.y.z'):
  - Mechanism: Allows compatible versions (e.g., ^1.0.0 means any 1.x.x version).
  - Rollback Impact: If a host expects ^1.0.0 and a remote is rolled back to a 1.x.x version, it might work without issues. However, if the rollback goes to a 0.x.x (breaking change), issues will arise.
- Singleton Dependencies:
  - Mechanism: Use singleton: true for libraries that must have only one instance (e.g., React, state management stores).
  - Rollback Impact: If a host rolls back, it will load its older singleton. If a remote tries to load a different version, it will attempt to use the host’s version. This generally leads to more stable behavior for core libraries.
- External Version Registry:
- Mechanism: Maintain an external registry (API or JSON file) that dictates which versions of remotes and shared libraries are compatible with which host version.
- Rollback Strategy: When rolling back a host, the external registry would ensure it loads compatible versions of remotes. When rolling back a remote, the registry would be updated to reflect its older version, and potentially notify hosts.
- Nx Relevance: The remotes-config.ts from Section 3.1 could fetch this from an external source. (A shared-dependency configuration sketch illustrating the options above follows this list.)
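To make these options concrete, here is a hedged sketch of a host’s shared-dependency configuration combining the strategies above; the file location, remote names, libraries, and versions are all illustrative:

// apps/host/module-federation.config.ts (illustrative sketch)
export const mfConfig = {
  name: 'host',
  remotes: ['admin', 'shop'],
  shared: {
    // Core framework: exactly one copy, identical version everywhere
    react: { singleton: true, strictVersion: true, requiredVersion: '18.2.0' },
    'react-dom': { singleton: true, strictVersion: true, requiredVersion: '18.2.0' },
    // Internal design system: one instance, but semver-compatible versions allowed
    '@myorg/ui': { singleton: true, requiredVersion: '^1.0.0' },
    // Utility library: multiple copies tolerated across host and remotes
    '@myorg/utils': { singleton: false, requiredVersion: '^2.0.0' },
  },
};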
Best Practice: Use a combination of immutable deployments, strong versioning for shared dependencies (singleton: true, strictVersion: true for critical ones, semantic for others), and a clear rollback strategy defined per application and for the host. When a remote is rolled back, test its integration thoroughly with the current host and other remotes.
Cross-Project Release & Deployment Coordination
Orchestrating releases of interdependent projects in a monorepo is a complex task. Nx’s powerful affected command combined with custom executors and release features streamlines this.
Discussion on orchestrating releases of multiple interdependent projects (e.g., API and its client library).
The challenge arises when changes in a core library (e.g., an API client) require updates and subsequent deployments of multiple dependent applications (e.g., several frontends that consume that client).
- Nx Release:
  - Mechanism: Nx 16.8+ introduced nx release, a powerful tool for automating versioning, changelog generation, and publishing of projects within your monorepo. It can handle independent or fixed/synchronized versioning strategies.
  - Orchestration: Configure nx release to identify affected projects, increment their versions, generate changelogs, and then trigger their build and publish targets.
- Chained affected Commands:
  - Mechanism: Run nx affected commands sequentially or in parallel, where the output of one influences the next.
  - Example: First, build affected backend services. Then, build affected client libraries that depend on those services. Finally, build and deploy affected frontends that use those client libraries.
- Custom Nx Executors/Generators for Deployment Orchestration:
  - Mechanism: Create your own custom Nx executors (e.g., my-company:release-orchestrator) that encapsulate complex, multi-step deployment logic. These executors can:
    - Query the Nx project graph (readNxJson, readProjectConfiguration, projectGraph) to understand dependencies.
    - Run other Nx targets (runExecutor).
    - Execute custom shell commands.
  - Benefits: Centralized, testable, and reusable deployment logic (see the executor sketch after this list).
- Pipeline-as-Code (YAML Orchestration):
  - Mechanism: Use your CI/CD platform’s native scripting (e.g., GitHub Actions YAML, GitLab CI .gitlab-ci.yml) to orchestrate the steps.
  - Benefits: Fully version-controlled alongside your code.
  - Limitations: Can become verbose and less reusable across different monorepos compared to Nx executors.
Key Principle: Identify the “release ripple effect.” A change in a low-level library can ripple up through its dependents. Your release orchestration should account for this, ensuring dependent projects are updated, rebuilt, and deployed in the correct order.
Hands-on Example: Create a deploy-all-affected-api-and-clients custom executor or CI script that: 1) Detects affected backend APIs. 2) Deploys them. 3) Then detects affected API client libraries. 4) Generates new versions/builds of frontend apps dependent on those client libraries. 5) Deploys those frontends. This demonstrates a chained, intelligent deployment.
This is a complex scenario, best implemented with a combination of CI scripting and Nx commands. We will use a GitHub Actions script for clarity.
Prerequisites:
- An Nx Workspace.
- A Node.js API application: my-api.
- A TypeScript client library for my-api: api-client.
- A React frontend application that uses api-client: my-frontend.
- Create these if they don’t exist:
  npx nx g @nx/node:app my-api --directory=apps/my-api --compiler=tsc --projectNameAndRootFormat=as-provided
  npx nx g @nx/js:lib api-client --directory=libs/api/client --compiler=tsc --projectNameAndRootFormat=as-provided
  npx nx g @nx/react:app my-frontend --directory=apps/my-frontend --bundler=webpack --style=css --projectNameAndRootFormat=as-provided
- Crucially: my-api should expose an API, api-client should use my-api (e.g., via buildable output or direct import for dev), and my-frontend should depend on api-client.
- Add my-api as an implicit dependency to api-client’s project.json for accurate affected detection:

  // libs/api/client/project.json
  {
    "name": "api-client",
    // ...
    "implicitDependencies": ["my-api"] // <--- Add this
  }

- AWS credentials and S3 buckets set up for frontend deployment (as in the previous example), and a conceptual deployment for my-api (e.g., to an EC2 instance or Lambda).
Step 1: Define deploy targets in project.json for my-api and my-frontend.
apps/my-api/project.json (partial):
{
"name": "my-api",
// ...
"targets": {
"build": { /* ... */ },
"deploy": {
"executor": "nx:run-commands",
"options": {
"command": "echo 'Deploying my-api to production...' && echo 'Simulating API deployment for my-api version {options.tag}'",
"tag": "latest" // Placeholder, can be dynamic
}
}
}
}
apps/my-frontend/project.json (partial):
{
"name": "my-frontend",
// ...
"targets": {
"build": { /* ... */ },
"deploy": {
"executor": "nx:run-commands",
"options": {
"command": "echo 'Deploying my-frontend to S3...' && echo 'aws s3 sync dist/apps/my-frontend s3://my-frontend-bucket --delete'",
"outputPath": "dist/apps/my-frontend",
"s3Bucket": "my-frontend-bucket"
}
}
}
}
Step 2: Create a CI workflow (.github/workflows/orchestrated-deploy.yml).
This workflow will use a series of npx nx affected commands and jq parsing to orchestrate the deployments.
# .github/workflows/orchestrated-deploy.yml
name: Orchestrated API and Frontend Deployment
on:
push:
branches:
- main
- master
permissions:
contents: read
jobs:
orchestrated-deploy:
runs-on: ubuntu-latest
env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
AWS_REGION: us-east-1
# Essential for Nx affected commands
# For pushes, compare against the previous commit; consider nrwl/nx-set-shas for robust SHA resolution.
NX_BASE: ${{ github.event.pull_request.base.sha || github.event.before }}
NX_HEAD: ${{ github.event.pull_request.head.sha || github.sha }}
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- uses: actions/setup-node@v4
with:
node-version: '20'
cache: 'npm'
- name: Install dependencies
run: npm ci
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v4
with:
aws-access-key-id: ${{ env.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ env.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ env.AWS_REGION }}
- name: 1. Detect and Deploy Affected Backend APIs
id: deploy_apis
run: |
echo "Detecting affected backend API applications..."
# Find affected apps that are of type 'app' and are our API (or use tags for more precision)
AFFECTED_APIS=$(npx nx show projects --affected --type=app --json | jq -r 'map(select(. == "my-api")) | .[]')
if [ -z "$AFFECTED_APIS" ]; then
echo "No affected backend APIs to deploy."
echo "::set-output name=apis_deployed::false"
else
echo "Affected APIs: $AFFECTED_APIS"
echo "Building affected APIs..."
npx nx affected --target=build --type=app --filter="my-api" # Build affected API only
for API_NAME in $AFFECTED_APIS; do
echo "Deploying API: $API_NAME"
npx nx deploy "$API_NAME" # Execute the deploy target
done
echo "::set-output name=apis_deployed::true"
fi
- name: 2. Detect and Build Affected API Client Libraries (if APIs were deployed)
id: build_clients
# This step runs if APIs were deployed OR if client libs were changed directly
run: |
echo "Detecting affected API client libraries..."
# Find affected libs that are our api-client (or use tags)
AFFECTED_CLIENTS=$(npx nx show projects --affected --type=lib --json | jq -r 'map(select(. == "api-client")) | .[]')
if [ -z "$AFFECTED_CLIENTS" ]; then
echo "No affected API client libraries to build."
echo "::set-output name=clients_built::false"
else
echo "Affected client libraries: $AFFECTED_CLIENTS"
for CLIENT_LIB_NAME in $AFFECTED_CLIENTS; do
echo "Building client library: $CLIENT_LIB_NAME"
npx nx build "$CLIENT_LIB_NAME" # Build the client library
done
echo "::set-output name=clients_built::true"
fi
- name: 3. Detect, Build, and Deploy Affected Frontends (if APIs deployed or Clients built)
id: deploy_frontends
if: steps.deploy_apis.outputs.apis_deployed == 'true' || steps.build_clients.outputs.clients_built == 'true'
run: |
echo "Detecting affected frontend applications..."
# Find affected apps that are our frontend (or use tags)
# We need to consider frontends affected by API changes OR client lib changes
AFFECTED_FRONTENDS=$(npx nx show projects --affected --type=app --json | jq -r 'map(select(. == "my-frontend")) | .[]')
if [ -z "$AFFECTED_FRONTENDS" ]; then
echo "No affected frontend applications to build/deploy."
else
echo "Affected frontend applications: $AFFECTED_FRONTENDS"
for FRONTEND_NAME in $AFFECTED_FRONTENDS; do
echo "Building frontend: $FRONTEND_NAME"
npx nx build "$FRONTEND_NAME" --configuration=production # Rebuild frontend due to client lib changes
echo "Deploying frontend: $FRONTEND_NAME"
npx nx deploy "$FRONTEND_NAME" # Execute the deploy target
done
fi
Step 3: Make a change in apps/my-api/src/main.ts (to make it and its dependents affected).
// apps/my-api/src/main.ts
// ...
console.log('API started successfully! (v2)'); // Made a small change
// ...
Step 4: Commit and push to main.
git add .
git commit -m "feat: Update API, triggering chained deployment"
git push origin main
Expected Outputs:
- GitHub Actions Run:
  - 1. Detect and Deploy Affected Backend APIs:
    - my-api will be detected as affected.
    - npx nx build my-api will run.
    - npx nx deploy my-api will run, printing “Deploying my-api to production…”
  - 2. Detect and Build Affected API Client Libraries:
    - Because my-api was affected, and api-client has my-api as an implicitDependency, api-client will also be detected as affected.
    - npx nx build api-client will run.
  - 3. Detect, Build, and Deploy Affected Frontends:
    - Because api-client was affected (and my-frontend depends on api-client), my-frontend will be detected as affected.
    - npx nx build my-frontend --configuration=production will run.
    - npx nx deploy my-frontend will run, printing “Deploying my-frontend to S3…”
This sophisticated CI script demonstrates a chained, intelligent deployment strategy. A single change to my-api triggers the correct sequence of builds and deployments for all downstream affected projects (API itself, its client, and the frontend consuming the client), ensuring consistency and efficient updates across interdependent parts of your monorepo.
Infrastructure-as-Code (IaC) within Nx
Managing your infrastructure alongside your application code in a monorepo is a powerful pattern. It brings consistency, version control, and leverages Nx’s graph for impact analysis.
Managing Terraform, Pulumi, or CloudFormation alongside application code in Nx.
- Motivation:
- Co-location: Infrastructure changes are often directly tied to application changes (e.g., a new service needs a new database). Co-locating them simplifies development and review.
- Version Control: Both app code and IaC are managed in the same Git repository.
- Consistency: Use Nx generators to standardize IaC configurations.
- Visibility: nx graph can show dependencies between applications and their underlying infrastructure, aiding impact analysis.
- Nx Integration Strategy:
  - Dedicated IaC Libraries: Create Nx libraries specifically for your IaC configurations (e.g., libs/infra/aws-vpc, libs/infra/k8s-cluster).
    - Providers: Each library might encapsulate a specific cloud provider’s resources or a logical infrastructure unit.
    - Modularity: Treat IaC modules like code modules – small, focused, and reusable.
  - Custom Nx Executors: Develop custom Nx executors (or use nx:run-commands) that wrap your IaC CLI tools (Terraform, Pulumi, AWS CDK, CloudFormation), e.g.:
    - nx terraform:apply <project-name>
    - nx pulumi:up <project-name>
    - nx cloudformation:deploy <project-name>
  - Dependency Awareness:
    - implicitDependencies: Use implicitDependencies in your project.json files to show that an application depends on a specific infrastructure library. This ensures that if the infrastructure changes, the application is also considered “affected” (e.g., requiring a rebuild or redeployment that pulls the latest infra configuration).
    - outputs and inputs: If an IaC task produces an output (e.g., a deployed URL) that is consumed by an application build, define these in project.json for proper caching and dependency tracking.
  - CI/CD Integration: Integrate the Nx IaC executors into your CI/CD pipelines. This ensures that infrastructure changes are validated and deployed automatically (or after review).
Hands-on Example: Create a simple Nx library for Terraform configurations. Demonstrate how to run nx terraform apply for affected infrastructure changes.
We’ll create a simple Terraform configuration to create an S3 bucket and then execute it using a custom Nx executor.
Prerequisites:
- An Nx Workspace.
- Terraform CLI installed locally and on your CI runner.
- AWS credentials configured (environment variables or
~/.aws/credentials).
Step 1: Create an Nx library for Terraform configurations.
npx nx g @nx/js:lib infra-s3-bucket --directory=libs/infra/s3-bucket --bundler=none --unitTestRunner=none --projectNameAndRootFormat=as-provided
Note: --bundler=none and --unitTestRunner=none keep the generated tooling minimal, since this library will hold Terraform files, not TypeScript/JavaScript. (Exact generator flags vary slightly between Nx versions.)
Step 2: Add Terraform files to the new library.
Create libs/infra/s3-bucket/main.tf:
# libs/infra/s3-bucket/main.tf
resource "aws_s3_bucket" "my_tf_bucket" {
bucket = "nx-monorepo-tf-bucket-${var.environment}"
acl = "private"
tags = {
Environment = var.environment
ManagedBy = "NxTerraform"
}
}
variable "environment" {
description = "Deployment environment (e.g., dev, staging, prod)"
type = string
default = "dev"
}
output "bucket_name" {
value = aws_s3_bucket.my_tf_bucket.bucket
description = "The name of the created S3 bucket"
}
Note: Replace nx-monorepo-tf-bucket with a globally unique name; var.environment will be passed in dynamically. Also be aware that on AWS provider v4+ the inline acl argument is deprecated (and modern buckets disable ACLs by default), so for real projects prefer the dedicated aws_s3_bucket_acl resource or omit acl entirely; it is kept here only to keep the example short.
Step 3: Define a custom terraform:apply executor in libs/infra/s3-bucket/project.json.
This executor will wrap the terraform apply command.
// libs/infra/s3-bucket/project.json
{
"name": "infra-s3-bucket",
"$schema": "../../../node_modules/nx/schemas/project-schema.json",
"sourceRoot": "libs/infra/s3-bucket",
"projectType": "library",
"targets": {
"init": {
"executor": "nx:run-commands",
"options": {
"command": "terraform init -backend-config=path={options.backendConfigPath}",
"cwd": "libs/infra/s3-bucket"
}
},
"plan": {
"executor": "nx:run-commands",
"options": {
"command": "terraform plan -var='environment={options.environment}'",
"cwd": "libs/infra/s3-bucket"
}
},
"apply": {
"executor": "nx:run-commands",
"options": {
"command": "terraform apply -auto-approve -var='environment={options.environment}'",
"cwd": "libs/infra/s3-bucket"
}
},
"destroy": {
"executor": "nx:run-commands",
"options": {
"command": "terraform destroy -auto-approve -var='environment={options.environment}'",
"cwd": "libs/infra/s3-bucket"
}
}
},
"tags": ["scope:infra", "type:terraform"]
}
Step 4: Configure my-frontend-admin (from previous example) to implicitly depend on infra-s3-bucket.
This ensures that if infra-s3-bucket changes, my-frontend-admin is considered “affected” in the context of certain operations.
apps/my-frontend-admin/project.json (partial):
{
"name": "my-frontend-admin",
// ...
"implicitDependencies": ["infra-s3-bucket"], // <--- Add this
"targets": {
"build": { /* ... */ },
"deploy-s3": { /* ... */ }
},
"tags": ["type:app", "scope:admin"]
}
Run npm install in the root.
Step 5: Initialize and apply Terraform locally.
First, you need to initialize Terraform. It’s good practice to use a local backend (e.g., a file) for development.
npx nx run infra-s3-bucket:init --backendConfigPath="local-backend.tfstate"
Note: We invoke the target as nx run infra-s3-bucket:init because init is also a built-in Nx command, so plain npx nx init would not hit our target. Also, -backend-config=path=... only supplements a backend that is already declared; for a real setup you’d declare the backend in a backend.tf file (see the sketch below) or pass the full configuration explicitly in the init command.
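A minimal backend.tf for local development might look like this sketch (the state file path is illustrative):

# libs/infra/s3-bucket/backend.tf (illustrative sketch)
terraform {
  backend "local" {
    path = "local-backend.tfstate"
  }
}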
For local execution with a simple file backend, you might just run terraform init inside the library:
cd libs/infra/s3-bucket
terraform init
cd - # Go back to workspace root
Now, apply the Terraform configuration:
npx nx apply infra-s3-bucket --environment=dev
Expected Output:
Terraform will show a plan and apply it. Because our executor passes -auto-approve, you won’t actually be prompted; the transcript below includes the confirmation prompt you would see when running terraform apply manually.
NX Running target apply for project infra-s3-bucket...
Terraform will perform the following actions:
# aws_s3_bucket.my_tf_bucket will be created
+ resource "aws_s3_bucket" "my_tf_bucket" {
+ acl = "private"
+ arn = (known after apply)
+ bucket = "nx-monorepo-tf-bucket-dev"
+ bucket_domain_name = (known after apply)
+ bucket_prefix = (known after apply)
+ bucket_regional_domain_name = (known after apply)
+ force_destroy = false
+ id = (known after apply)
+ region = (known after apply)
+ tags = {
+ "Environment" = "dev"
+ "ManagedBy" = "NxTerraform"
}
+ tags_all = {
+ "Environment" = "dev"
+ "ManagedBy" = "NxTerraform"
}
}
Plan: 1 to add, 0 to change, 0 to destroy.
Do you really want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
aws_s3_bucket.my_tf_bucket: Creating...
aws_s3_bucket.my_tf_bucket: Creation complete after ...s [id=nx-monorepo-tf-bucket-dev]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
NX Successfully ran target apply for project infra-s3-bucket
You should now have an S3 bucket named nx-monorepo-tf-bucket-dev in your AWS account.
Step 6: Demonstrate nx affected for infrastructure changes.
- Modify libs/infra/s3-bucket/main.tf by adding a new tag or changing the ACL (e.g., acl = "public-read" for testing, but typically don’t do this for sensitive buckets):

  # libs/infra/s3-bucket/main.tf
  resource "aws_s3_bucket" "my_tf_bucket" {
    bucket = "nx-monorepo-tf-bucket-${var.environment}"
    acl    = "public-read" # <--- Change here
    # ...
  }

- Run npx nx affected --target=plan.

Expected Output: Nx will detect that infra-s3-bucket is affected and will run the plan target for it:

NX Running target plan for project infra-s3-bucket...
Terraform will perform the following actions:
  # aws_s3_bucket.my_tf_bucket will be updated in-place
  ~ resource "aws_s3_bucket" "my_tf_bucket" {
      acl = "public-read"
      # (attr values remain unchanged)
    }
Plan: 0 to add, 1 to change, 0 to destroy.
NX Successfully ran target plan for project infra-s3-bucket

This shows that Nx correctly identified the affected infrastructure project and ran the plan command. You could then integrate this into a CI workflow:

# .github/workflows/infra-deploy.yml
# ...
- name: Plan affected infrastructure changes
  run: npx nx affected --target=plan --environment=prod # Pass environment as an option
# Add a step to comment the plan on the PR for review
- name: Apply affected infrastructure changes (manual approval for prod)
  if: github.ref == 'refs/heads/main' && contains(github.event.pull_request.labels.*.name, 'infra-approved')
  run: npx nx affected --target=apply --environment=prod
# ...
This hands-on example demonstrates how to integrate Infrastructure-as-Code (Terraform) into an Nx monorepo, using custom executors and nx affected to manage and deploy infrastructure changes in a granular and controlled manner.
8. Bonus Section: Further Learning and Resources
To continue your journey as an Nx expert, here are some recommended resources for advanced topics, community insights, and cutting-edge developments.
Recommended Nx Talks/Conference Videos
- Nx Conf Presentations: Always a treasure trove of advanced topics. Search for “Nx Conf” on YouTube.
- “Advanced Module Federation Patterns”: Look for recent talks by Nx core team members or prominent community contributors on Module Federation, especially those covering dynamic remotes and enhanced runtime.
- “Distributed Task Execution at Scale”: Talks detailing how large organizations leverage Nx Cloud’s DTE for massive monorepos.
- “Custom Nx Plugins and Generators Deep Dive”: For extending Nx’s capabilities with your own automation.
- “Nx & AI: The Future of Monorepo Development”: Stay updated on the latest AI integrations and what’s coming next with the Nx Model Context Protocol (MCP).
Expert Blogs/Publications
- Nx Blog (nx.dev/blog): The official Nx blog is the primary source for the latest features, architectural insights, and best practices directly from the Nx team. Keep an eye on the “Making your LLM smarter” series for AI integration.
- Victor Savkin’s Blog: Victor Savkin, one of the creators of Nx, often publishes in-depth articles on monorepo architecture, build systems, and advanced Nx concepts.
- Nrwl Engineering Blog: Other Nrwl team members often share valuable insights on various technical topics related to Nx.
- Community Articles on Medium, dev.to, etc.: Search for “Advanced Nx,” “Nx Module Federation,” “Nx Monorepo Security” to find articles by other experienced developers sharing their real-world solutions.
Open Source Nx Plugins
The Nx ecosystem thrives on community contributions. Exploring open-source plugins can reveal innovative approaches and specialized solutions.
- @nx-go/nx-go: For integrating Go projects into your Nx monorepo.
- @nx-dotnet/core: For managing .NET applications and libraries within Nx.
- @nx-python/core: For Python projects.
- Custom Deployment Plugins: Search GitHub/NPM for “nx plugin deploy” (e.g., nx plugin deploy aws, nx plugin deploy azure) for specialized deployment executors.
- @e-square/nx-affected-matrix and @e-square/nx-distributed-task: GitHub Actions to distribute Nx jobs efficiently, complementing Nx Cloud.
- Nx Community Slack/Discord: Engage with the community to discover new plugins and share your own.
Research Papers/RFCs
For those who want to delve into the theoretical underpinnings or upcoming features:
- Webpack Module Federation Documentation (module-federation.io): The official documentation and RFCs for Webpack Module Federation, especially @module-federation/enhanced, provide deep technical details.
- Nx RFCs/Proposals (Nx GitHub Repository): Keep an eye on the Nx GitHub repository for ongoing RFCs or discussions about future features and architectural changes.
- Model Context Protocol (MCP) Documentation (modelcontextprotocol.io): For the technical specifications behind Nx’s AI integration.
Continuously engaging with these resources will keep your Nx expertise sharp and at the forefront of monorepo development.