React Native: Advanced Mastery Guide (Practical Edition)


This document builds upon foundational React Native knowledge, propelling you into expert-level application development. We will explore the cutting-edge aspects of React Native, focusing on architectural shifts, sophisticated state management, in-depth performance analysis, and robust deployment strategies, all illuminated with practical code examples.

1. Deep Dive into the New Architecture

React Native’s New Architecture fundamentally re-engineers how JavaScript communicates with native code, addressing long-standing performance bottlenecks. The core pillars are JSI (JavaScript Interface), TurboModules, Fabric Renderer, and Codegen.

1.1 Understanding JSI (JavaScript Interface)

JSI enables direct and synchronous interaction between JavaScript and native code. It replaces the traditional asynchronous JavaScript bridge, drastically improving performance and responsiveness.
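To make the contrast concrete, here is a minimal TypeScript sketch (the MathModule and circleAreaNative names are hypothetical) of the same call over the legacy bridge versus JSI:

import { NativeModules } from 'react-native';

// Legacy bridge (hypothetical MathModule): every call is asynchronous, and
// arguments/results are serialized across the bridge.
async function areaViaBridge(r: number): Promise<number> {
  return NativeModules.MathModule.circleArea(r);
}

// JSI (hypothetical host function installed on the runtime): callable
// synchronously, with no serialization round-trip.
declare var circleAreaNative: ((r: number) => number) | undefined;
function areaViaJSI(r: number): number | undefined {
  return typeof circleAreaNative === 'function' ? circleAreaNative(r) : undefined;
}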

Practical Example: Exposing a C++ Host Object via JSI

This example demonstrates how a C++ HostObject can be created and exposed to JavaScript via JSI, allowing synchronous method calls. This is a low-level example illustrating the JSI mechanism, typically abstracted away by TurboModules.

Conceptual C++ (JSI) Implementation:

Let’s assume you have a native C++ module that calculates complex Fibonacci numbers. You want to expose a fibonacci(n) method to JavaScript synchronously.

// Common/NativeAwesomeFibonacci.h
#pragma once

#include <jsi/jsi.h>
#include <ReactCommon/TurboModule.h> // Though it's JSI, TurboModule headers are useful

namespace MyApp {

// Define the HostObject class
class JSI_EXPORT FibonacciHostObject : public facebook::jsi::HostObject {
public:
    FibonacciHostObject(facebook::jsi::Runtime& runtime);

    // This method will be called when JavaScript tries to access a property
    facebook::jsi::Value get(facebook::jsi::Runtime& runtime, const facebook::jsi::PropNameID& propNameId) override;

private:
    long long calculateFibonacci(int n); // Native C++ calculation
    facebook::jsi::Runtime& m_runtime;
};

// Function to install the HostObject into the JavaScript runtime
void installFibonacciModule(facebook::jsi::Runtime& runtime);

} // namespace MyApp
// Common/NativeAwesomeFibonacci.cpp
#include "NativeAwesomeFibonacci.h"
#include <iostream> // For logging

namespace MyApp {

long long FibonacciHostObject::calculateFibonacci(int n) {
    if (n <= 1) return n;
    long long a = 0, b = 1;
    for (int i = 2; i <= n; ++i) {
        long long next = a + b;
        a = b;
        b = next;
    }
    return b;
}

FibonacciHostObject::FibonacciHostObject(facebook::jsi::Runtime& runtime) : m_runtime(runtime) {}

facebook::jsi::Value FibonacciHostObject::get(facebook::jsi::Runtime& runtime, const facebook::jsi::PropNameID& propNameId) {
    auto name = propNameId.utf8(runtime);

    if (name == "fibonacci") {
        return facebook::jsi::Function::createFromHostFunction(runtime,
            propNameId,
            1, // number of arguments
            [this](facebook::jsi::Runtime& rt,
                   const facebook::jsi::Value& thisVal,
                   const facebook::jsi::Value* args,
                   size_t count) -> facebook::jsi::Value {
                if (count != 1 || !args[0].isNumber()) {
                    throw facebook::jsi::JSError(rt, "Expected one number argument for fibonacci.");
                }
                int n = (int)args[0].asNumber();
                if (n < 0 || n > 78) { // fib(79) exceeds the exact integer range of a double (2^53)
                    throw facebook::jsi::JSError(rt, "Fibonacci input must be between 0 and 78.");
                }
                long long result = calculateFibonacci(n);
                std::cout << "Calculated fibonacci(" << n << ") = " << result << std::endl;
                return facebook::jsi::Value((double)result);
            });
    }

    return facebook::jsi::Value::undefined();
}

void installFibonacciModule(facebook::jsi::Runtime& runtime) {
    auto instance = std::make_shared<FibonacciHostObject>(runtime);
    auto object = facebook::jsi::Object::createFromHostObject(runtime, instance);
    runtime.global().setProperty(runtime, "__MyAppFibonacciNative", std::move(object));
    std::cout << "Fibonacci JSI module installed." << std::endl;
}

} // namespace MyApp

Integrating JSI Module into React Native (e.g., in NativeModules.cpp on Android/iOS):

In your app's native initialization code (on Android, typically a C++ JNI entry point such as OnLoad.cpp; on iOS, wherever you have access to the jsi::Runtime, for example from the bridge's runtime installer), install the JSI module:

// Example: a custom JSI install entry point (the exact location depends on your setup)
#include "NativeAwesomeFibonacci.h" // Your JSI module header

extern "C" JSI_EXPORT void jsiInstall(facebook::jsi::Runtime &runtime) {
    // Other JSI modules might be installed here by default
    MyApp::installFibonacciModule(runtime);
}

JavaScript Usage (React Native):

Once the C++ module is installed, you can access it directly from JavaScript.

import React, { useEffect, useState } from 'react';
import { View, Text, Button, StyleSheet, Alert } from 'react-native';

// Declare the global JSI object installed from C++ (TypeScript global augmentation)
declare global {
  var __MyAppFibonacciNative: {
    fibonacci(n: number): number;
  };
}

const JSIExample = () => {
  const [fibResult, setFibResult] = useState<number | null>(null);
  const [inputNum, setInputNum] = useState(10); // Example input

  const calculateWithJSI = () => {
    if (typeof global.__MyAppFibonacciNative !== 'undefined' &&
        typeof global.__MyAppFibonacciNative.fibonacci === 'function') {
      try {
        const result = global.__MyAppFibonacciNative.fibonacci(inputNum);
        setFibResult(result);
        console.log(`Fibonacci(${inputNum}) calculated via JSI: ${result}`);
      } catch (e) {
        Alert.alert('JSI Error', e.message);
        console.error('JSI Fibonacci calculation failed:', e);
      }
    } else {
      Alert.alert('Error', 'JSI Fibonacci module not found or not correctly installed.');
    }
  };

  useEffect(() => {
    // Initial calculation or to re-run when inputNum changes
    calculateWithJSI();
  }, [inputNum]);

  return (
    <View style={styles.container}>
      <Text style={styles.title}>JSI Fibonacci Calculator</Text>
      <Text style={styles.inputLabel}>Input N (for Fibonacci): {inputNum}</Text>
      <Button title="Calculate Fib(10)" onPress={() => setInputNum(10)} />
      <Button title="Calculate Fib(20)" onPress={() => setInputNum(20)} />
      <Button title="Calculate Fib(50)" onPress={() => setInputNum(50)} />

      {fibResult !== null && (
        <Text style={styles.resultText}>Fibonacci({inputNum}) = {fibResult}</Text>
      )}
      <Text style={styles.noteText}>
        (This is a conceptual example for JSI directly; TurboModules abstract this away.)
      </Text>
    </View>
  );
};

const styles = StyleSheet.create({
  container: {
    flex: 1,
    justifyContent: 'center',
    alignItems: 'center',
    padding: 20,
    backgroundColor: '#e8f5e9',
  },
  title: {
    fontSize: 22,
    fontWeight: 'bold',
    marginBottom: 20,
  },
  inputLabel: {
    fontSize: 16,
    marginVertical: 10,
  },
  resultText: {
    fontSize: 20,
    fontWeight: 'bold',
    marginTop: 30,
    color: '#388e3c',
  },
  noteText: {
    fontSize: 12,
    color: '#777',
    marginTop: 20,
    textAlign: 'center',
  },
});

export default JSIExample;

1.2 TurboModules: The Evolution of Native Modules

TurboModules use JSI to provide a more efficient and type-safe way to interact with native code. Codegen automatically generates much of the boilerplate.

Practical Example: Creating a TurboModule for a Custom Calendar Utility

Let’s create a TurboModule that exposes a native method to get the current date as a formatted string and another to add two numbers (a simple placeholder for more complex native logic).

1. Define the Module Interface (TypeScript js/NativeCalendarModule.ts):

import type { TurboModule } from 'react-native';
import { TurboModuleRegistry } from 'react-native';

export interface Spec extends TurboModule {
  // Method to get current date, will be asynchronous
  getSystemDate(): Promise<string>;
  // Synchronous method example (JSI allows this)
  addNumbers(a: number, b: number): number;
  // Void method example
  showNativeToast(message: string): void;
}

// Ensure the module name matches the native implementation
export default TurboModuleRegistry.getEnforcing<Spec>('NativeCalendarModule');
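If the module may legitimately be absent (e.g., in platform-specific builds), TurboModuleRegistry.get returns null instead of throwing, letting JavaScript degrade gracefully. A minimal sketch (adjust the import path to where your spec lives):

import { TurboModuleRegistry } from 'react-native';
import type { Spec } from './NativeCalendarModule';

// Unlike getEnforcing, get() returns null when the module is not installed.
const maybeCalendar = TurboModuleRegistry.get<Spec>('NativeCalendarModule');
if (maybeCalendar == null) {
  console.warn('NativeCalendarModule is not available in this build.');
}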

2. Configure Codegen (package.json):

In your library’s package.json (if building a reusable TurboModule) or your app’s package.json (if it’s an app-specific module):

{
  "name": "my-react-native-app",
  "version": "0.0.1",
  "private": true,
  "scripts": {
    "start": "react-native start",
    "android": "react-native run-android",
    "ios": "react-native run-ios",
    "test": "jest",
    "codegen": "./node_modules/.bin/react-native codegen-native-modules --p ./"
  },
  "dependencies": {
    "react": "18.2.0",
    "react-native": "0.76.0" // Or your target RN version
  },
  "codegenConfig": {
    "name": "NativeCalendarSpec",
    "type": "modules",
    "jsSrcsDir": "js",
    "android": {
      "javaPackageName": "com.myreactnativeapp"
    }
  }
}

After defining the spec and configuring codegenConfig, Codegen runs automatically as part of the native build: during pod install on iOS and during the Gradle build on Android. It generates the native interface files (e.g., the NativeCalendarModuleSpec classes) that your platform code implements.

3. Implement Native Code (iOS Swift):

Create ios/NativeCalendarModule.h and ios/NativeCalendarModule.mm. First, ios/NativeCalendarModule.h:

#import <React/RCTBridgeModule.h>
#import <React/RCTTurboModule.h> // For TurboModule support

@interface NativeCalendarModule : NSObject <RCTBridgeModule, RCTTurboModule>
@end

ios/NativeCalendarModule.mm (Objective-C++ for bridging Swift): You’d typically implement the logic in Swift, then bridge it using Objective-C++.

#import "NativeCalendarModule.h"
#import <React/RCTLog.h> // For RCTLog
#import <UIKit/UIKit.h> // For UIAlertController (toast equivalent)

// Swift bridge to Objective-C++
// Place your Swift code in NativeCalendarModuleImpl.swift
// Note: the TS spec declares addNumbers as synchronous; the legacy macro
// bridge below exposes it via a promise, while the codegen'd JSI path
// supports true synchronous calls.
@interface RCT_EXTERN_MODULE(NativeCalendarModuleImpl, NSObject)
RCT_EXTERN_METHOD(getSystemDate:(RCTPromiseResolveBlock)resolve reject:(RCTPromiseRejectBlock)reject)
RCT_EXTERN_METHOD(addNumbers:(double)a b:(double)b resolver:(RCTPromiseResolveBlock)resolve reject:(RCTPromiseRejectBlock)reject)
RCT_EXTERN_METHOD(showNativeToast:(NSString *)message)
@end


@implementation NativeCalendarModule

RCT_EXPORT_MODULE(NativeCalendarModule);

// Expose the Swift methods
- (std::shared_ptr<facebook::react::TurboModule>)getTurboModule:(const facebook::react::ObjCTurboModule::InitParams &)params {
    return std::make_shared<facebook::react::NativeCalendarModuleSpecJSI>(params);
}

// These methods are implemented in Swift and exposed via RCT_EXTERN_METHOD
// For TurboModules, these are primarily for the spec definition and not direct calls here.
// The JSI layer created by codegen directly invokes the Swift/Kotlin methods.

// Old bridge method style (for comparison, not used by TurboModule direct calls)
// RCT_EXPORT_METHOD(getSystemDate:(RCTPromiseResolveBlock)resolve reject:(RCTPromiseRejectBlock)reject) {
//     NSDateFormatter *formatter = [[NSDateFormatter alloc] init];
//     [formatter setDateFormat:@"yyyy-MM-dd HH:mm:ss"];
//     NSString *dateString = [formatter stringFromDate:[NSDate date]];
//     resolve(dateString);
// }

// RCT_EXPORT_METHOD(addNumbers:(double)a b:(double)b resolve:(RCTPromiseResolveBlock)resolve reject:(RCTPromiseRejectBlock)reject) {
//     resolve(@(a + b));
// }

@end

ios/NativeCalendarModuleImpl.swift (Actual Logic in Swift): You might need to create a new Swift file for this. Make sure your Bridging Header is set up.

import Foundation
import React
import UIKit // For Toast

@objc(NativeCalendarModuleImpl)
class NativeCalendarModuleImpl: NSObject {

  @objc(getSystemDate:reject:)
  func getSystemDate(resolve: @escaping RCTPromiseResolveBlock, reject: @escaping RCTPromiseRejectBlock) {
    let dateFormatter = DateFormatter()
    dateFormatter.dateFormat = "yyyy-MM-dd HH:mm:ss"
    let dateString = dateFormatter.string(from: Date())
    resolve(dateString)
  }

  @objc(addNumbers:b:resolver:rejecter:)
  func addNumbers(a: Double, b: Double, resolve: @escaping RCTPromiseResolveBlock, reject: @escaping RCTPromiseRejectBlock) {
    resolve(a + b)
  }

  @objc(showNativeToast:)
  func showNativeToast(message: String) {
      DispatchQueue.main.async {
          let alert = UIAlertController(title: nil, message: message, preferredStyle: .alert)
          // To make it behave like a toast, we dismiss it automatically after a short delay
          alert.view.backgroundColor = UIColor.black.withAlphaComponent(0.6)
          alert.view.alpha = 1.0
          alert.view.layer.cornerRadius = 15
          
          if let window = UIApplication.shared.windows.first {
              window.rootViewController?.present(alert, animated: true) {
                  DispatchQueue.main.asyncAfter(deadline: .now() + 2.0) {
                      alert.dismiss(animated: true)
                  }
              }
          }
      }
  }

  // Initialize this module on the main queue (it touches UIKit)
  @objc
  static func requiresMainQueueSetup() -> Bool {
    return true
  }
}

4. Implement Native Code (Android Kotlin):

Create android/app/src/main/java/com/myreactnativeapp/NativeCalendarModule.kt.

package com.myreactnativeapp

import com.facebook.react.bridge.NativeModule
import com.facebook.react.bridge.ReactApplicationContext
import com.facebook.react.bridge.ReactContextBaseJavaModule
import com.facebook.react.bridge.ReactMethod
import com.facebook.react.bridge.Promise
import com.facebook.react.module.annotations.ReactModule
import android.widget.Toast // For Toast
import java.text.SimpleDateFormat
import java.util.Date
import java.util.Locale

// Extends the generated spec (e.g., NativeCalendarModuleSpec from build/generated/source/codegen)
@ReactModule(name = NativeCalendarModule.NAME)
class NativeCalendarModule(reactContext: ReactApplicationContext) :
    NativeCalendarModuleSpec(reactContext) { // Replace NativeCalendarModuleSpec with your generated spec name
    
    companion object {
        const val NAME = "NativeCalendarModule"
    }

    override fun getName(): String {
        return NAME
    }

    // Corresponds to getSystemDate(): Promise<string>
    @ReactMethod
    override fun getSystemDate(promise: Promise) {
        try {
            val dateFormat = SimpleDateFormat("yyyy-MM-dd HH:mm:ss", Locale.getDefault())
            val currentDate = dateFormat.format(Date())
            promise.resolve(currentDate)
        } catch (e: Exception) {
            promise.reject("DATE_ERROR", "Failed to get system date", e)
        }
    }

    // Corresponds to addNumbers(a: number, b: number): number
    @ReactMethod(isBlockingSynchronousMethod = true) // Marks as synchronous for JSI
    override fun addNumbers(a: Double, b: Double): Double {
        return a + b
    }

    // Corresponds to showNativeToast(message: string): void
    @ReactMethod
    override fun showNativeToast(message: String) {
        Toast.makeText(reactApplicationContext, message, Toast.LENGTH_SHORT).show()
    }
}

5. Register the Module (Android MainApplication.kt):

Ensure your MainApplication.kt registers the new module.

package com.myreactnativeapp

import android.app.Application
import com.facebook.react.PackageList
import com.facebook.react.ReactApplication
import com.facebook.react.ReactHost
import com.facebook.react.ReactNativeHost
import com.facebook.react.ReactPackage
import com.facebook.react.bridge.NativeModule
import com.facebook.react.bridge.ReactApplicationContext
import com.facebook.react.defaults.DefaultNewArchitectureEntryPoint
import com.facebook.react.defaults.DefaultReactHost.getDefaultReactHost
import com.facebook.react.defaults.DefaultReactNativeHost
import com.facebook.soloader.SoLoader

import com.myreactnativeapp.NativeCalendarModule // Import your module

class MainApplication : Application(), ReactApplication {

  override val reactNativeHost: ReactNativeHost =
      object : DefaultReactNativeHost(this) {
        override fun getPackages(): List<ReactPackage> =
            PackageList(this).packages.apply {
              // Add your custom TurboModule package here.
              // For new architecture, you typically don't explicitly add TurboModule classes here if using autolinking and codegen correctly.
              // If you're building a library, the package will be discovered by autolinking.
              // For app-specific modules, you might add a simple package that provides your module.
              add(object : ReactPackage {
                  override fun createNativeModules(reactContext: ReactApplicationContext): List<NativeModule> {
                      return listOf(NativeCalendarModule(reactContext))
                  }

                  override fun createViewManagers(reactContext: ReactApplicationContext): List<com.facebook.react.uimanager.ViewManager<*, *>> {
                      return emptyList()
                  }
              })
            }

        override fun getJSMainModuleName(): String = "index"

        override fun getUseDeveloperSupport(): Boolean = BuildConfig.DEBUG

        override val isNewArchEnabled: Boolean = BuildConfig.IS_NEW_ARCHITECTURE_ENABLED

        override val isHermesEnabled: Boolean = BuildConfig.IS_HERMES_ENABLED
      }

  override val reactHost: ReactHost
    get() = getDefaultReactHost(applicationContext, reactNativeHost)

  override fun onCreate() {
    super.onCreate()
    SoLoader.init(this, false)
    if (BuildConfig.IS_NEW_ARCHITECTURE_ENABLED) {
      // Load the native entry point for the New Architecture.
      DefaultNewArchitectureEntryPoint.load()
    }
  }
}

6. Use the TurboModule from JavaScript:

// In your App.js or a screen component:
import React, { useState } from 'react';
import { View, Text, Button, StyleSheet, Alert } from 'react-native';
import NativeCalendarModule from '../js/NativeCalendarModule'; // Path to your TypeScript spec

const TurboModuleExample = () => {
  const [currentDate, setCurrentDate] = useState('Loading...');
  const [sumResult, setSumResult] = useState(0);

  const fetchDate = async () => {
    try {
      const date = await NativeCalendarModule.getSystemDate();
      setCurrentDate(date);
    } catch (e) {
      Alert.alert('Error', `Failed to get date: ${e.message}`);
      console.error(e);
      setCurrentDate('Error fetching date');
    }
  };

  const calculateSum = () => {
    try {
      const result = NativeCalendarModule.addNumbers(15, 27); // Synchronous call
      setSumResult(result);
      Alert.alert('Result', `Synchronous sum of 15 + 27 = ${result}`);
    } catch (e) {
      Alert.alert('Error', `Failed to add numbers: ${e.message}`);
      console.error(e);
    }
  };

  const showToast = () => {
    NativeCalendarModule.showNativeToast('Hello from TurboModule!');
  };

  React.useEffect(() => {
    fetchDate();
  }, []);

  return (
    <View style={styles.container}>
      <Text style={styles.title}>TurboModule Demonstrations</Text>
      <View style={styles.section}>
        <Text style={styles.label}>Current System Date (Async):</Text>
        <Text style={styles.value}>{currentDate}</Text>
        <Button title="Refresh Date" onPress={fetchDate} />
      </View>

      <View style={styles.section}>
        <Text style={styles.label}>Synchronous Addition (15 + 27):</Text>
        <Text style={styles.value}>{sumResult}</Text>
        <Button title="Calculate Sum" onPress={calculateSum} />
      </View>

      <View style={styles.section}>
        <Text style={styles.label}>Native Toast Message:</Text>
        <Button title="Show Native Toast" onPress={showToast} />
      </View>
    </View>
  );
};

const styles = StyleSheet.create({
  container: {
    flex: 1,
    justifyContent: 'center',
    alignItems: 'center',
    padding: 20,
    backgroundColor: '#e3f2fd',
  },
  title: {
    fontSize: 24,
    fontWeight: 'bold',
    marginBottom: 30,
    color: '#1a237e',
  },
  section: {
    width: '100%',
    marginBottom: 25,
    padding: 15,
    backgroundColor: '#fff',
    borderRadius: 10,
    shadowColor: '#000',
    shadowOffset: { width: 0, height: 2 },
    shadowOpacity: 0.1,
    shadowRadius: 3.84,
    elevation: 5,
    alignItems: 'center',
  },
  label: {
    fontSize: 16,
    marginBottom: 8,
    color: '#424242',
  },
  value: {
    fontSize: 18,
    fontWeight: '600',
    color: '#2196f3',
    marginBottom: 15,
  },
});

export default TurboModuleExample;

1.3 Fabric Renderer: Concurrent and Synchronous UI

Fabric is the re-implementation of React Native’s UI rendering system for asynchronous and concurrent rendering. It is largely an internal change, but affects how you think about UI updates and layout.

Practical Example: useLayoutEffect for Synchronous Layout Changes

While Fabric primarily operates under the hood, useLayoutEffect aligns with its synchronous layout capabilities: layout measurements and updates are applied before the next frame is drawn, preventing visual “flickers.”

import React, { useState, useLayoutEffect, useRef } from 'react';
import { View, Text, StyleSheet, TouchableOpacity, Dimensions } from 'react-native';

const { width: windowWidth } = Dimensions.get('window');

const FabricLayoutExample = () => {
  const [showTooltip, setShowTooltip] = useState(false);
  const [tooltipStyle, setTooltipStyle] = useState({});
  const buttonRef = useRef(null);

  const toggleTooltip = () => {
    setShowTooltip(!showTooltip);
  };

  useLayoutEffect(() => {
    if (showTooltip && buttonRef.current) {
      buttonRef.current.measure((fx, fy, width, height, px, py) => {
        // Calculate tooltip position to be above the button and centered
        const tooltipWidth = 150; // Arbitrary tooltip width
        const left = px + (width / 2) - (tooltipWidth / 2);
        const top = py - 40; // Position 40px above the button

        setTooltipStyle({
          position: 'absolute',
          left: Math.max(0, Math.min(left, windowWidth - tooltipWidth)), // Clamp to screen bounds
          top: Math.max(0, top), // Clamp to screen bounds
          width: tooltipWidth,
        });
      });
    } else {
      setTooltipStyle({}); // Reset when tooltip is hidden
    }
  }, [showTooltip]); // Recalculate when showTooltip changes

  return (
    <View style={styles.container}>
      <Text style={styles.title}>Fabric Layout Sync with useLayoutEffect</Text>
      <TouchableOpacity
        ref={buttonRef}
        style={styles.button}
        onPress={toggleTooltip}
        activeOpacity={0.7}
      >
        <Text style={styles.buttonText}>{showTooltip ? 'Hide Tooltip' : 'Show Tooltip'}</Text>
      </TouchableOpacity>

      {showTooltip && (
        <View style={[styles.tooltip, tooltipStyle]}>
          <Text style={styles.tooltipText}>This is a tooltip!</Text>
          <View style={styles.arrow} />
        </View>
      )}

      <Text style={styles.noteText}>
        `useLayoutEffect` ensures layout measurements and updates happen synchronously before screen
        paint, preventing visual glitches often seen with `useEffect` for position-critical UI.
      </Text>
    </View>
  );
};

const styles = StyleSheet.create({
  container: {
    flex: 1,
    justifyContent: 'center',
    alignItems: 'center',
    padding: 20,
    backgroundColor: '#fffde7',
  },
  title: {
    fontSize: 20,
    fontWeight: 'bold',
    marginBottom: 50,
  },
  button: {
    backgroundColor: '#ffb300',
    paddingVertical: 12,
    paddingHorizontal: 25,
    borderRadius: 8,
    marginBottom: 20,
    zIndex: 1, // Ensure button is above tooltip on z-index layer for interaction
  },
  buttonText: {
    color: '#fff',
    fontSize: 16,
    fontWeight: 'bold',
  },
  tooltip: {
    backgroundColor: '#333',
    padding: 10,
    borderRadius: 5,
    alignItems: 'center',
    justifyContent: 'center',
    zIndex: 10, // Ensure tooltip is above other content
  },
  tooltipText: {
    color: '#fff',
    fontSize: 14,
  },
  arrow: {
    position: 'absolute',
    bottom: -10, // Position below the tooltip
    width: 0,
    height: 0,
    borderLeftWidth: 10,
    borderRightWidth: 10,
    borderTopWidth: 10,
    borderStyle: 'solid',
    backgroundColor: 'transparent',
    borderLeftColor: 'transparent',
    borderRightColor: 'transparent',
    borderTopColor: '#333',
  },
  noteText: {
    fontSize: 12,
    color: '#777',
    marginTop: 80, // Give some space
    textAlign: 'center',
  },
});

export default FabricLayoutExample;

1.4 Codegen: Type Safety and Reduced Boilerplate

Codegen automatically generates type-safe native bindings from a single JavaScript/TypeScript specification, simplifying native module development.

Practical Example: Codegen’s Generated Files (Conceptual)

You don’t directly write the Codegen output, but it’s crucial to understand what it creates. When you define a TurboModule spec in TypeScript (js/NativeCalendarModule.ts from above) and run Codegen, it generates files like these:

  • iOS (Objective-C++ / C++):

    • Generated spec files (names follow your codegenConfig, e.g., NativeCalendarSpec.h and NativeCalendarSpec-generated.mm): these contain the protocol your module conforms to plus the JSI glue code.
    • A NativeCalendarModuleSpecJSI C++ class that bridges your JavaScript spec to the native TurboModule implementation; your getTurboModule method returns an instance of it.
  • Android (Java):

    • NativeCalendarModuleSpec.java: an abstract base class that your Kotlin/Java module extends, guaranteeing the method signatures match your TypeScript spec. It declares methods like getSystemDate(Promise promise) and addNumbers(double a, double b).

Visualizing the Impact: Without Codegen, you’d manually write all these native interface definitions, which are prone to errors and require tedious maintenance to keep in sync with JavaScript changes. Codegen automates this, ensuring type consistency and reducing boilerplate.

Key takeaway: The practical example for Codegen is effectively the NativeCalendarModule.ts spec coupled with the native code in Swift/Kotlin that implements the interfaces Codegen automatically generates. The “generated files” are what you get when you run the codegen script, and you then write your native code to match them.

// There is no direct "Codegen" component code to show in React Native,
// as Codegen is a build-time tool that generates native code based on your JS/TS spec.
// The effect is seen in how smoothly your JS code (from NativeCalendarModule.ts)
// interacts with native code (NativeCalendarModule.swift/kt) thanks to the generated bridges.

import React from 'react';
import { View, Text, StyleSheet } from 'react-native';

const CodegenImpact = () => {
  return (
    <View style={styles.container}>
      <Text style={styles.title}>Codegen: The Builder of Bridges</Text>
      <Text style={styles.description}>
        Codegen is a build-time tool that automates the creation of native
        interfaces (like `NativeCalendarModuleSpec.java` for Android or
        `NativeCalendarModuleSpecJSI.h` for iOS) from your TypeScript module
        spec (`js/NativeCalendarModule.ts`).
      </Text>
      <Text style={styles.example}>
        It ensures type safety between JavaScript and native code, drastically
        reducing manual boilerplate and potential errors. You implement your
        native logic (e.g., in Kotlin/Swift) according to these generated interfaces.
      </Text>
      <Text style={styles.footnote}>
        (This component visually explains Codegen's role rather than showing code you'd write for it.)
      </Text>
    </View>
  );
};

const styles = StyleSheet.create({
  container: {
    flex: 1,
    justifyContent: 'center',
    alignItems: 'center',
    padding: 20,
    backgroundColor: '#ffebee',
  },
  title: {
    fontSize: 22,
    fontWeight: 'bold',
    marginBottom: 20,
    color: '#b71c1c',
  },
  description: {
    fontSize: 16,
    textAlign: 'center',
    marginBottom: 15,
    color: '#424242',
  },
  example: {
    fontSize: 14,
    fontStyle: 'italic',
    textAlign: 'center',
    marginHorizontal: 10,
    color: '#616161',
  },
  footnote: {
    fontSize: 12,
    color: '#9e9e9e',
    marginTop: 40,
    textAlign: 'center',
  },
});

export default CodegenImpact;

1.5 Migrating to the New Architecture

Migrating to the New Architecture is a phased process, often starting with enabling it in your project’s android/gradle.properties and ios/Podfile, and then resolving dependencies.
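On Android, the switch lives in android/gradle.properties (newArchEnabled is the property name used by the React Native template):

# android/gradle.properties
# Flip this to true to build with Fabric, TurboModules, and Codegen enabled.
newArchEnabled=true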

Practical Example: Podfile Changes for New Architecture

When migrating an iOS project, you’ll update your Podfile to enable Fabric and TurboModules.

Original Podfile (Legacy Architecture):

# Podfile for React Native 0.70-
# ...
platform :ios, '12.0'
# ...
require_relative '../node_modules/react-native/scripts/react_native_pods'
# ...

Podfile with New Architecture Enabled:

# Podfile for React Native 0.7x with the New Architecture (a sketch; always
# compare against the template generated for your exact React Native version)
platform :ios, '13.4' # The New Architecture requires a newer minimum iOS version

require_relative '../node_modules/react-native/scripts/react_native_pods'
require_relative '../node_modules/@react-native-community/cli-platform-ios/native_modules'

target 'YourAppName' do
  config = use_native_modules!

  # The New Architecture (Fabric, TurboModules, Codegen) is toggled via the
  # RCT_NEW_ARCH_ENABLED environment variable when you run `pod install`:
  #   RCT_NEW_ARCH_ENABLED=1 bundle exec pod install
  use_react_native!(
    :path => config[:reactNativePath]
  )

  # Uncomment only if your dependencies require framework linkage:
  # use_frameworks! :linkage => :static

  # Your project's own pods for your app's code
  # pod 'MyCustomSwiftModule', :path => '../modules/MyCustomSwiftModule'

  post_install do |installer|
    # Applies the required build settings (including New Architecture flags)
    # to all pods; the exact signature varies slightly between RN versions.
    react_native_post_install(installer)
  end

  # When enabling the New Architecture, make sure your app's `AppDelegate.mm`
  # and other native files are updated as per the React Native Upgrade Helper.
end

Key Commands for Migration:

  1. Check Environment:
    npx react-native info
    
  2. Enable New Architecture:
    # For iOS (environment variable read during pod install):
    RCT_NEW_ARCH_ENABLED=1 pod install --project-directory=ios
    # For Android (Gradle project property; usually set once in gradle.properties):
    cd android && ./gradlew clean && ./gradlew assembleRelease -PnewArchEnabled=true
    
  3. Upgrade React Native (in the project, not globally):
    npm install react-native@latest
    
  4. Run npx react-native upgrade, then manually update the Podfile and gradle.properties based on the latest React Native documentation and the Upgrade Helper tool. (A runtime sanity check follows below.)
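The following sketch checks at runtime whether the New Architecture is active. These globals are set by the Fabric/TurboModule runtimes and are widely used for this check, but they are internals rather than a formal public API, so treat this as a debugging aid only:

// Somewhere early in your JS entry point (e.g., index.js):
const isFabricEnabled = (global as any).nativeFabricUIManager != null;
const isTurboModuleEnabled = (global as any).__turboModuleProxy != null;
console.log(`Fabric: ${isFabricEnabled}, TurboModules: ${isTurboModuleEnabled}`);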

2. Advanced State Management (XState)

XState helps manage complex application logic using state machines and statecharts, making behavior explicit, predictable, and testable.
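Because transitions are plain data, a machine's behavior can be exercised and asserted without rendering any UI. A minimal sketch using the XState v5 actor API and the toggleMachine defined in 2.1 below:

import { createActor } from 'xstate';
import toggleMachine from './src/machines/toggleMachine'; // defined in section 2.1

const actor = createActor(toggleMachine).start();
console.log(actor.getSnapshot().value); // 'inactive'
actor.send({ type: 'TOGGLE' });
console.log(actor.getSnapshot().value); // 'active'
actor.stop();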

2.1 Introduction to State Machines and Statecharts

Practical Example: Simple Toggle Switch FSM

A basic FSM to manage the state of a toggle switch (on or off).

// src/machines/toggleMachine.js
import { createMachine } from 'xstate';

const toggleMachine = createMachine({
  id: 'toggle',
  initial: 'inactive', // The initial state when the machine starts
  states: {
    inactive: {
      on: { TOGGLE: 'active' }, // If in 'inactive' state, on 'TOGGLE' event, go to 'active'
    },
    active: {
      on: { TOGGLE: 'inactive' }, // If in 'active' state, on 'TOGGLE' event, go to 'inactive'
    },
  },
});

export default toggleMachine;
// src/components/ToggleSwitch.js
import React from 'react';
import { View, Text, Switch, StyleSheet } from 'react-native';
import { useMachine } from '@xstate/react';
import toggleMachine from '../machines/toggleMachine';

const ToggleSwitch = () => {
  const [current, send] = useMachine(toggleMachine);

  return (
    <View style={styles.container}>
      <Text style={styles.label}>Switch Status: </Text>
      <Text style={[styles.statusText, current.matches('active') ? styles.active : styles.inactive]}>
        {current.value.toUpperCase()}
      </Text>
      <Switch
        onValueChange={() => send({ type: 'TOGGLE' })} // Send the TOGGLE event to the machine
        value={current.matches('active')}
        trackColor={{ false: "#767577", true: "#81b0ff" }}
        thumbColor={current.matches('active') ? "#f5dd4b" : "#f4f3f4"}
        style={styles.switch}
      />
    </View>
  );
};

const styles = StyleSheet.create({
  container: {
    flexDirection: 'row',
    alignItems: 'center',
    marginBottom: 20,
    padding: 15,
    backgroundColor: '#fff',
    borderRadius: 8,
    shadowColor: '#000',
    shadowOffset: { width: 0, height: 1 },
    shadowOpacity: 0.2,
    shadowRadius: 1.41,
    elevation: 2,
  },
  label: {
    fontSize: 18,
    marginRight: 10,
  },
  statusText: {
    fontSize: 18,
    fontWeight: 'bold',
    marginRight: 15,
  },
  active: {
    color: 'green',
  },
  inactive: {
    color: 'red',
  },
  switch: {
    transform: [{ scaleX: 1.2 }, { scaleY: 1.2 }], // Make switch larger
  },
});

export default ToggleSwitch;

2.2 Why XState for React Native?

XState brings predictability and robustness to complex logic.

Practical Example: Data Fetching with XState - Handling Loading, Success, Error

This state machine handles a data fetching process, explicitly managing idle, loading, success, and error states.

// src/machines/dataFetchMachine.js
import { createMachine, assign, fromPromise } from 'xstate';

const dataFetchMachine = createMachine({
  id: 'dataFetcher',
  initial: 'idle',
  context: {
    data: undefined,
    errorMessage: undefined,
  },
  states: {
    idle: {
      on: { FETCH: 'loading' },
    },
    loading: {
      invoke: {
        id: 'fetchData',
        // XState v5 wraps promise-based services with fromPromise
        src: fromPromise(async () => {
          // Simulate an API call with a 70% chance of success
          return new Promise((resolve, reject) => {
            setTimeout(() => {
              if (Math.random() > 0.3) {
                resolve({ message: "Data fetched successfully!", timestamp: new Date().toISOString() });
              } else {
                reject("Failed to fetch data from the server.");
              }
            }, 2000);
          });
        }),
        onDone: {
          target: 'success',
          actions: assign({
            data: ({ event }) => event.output,
            errorMessage: undefined,
          }),
        },
        onError: {
          target: 'error',
          actions: assign({
            data: undefined,
            errorMessage: ({ event }) => event.error,
          }),
        },
      },
      on: {
        CANCEL: 'idle', // Leaving 'loading' also stops the invoked promise actor
      },
    },
    success: {
      on: { REFETCH: 'loading' }, // Can refetch from the success state
    },
    error: {
      on: { RETRY: 'loading' }, // Can retry from the error state
    },
  },
});

export default dataFetchMachine;
// src/components/DataFetcher.js
import React from 'react';
import { View, Text, Button, StyleSheet, ActivityIndicator } from 'react-native';
import { useMachine } from '@xstate/react';
import dataFetchMachine from '../machines/dataFetchMachine';

const DataFetcher = () => {
  const [current, send] = useMachine(dataFetchMachine);
  const { data, errorMessage } = current.context;

  return (
    <View style={styles.container}>
      <Text style={styles.title}>Data Fetcher with XState</Text>

      {current.matches('idle') && (
        <Button title="Fetch Data" onPress={() => send('FETCH')} />
      )}

      {current.matches('loading') && (
        <View style={styles.loadingContainer}>
          <ActivityIndicator size="large" color="#0000ff" />
          <Text style={styles.loadingText}>Fetching data...</Text>
          <Button title="Cancel Fetch" onPress={() => send('CANCEL')} color="orange" />
        </View>
      )}

      {current.matches('success') && (
        <View style={styles.resultContainer}>
          <Text style={styles.successText}>{data.message}</Text>
          <Text style={styles.timestamp}>Last fetched: {data.timestamp}</Text>
          <Button title="Re-fetch Data" onPress={() => send('REFATCH')} />
        </View>
      )}

      {current.matches('error') && (
        <View style={styles.errorContainer}>
          <Text style={styles.errorText}>Error: {errorMessage}</Text>
          <Button title="Retry Fetch" onPress={() => send('RETRY')} color="red" />
        </View>
      )}

      <Text style={styles.currentState}>Current State: {current.value}</Text>
    </View>
  );
};

const styles = StyleSheet.create({
  container: {
    flex: 1,
    justifyContent: 'center',
    alignItems: 'center',
    padding: 20,
    backgroundColor: '#f8f8f8',
  },
  title: {
    fontSize: 24,
    fontWeight: 'bold',
    marginBottom: 30,
  },
  loadingContainer: {
    alignItems: 'center',
    marginBottom: 20,
  },
  loadingText: {
    marginTop: 10,
    fontSize: 16,
    color: '#555',
  },
  resultContainer: {
    alignItems: 'center',
    marginBottom: 20,
  },
  successText: {
    fontSize: 18,
    fontWeight: 'bold',
    color: 'green',
    marginBottom: 10,
  },
  timestamp: {
    fontSize: 14,
    color: '#777',
    marginBottom: 15,
  },
  errorContainer: {
    alignItems: 'center',
    marginBottom: 20,
  },
  errorText: {
    fontSize: 18,
    fontWeight: 'bold',
    color: 'red',
    marginBottom: 10,
  },
  currentState: {
    marginTop: 30,
    fontSize: 14,
    color: '#999',
  },
});

export default DataFetcher;

2.3 Core XState Concepts & Hooks

Practical Example: Context, Actions (assign), Guards and Nested States

Let’s expand the toggleMachine to include a count in its context, and only allow toggling if a “password” is correct. This introduces context, assign actions, and guards.

// src/machines/advancedToggleMachine.js
import { createMachine, assign, fromPromise } from 'xstate';

const checkPassword = async ({ passwordGuess }) => {
  return new Promise((resolve, reject) => {
    setTimeout(() => {
      if (passwordGuess === 'secret123') {
        resolve(true);
      } else {
        reject(new Error('Incorrect password.'));
      }
    }, 500);
  });
};

const advancedToggleMachine = createMachine({
  id: 'advancedToggle',
  initial: 'inactive',
  context: {
    toggleCount: 0,
    passwordGuess: '',
    authError: undefined,
  },
  states: {
    inactive: {
      on: {
        TOGGLE: {
          target: 'authenticating',
          actions: assign({ passwordGuess: ({ event }) => event.password }),
        },
      },
    },
    active: {
      on: {
        TOGGLE: {
          target: 'authenticating',
          actions: assign({ passwordGuess: ({ event }) => event.password }),
        },
      },
    },
    authenticating: {
      invoke: {
        id: 'authenticatePassword',
        src: fromPromise(({ input }) => checkPassword(input)), // fromPromise passes { input } to its callback
        input: ({ context }) => ({ passwordGuess: context.passwordGuess }),
        onDone: [
          {
            target: 'active',
            guard: ({ context, event }) => context.toggleCount % 2 === 0, // Example guard, toggleCount is even
            actions: assign({
              toggleCount: ({ context }) => context.toggleCount + 1,
              authError: undefined,
            }),
          },
          {
            target: 'inactive',
            guard: ({ context, event }) => context.toggleCount % 2 !== 0, // Example guard, toggleCount is odd
            actions: assign({
              toggleCount: ({ context }) => context.toggleCount + 1,
              authError: undefined,
            }),
          },
        ],
        onError: {
          target: 'inactive', // Go back to inactive on auth error
          actions: assign({
            authError: ({ event }) => event.error.message, // Capture error message
          }),
        },
      },
      on: {
        CANCEL_AUTH: {
          target: 'inactive',
          actions: assign({
            passwordGuess: '',
            authError: undefined,
          }),
        },
      },
    },
  },
});

export default advancedToggleMachine;
// src/components/AdvancedToggleSwitch.js
import React, { useState, useEffect } from 'react';
import { View, Text, Switch, StyleSheet, Button, TextInput, Alert, ActivityIndicator } from 'react-native';
import { useMachine } from '@xstate/react';
import advancedToggleMachine from '../machines/advancedToggleMachine';

const AdvancedToggleSwitch = () => {
  const [current, send] = useMachine(advancedToggleMachine);
  const { toggleCount, authError } = current.context;
  const [passwordInput, setPasswordInput] = useState('');

  const isActive = current.matches('active');
  const isAuthenticating = current.matches('authenticating');

  useEffect(() => {
    if (authError) {
      Alert.alert('Authentication Failed', authError);
      // Note: the machine defines no RESET_ERROR handler, so this event is
      // ignored; add a transition for it if you want to clear authError.
      send({ type: 'RESET_ERROR' });
    }
  }, [authError, send]);

  const handleToggle = () => {
    send({ type: 'TOGGLE', password: passwordInput });
    setPasswordInput(''); // Clear password after attempt
  };

  return (
    <View style={styles.container}>
      <Text style={styles.title}>Advanced Toggle Switch (XState)</Text>

      <View style={styles.statusSection}>
        <Text style={styles.label}>Switch Status: </Text>
        <Text style={[styles.statusText, isActive ? styles.active : styles.inactive]}>
          {isActive ? 'ACTIVE' : 'INACTIVE'}
        </Text>
      </View>

      <View style={styles.countSection}>
        <Text style={styles.label}>Toggle Count: </Text>
        <Text style={styles.countText}>{toggleCount}</Text>
      </View>

      <TextInput
        style={styles.input}
        placeholder="Enter password to toggle"
        secureTextEntry
        value={passwordInput}
        onChangeText={setPasswordInput}
        editable={!isAuthenticating}
      />

      {isAuthenticating ? (
        <View style={styles.authenticatingContainer}>
          <ActivityIndicator size="small" color="#0000ff" />
          <Text style={styles.authenticatingText}>Authenticating...</Text>
          <Button title="Cancel" onPress={() => send('CANCEL_AUTH')} color="orange" />
        </View>
      ) : (
        <Button
          title={isActive ? "Deactivate" : "Activate"}
          onPress={handleToggle}
          disabled={passwordInput.length === 0}
          color={isActive ? "darkred" : "darkgreen"}
        />
      )}

      <Text style={styles.currentState}>Current XState: {current.value}</Text>
    </View>
  );
};

const styles = StyleSheet.create({
  container: {
    flex: 1,
    justifyContent: 'center',
    alignItems: 'center',
    padding: 20,
    backgroundColor: '#e0f7fa',
  },
  title: {
    fontSize: 22,
    fontWeight: 'bold',
    marginBottom: 30,
    color: '#006064',
  },
  statusSection: {
    flexDirection: 'row',
    alignItems: 'center',
    marginBottom: 10,
  },
  label: {
    fontSize: 18,
  },
  statusText: {
    fontSize: 18,
    fontWeight: 'bold',
  },
  active: {
    color: 'green',
  },
  inactive: {
    color: 'red',
  },
  countSection: {
    flexDirection: 'row',
    alignItems: 'center',
    marginBottom: 20,
  },
  countText: {
    fontSize: 18,
    fontWeight: 'bold',
    color: '#01579b',
  },
  input: {
    width: '90%',
    padding: 10,
    borderWidth: 1,
    borderColor: '#ccc',
    borderRadius: 5,
    marginBottom: 20,
    backgroundColor: '#fff',
  },
  authenticatingContainer: {
    alignItems: 'center',
    marginBottom: 20,
  },
  authenticatingText: {
    marginTop: 10,
    marginBottom: 10,
    fontSize: 16,
    color: '#555',
  },
  currentState: {
    marginTop: 30,
    fontSize: 14,
    color: '#999',
  },
});

export default AdvancedToggleSwitch;

2.4 Persistence and Rehydration with XState

Preserving state across app restarts or unmounts is vital.

Practical Example: Persisting DataFetcher state to AsyncStorage

We’ll modify the DataFetcher to save its state to AsyncStorage when it changes and load it on app launch.

// src/machines/dataFetchMachine.js (no changes, machine definition is pure)
// ... (same as before)
// src/components/PersistedDataFetcher.js
import React, { useState, useEffect } from 'react';
import { View, Text, Button, StyleSheet, ActivityIndicator, Alert } from 'react-native';
import { useMachine } from '@xstate/react';
import AsyncStorage from '@react-native-async-storage/async-storage';
import dataFetchMachine from '../machines/dataFetchMachine'; // Your existing machine

const STORAGE_KEY = '@dataFetcherMachineState';

// The restored snapshot must be available *before* useMachine first runs, so
// the component is split: an outer loader that reads AsyncStorage, and an
// inner fetcher that starts the machine from the restored snapshot.
const PersistedDataFetcher = () => {
  const [restoredSnapshot, setRestoredSnapshot] = useState(undefined);
  const [isReady, setIsReady] = useState(false); // Prevent starting the machine before load

  // 1. Load the persisted snapshot from AsyncStorage on mount
  useEffect(() => {
    const loadPersistedState = async () => {
      try {
        const storedState = await AsyncStorage.getItem(STORAGE_KEY);
        if (storedState) {
          setRestoredSnapshot(JSON.parse(storedState));
        }
      } catch (error) {
        console.error('Failed to load XState snapshot from AsyncStorage:', error);
      } finally {
        setIsReady(true);
      }
    };
    loadPersistedState();
  }, []);

  if (!isReady) {
    return (
      <View style={styles.container}>
        <ActivityIndicator size="large" color="#0000ff" />
        <Text style={styles.loadingText}>Loading app state...</Text>
      </View>
    );
  }

  return <FetcherInner snapshot={restoredSnapshot} />;
};

const FetcherInner = ({ snapshot }) => {
  // XState v5: a previously persisted snapshot is restored via the `snapshot` option
  const [current, send, actorRef] = useMachine(dataFetchMachine, { snapshot });
  const { data, errorMessage } = current.context;

  // 2. Subscribe to machine transitions and persist the latest snapshot
  useEffect(() => {
    const subscription = actorRef.subscribe(async () => {
      try {
        // getPersistedSnapshot() returns a JSON-serializable snapshot
        const persisted = actorRef.getPersistedSnapshot();
        await AsyncStorage.setItem(STORAGE_KEY, JSON.stringify(persisted));
      } catch (error) {
        console.error('Failed to save XState snapshot to AsyncStorage:', error);
      }
    });

    return () => subscription.unsubscribe(); // Clean up subscription
  }, [actorRef]);

  // --- Render logic from DataFetcher.js, slightly adjusted ---
  const handleFetch = () => send({ type: 'FETCH' });
  const handleCancel = () => send({ type: 'CANCEL' });
  const handleRefetch = () => send({ type: 'REFETCH' });
  const handleRetry = () => send({ type: 'RETRY' });

  return (
    <View style={styles.container}>
      <Text style={styles.title}>Persisted Data Fetcher (XState)</Text>

      {current.matches('idle') && (
        <Button title="Fetch Data" onPress={handleFetch} />
      )}

      {current.matches('loading') && (
        <View style={styles.loadingContainer}>
          <ActivityIndicator size="large" color="#0000ff" />
          <Text style={styles.loadingText}>Fetching data...</Text>
          <Button title="Cancel Fetch" onPress={handleCancel} color="orange" />
        </View>
      )}

      {current.matches('success') && (
        <View style={styles.resultContainer}>
          <Text style={styles.successText}>{data.message}</Text>
          <Text style={styles.timestamp}>Last fetched: {data.timestamp}</Text>
          <Button title="Re-fetch Data" onPress={handleRefetch} />
        </View>
      )}

      {current.matches('error') && (
        <View style={styles.errorContainer}>
          <Text style={styles.errorText}>Error: {errorMessage}</Text>
          <Button title="Retry Fetch" onPress={handleRetry} color="red" />
        </View>
      )}

      <Text style={styles.currentState}>Current State: {current.value}</Text>
      <Text style={styles.noteText}>(State persists across app reloads)</Text>
    </View>
  );
};

const styles = StyleSheet.create({
  container: {
    flex: 1,
    justifyContent: 'center',
    alignItems: 'center',
    padding: 20,
    backgroundColor: '#e8f5e9',
  },
  title: {
    fontSize: 24,
    fontWeight: 'bold',
    marginBottom: 30,
    color: '#2e7d32',
  },
  loadingContainer: {
    alignItems: 'center',
    marginBottom: 20,
  },
  loadingText: {
    marginTop: 10,
    fontSize: 16,
    color: '#555',
  },
  resultContainer: {
    alignItems: 'center',
    marginBottom: 20,
  },
  successText: {
    fontSize: 18,
    fontWeight: 'bold',
    color: 'green',
    marginBottom: 10,
  },
  timestamp: {
    fontSize: 14,
    color: '#777',
    marginBottom: 15,
  },
  errorContainer: {
    alignItems: 'center',
    marginBottom: 20,
  },
  errorText: {
    fontSize: 18,
    fontWeight: 'bold',
    color: 'red',
    marginBottom: 10,
  },
  currentState: {
    marginTop: 30,
    fontSize: 14,
    color: '#999',
  },
  noteText: {
    fontSize: 12,
    color: '#666',
    marginTop: 10,
    textAlign: 'center',
  },
});

export default PersistedDataFetcher;

2.5 Actor Model and Global State with XState

For global state, createActorContext makes an XState machine accessible throughout your component tree.

Practical Example: Global Theme and User Preferences with createActorContext

Let’s manage a global theme and potentially user settings using an XState machine wrapped in a context.

// src/machines/appMachine.js
import { createMachine, assign, fromPromise } from 'xstate'; // v5 exports fromPromise from the main package

const saveSettingsToAPI = async ({ theme, notificationsEnabled }) => {
  return new Promise((resolve) => {
    setTimeout(() => {
      console.log(`Saving settings: Theme=${theme}, Notifications=${notificationsEnabled}`);
      resolve({ success: true });
    }, 1000);
  });
};

const appMachine = createMachine({
  id: 'app',
  initial: 'initializing',
  context: {
    theme: 'light', // 'light' | 'dark'
    notificationsEnabled: true,
    isLoadingSettings: false,
    settingsSaveError: undefined,
  },
  states: {
    initializing: {
      entry: assign({ isLoadingSettings: true }),
      exit: assign({ isLoadingSettings: false }),
      invoke: {
        id: 'loadSettings',
        src: fromPromise(async () => {
          // Simulate loading from AsyncStorage or API
          return new Promise((resolve) => {
            setTimeout(() => {
              resolve({ theme: 'dark', notificationsEnabled: false });
            }, 1000);
          });
        }),
        onDone: {
          target: 'ready',
          actions: assign({
            theme: ({ event }) => event.output.theme,
            notificationsEnabled: ({ event }) => event.output.notificationsEnabled,
          }),
        },
        onError: {
          target: 'ready', // Go to ready even on error, with default settings
          actions: ({ context }) => {
            console.error('Failed to load initial settings, using defaults.', context);
          },
        },
      },
    },
    ready: {
      on: {
        TOGGLE_THEME: {
          actions: assign({
            theme: ({ context }) => (context.theme === 'light' ? 'dark' : 'light'),
          }),
        },
        TOGGLE_NOTIFICATIONS: {
          actions: assign({
            notificationsEnabled: ({ context }) => !context.notificationsEnabled,
          }),
        },
        SAVE_SETTINGS: 'savingSettings',
      },
    },
    savingSettings: {
      entry: assign({ isLoadingSettings: true, settingsSaveError: undefined }),
      exit: assign({ isLoadingSettings: false }),
      invoke: {
        id: 'saveSettings',
        src: fromPromise(({ input }) => saveSettingsToAPI(input)), // fromPromise hands the invoke input to its callback
        input: ({ context }) => ({
          theme: context.theme,
          notificationsEnabled: context.notificationsEnabled,
        }),
        onDone: 'ready',
        onError: {
          target: 'ready',
          actions: assign({
            settingsSaveError: ({ event }) => event.error.message || 'Failed to save settings.',
          }),
        },
      },
    },
  },
});

export default appMachine;
// src/contexts/AppContext.js
import { createActorContext } from '@xstate/react';
import appMachine from '../machines/appMachine';

export const AppContext = createActorContext(appMachine);

// Use this provider at the root of your application
// import { AppContext } from './contexts/AppContext';
// const App = () => (
//   <AppContext.Provider>
//     {/* Your navigation, screens */}
//   </AppContext.Provider>
// );
// src/screens/SettingsScreen.js (Consumer component)
import React from 'react';
import { View, Text, Switch, Button, StyleSheet, ActivityIndicator, Alert } from 'react-native';
import { AppContext } from '../contexts/AppContext'; // Import your context

const SettingsScreen = () => {
  const { send } = AppContext.useActorRef(); // Get the send function from the global actor
  // AppContext.useSelector re-renders this component only when the selected value changes
  const theme = AppContext.useSelector((state) => state.context.theme);
  const notificationsEnabled = AppContext.useSelector((state) => state.context.notificationsEnabled);
  const isLoadingSettings = AppContext.useSelector((state) => state.context.isLoadingSettings);
  const settingsSaveError = AppContext.useSelector((state) => state.context.settingsSaveError);

  React.useEffect(() => {
    if (settingsSaveError) {
      Alert.alert('Save Error', settingsSaveError);
    }
  }, [settingsSaveError]);

  const handleToggleTheme = () => {
    send({ type: 'TOGGLE_THEME' });
  };

  const handleToggleNotifications = () => {
    send({ type: 'TOGGLE_NOTIFICATIONS' });
  };

  const handleSaveSettings = () => {
    send({ type: 'SAVE_SETTINGS' });
  };

  if (isLoadingSettings) {
    return (
      <View style={[styles.container, styles.loadingOverlay]}>
        <ActivityIndicator size="large" color="#0000ff" />
        <Text style={styles.loadingText}>Loading settings...</Text>
      </View>
    );
  }

  return (
    <View style={[styles.container, theme === 'dark' ? styles.darkTheme : styles.lightTheme]}>
      <Text style={[styles.title, theme === 'dark' ? styles.darkText : styles.lightText]}>App Settings</Text>

      <View style={styles.settingItem}>
        <Text style={[styles.settingLabel, theme === 'dark' ? styles.darkText : styles.lightText]}>Dark Mode</Text>
        <Switch
          value={theme === 'dark'}
          onValueChange={handleToggleTheme}
          trackColor={{ false: "#767577", true: "#81b0ff" }}
          thumbColor={theme === 'dark' ? "#f5dd4b" : "#f4f3f4"}
        />
      </View>

      <View style={styles.settingItem}>
        <Text style={[styles.settingLabel, theme === 'dark' ? styles.darkText : styles.lightText]}>Notifications</Text>
        <Switch
          value={notificationsEnabled}
          onValueChange={handleToggleNotifications}
          trackColor={{ false: "#767577", true: "#81b0ff" }}
          thumbColor={notificationsEnabled ? "#f5dd4b" : "#f4f3f4"}
        />
      </View>

      <Button
        title={isLoadingSettings ? "Saving..." : "Save Settings"}
        onPress={handleSaveSettings}
        disabled={isLoadingSettings}
        color="#28a745"
      />
      {isLoadingSettings && <ActivityIndicator size="small" color="#0000ff" style={styles.spinner} />}
    </View>
  );
};

const styles = StyleSheet.create({
  container: {
    flex: 1,
    padding: 20,
    alignItems: 'center',
    justifyContent: 'center',
  },
  loadingOverlay: {
    justifyContent: 'center',
    alignItems: 'center',
  },
  loadingText: {
    marginTop: 10,
    fontSize: 16,
    color: '#555',
  },
  title: {
    fontSize: 26,
    fontWeight: 'bold',
    marginBottom: 40,
  },
  settingItem: {
    flexDirection: 'row',
    justifyContent: 'space-between',
    alignItems: 'center',
    width: '100%',
    paddingVertical: 15,
    borderBottomWidth: 1,
    borderBottomColor: '#eee',
    marginBottom: 10,
  },
  settingLabel: {
    fontSize: 18,
  },
  spinner: {
    marginTop: 10,
  },
  // Theme specific styles
  lightTheme: {
    backgroundColor: '#ffffff',
  },
  darkTheme: {
    backgroundColor: '#333333',
  },
  lightText: {
    color: '#333333',
  },
  darkText: {
    color: '#ffffff',
  },
});

export default SettingsScreen;

3. Performance Benchmarking and Profiling

Achieving and maintaining peak performance requires continuous monitoring and deep dives.

3.1 Understanding Frame Rates and Threads

Practical Example: Monitoring FPS with React Native Performance Monitor

React Native’s built-in performance monitor is your first line of defense.

  1. Open Developer Menu: In your emulator or on a physical device in development mode:
    • iOS Simulator: Cmd + D
    • Android Emulator: Cmd + M or Ctrl + M
    • Physical Device: Shake the device.
  2. Enable Performance Monitor: From the developer menu, select “Show Performance Monitor”.

You will see an overlay displaying:

  • JS FPS: Frames per second processed by the JavaScript thread. Sustained drops below 60 (or your device's refresh rate) mean your JS logic or rendering work is struggling to keep up.
  • UI FPS: Frames per second rendered by the native UI thread. Drops here mean native UI work (layout, drawing, animations) is struggling.
  • JS Thread Memory: Memory usage of the JavaScript thread.

Interpretation:

  • If JS FPS is low but UI FPS is high (e.g., during complex calculations not related to UI updates), the JS thread is busy, but the UI is still responsive. This indicates a need to offload heavy JS tasks.
  • If UI FPS is low, the native UI thread is blocked. This often happens with expensive layout computations, over-draw, or complex native animations.
  • If both are low, you have bottlenecks affecting the entire application.
// There's no direct React Native code to interact with the Performance Monitor overlay itself,
// as it's an internal development tool.
// However, you can write components that stress the system to observe its behavior.

import React, { useState } from 'react';
import { View, Text, StyleSheet, Button, ScrollView, ActivityIndicator } from 'react-native';

// A component that simulates heavy JS computation
const HeavyJSComponent = () => {
  const [result, setResult] = useState(0);
  const [isCalculating, setIsCalculating] = useState(false);

  const performHeavyCalculation = () => {
    setIsCalculating(true);
    // Defer the blocking loop by one tick so the "Calculating..." state can
    // paint first; otherwise the synchronous loop runs before React re-renders.
    setTimeout(() => {
      let sum = 0;
      // Simulate a blocking JS operation (heavy, but finishes within seconds)
      for (let i = 0; i < 500_000_000; i++) {
        sum += i;
      }
      setResult(sum);
      setIsCalculating(false);
    }, 50);
  };

  return (
    <View style={styles.card}>
      <Text style={styles.cardTitle}>Heavy JS Operation</Text>
      <Text>Result: {result}</Text>
      <Button
        title={isCalculating ? "Calculating..." : "Run Heavy JS"}
        onPress={performHeavyCalculation}
        disabled={isCalculating}
      />
      {isCalculating && <ActivityIndicator size="small" color="#0000ff" />}
      <Text style={styles.cardNote}>
        (Observe JS FPS drop in Performance Monitor)
      </Text>
    </View>
  );
};

// A component that simulates complex UI rendering (e.g., many views)
const HeavyUIComponent = () => {
  const [numItems, setNumItems] = useState(100);

  const renderManyItems = () => {
    const items = [];
    for (let i = 0; i < numItems; i++) {
      items.push(
        <View key={i} style={styles.uiItem}>
          <Text style={styles.uiItemText}>{`Item ${i + 1}`}</Text>
        </View>
      );
    }
    return items;
  };

  return (
    <View style={styles.card}>
      <Text style={styles.cardTitle}>Heavy UI Rendering</Text>
      <Button title={`Add 50 Items (current: ${numItems})`} onPress={() => setNumItems(numItems + 50)} />
      <ScrollView style={styles.scrollView}>
        {renderManyItems()}
      </ScrollView>
      <Text style={styles.cardNote}>
        (Observe UI FPS drop during scroll with many elements)
      </Text>
    </View>
  );
};

const PerformanceMonitorDemo = () => {
  return (
    <View style={styles.container}>
      <Text style={styles.header}>Performance Monitor Demo</Text>
      <Text style={styles.instruction}>
        Enable "Show Performance Monitor" from the Developer Menu
        (Cmd+D/Cmd+M/Shake device) to see the effects below.
      </Text>
      <HeavyJSComponent />
      <HeavyUIComponent />
    </View>
  );
};

const styles = StyleSheet.create({
  container: {
    flex: 1,
    padding: 20,
    backgroundColor: '#fce4ec',
    alignItems: 'center',
  },
  header: {
    fontSize: 24,
    fontWeight: 'bold',
    marginBottom: 10,
    color: '#880e4f',
  },
  instruction: {
    fontSize: 14,
    textAlign: 'center',
    marginBottom: 30,
    color: '#4a148c',
  },
  card: {
    backgroundColor: '#fff',
    borderRadius: 8,
    padding: 15,
    marginBottom: 20,
    width: '90%',
    alignItems: 'center',
    shadowColor: '#000',
    shadowOffset: { width: 0, height: 2 },
    shadowOpacity: 0.1,
    shadowRadius: 3.84,
    elevation: 5,
  },
  cardTitle: {
    fontSize: 18,
    fontWeight: 'bold',
    marginBottom: 10,
    color: '#c2185b',
  },
  cardNote: {
    fontSize: 12,
    marginTop: 10,
    color: '#7b1fa2',
    textAlign: 'center',
  },
  uiItem: {
    backgroundColor: '#e3f2fd',
    padding: 8,
    marginVertical: 4,
    borderRadius: 5,
    width: '100%',
    alignItems: 'center',
  },
  uiItemText: {
    fontSize: 12,
    color: '#1565c0',
  },
  scrollView: {
    height: 150,
    width: '100%',
    marginTop: 10,
  },
});

export default PerformanceMonitorDemo;

3.2 Profiling Tools

Practical Example: Using React DevTools Profiler to Identify Re-renders

The React DevTools Profiler (now integrated into React Native DevTools with Hermes) helps visualize component rendering.

  1. Open React Native DevTools: With your app running, open React Native DevTools, e.g., by pressing j in the Metro terminal or choosing “Open DevTools” from the Developer Menu.
  2. Go to “Profiler” Tab: Click on the “Profiler” tab.
  3. Start Recording: Click the record button. Interact with your app (e.g., type in an input, click a button).
  4. Stop Recording: Click the record button again.
  5. Analyze Results:
    • Flame Graph/Ranked Chart: Identify components that render frequently or take a long time to render.
    • Highlight Updates: In the “Components” tab, click the “Settings” gear icon and enable “Highlight updates when components render.” This will visually flash components in your app that are re-rendering.
    • Why did this render? Enable “Record why each component rendered while profiling” in the Profiler settings; for each commit, the profiler then lists which props, state, or hooks changed for the selected component.

Code Example (to observe in Profiler):

import React, { useState, memo, useCallback } from 'react'; // useCallback is used by the commented-out variant below
import { View, Text, Button, StyleSheet, TextInput } from 'react-native';

// This component will re-render if its props change.
const ExpensiveChild = memo(({ count, onIncrement }) => {
  console.log('ExpensiveChild rendered'); // Monitor this in console or profiler
  return (
    <View style={styles.childCard}>
      <Text style={styles.childText}>Child Count: {count}</Text>
      <Button title="Increment Parent Count" onPress={onIncrement} />
    </View>
  );
});

// Another child component that doesn't need to re-render when ExpensiveChild's props change
const StaticChild = memo(() => {
  console.log('StaticChild rendered'); // This should only render once
  return (
    <View style={styles.childCard}>
      <Text style={styles.childText}>I am a static child.</Text>
    </View>
  );
});

const ReactProfilerDemo = () => {
  const [parentCount, setParentCount] = useState(0);
  const [inputValue, setInputValue] = useState('');

  // This function is re-created on every render of ReactProfilerDemo,
  // potentially causing ExpensiveChild to re-render if not memoized.
  const handleIncrement = () => {
    setParentCount(prev => prev + 1);
  };

  // Memoize the function to prevent unnecessary re-renders of child components that depend on it.
  // const memoizedHandleIncrement = useCallback(() => {
  //   setParentCount(prev => prev + 1);
  // }, []); // Dependencies for useCallback

  return (
    <View style={styles.container}>
      <Text style={styles.title}>React Profiler Demo</Text>
      <Text style={styles.instruction}>
        Open React Native DevTools, go to 'Profiler', and enable 'Highlight updates'.
        Observe re-renders while typing vs. clicking the button.
      </Text>

      <View style={styles.parentCard}>
        <Text style={styles.parentText}>Parent Count: {parentCount}</Text>
        <TextInput
          style={styles.input}
          placeholder="Type here..."
          value={inputValue}
          onChangeText={setInputValue}
        />
        <Text>Input: {inputValue}</Text>
      </View>

      {/* When inputValue changes, Parent will re-render.
          If handleIncrement is not memoized with useCallback,
          ExpensiveChild (even with memo) will re-render because its `onIncrement` prop changes.
          StaticChild (with memo) should NOT re-render. */}
      <ExpensiveChild count={parentCount} onIncrement={handleIncrement} />
      <StaticChild />

      <Text style={styles.note}>
        Try uncommenting `useCallback` for `handleIncrement` and observe the difference in `ExpensiveChild` re-renders.
      </Text>
    </View>
  );
};

const styles = StyleSheet.create({
  container: {
    flex: 1,
    padding: 20,
    backgroundColor: '#fff3e0',
    alignItems: 'center',
  },
  title: {
    fontSize: 22,
    fontWeight: 'bold',
    marginBottom: 10,
    color: '#e65100',
  },
  instruction: {
    fontSize: 14,
    textAlign: 'center',
    marginBottom: 20,
    color: '#bf360c',
  },
  parentCard: {
    backgroundColor: '#ffe0b2',
    borderRadius: 8,
    padding: 15,
    marginBottom: 20,
    width: '90%',
    alignItems: 'center',
    shadowColor: '#000',
    shadowOffset: { width: 0, height: 2 },
    shadowOpacity: 0.1,
    shadowRadius: 3.84,
    elevation: 5,
  },
  parentText: {
    fontSize: 18,
    fontWeight: 'bold',
    marginBottom: 10,
    color: '#ff8f00',
  },
  input: {
    width: '80%',
    padding: 8,
    borderWidth: 1,
    borderColor: '#ccc',
    borderRadius: 5,
    marginBottom: 10,
  },
  childCard: {
    backgroundColor: '#f1f8e9',
    borderRadius: 8,
    padding: 10,
    marginVertical: 10,
    width: '80%',
    alignItems: 'center',
    shadowColor: '#000',
    shadowOffset: { width: 0, height: 1 },
    shadowOpacity: 0.08,
    shadowRadius: 2.22,
    elevation: 3,
  },
  childText: {
    fontSize: 16,
    color: '#33691e',
    marginBottom: 5,
  },
  note: {
    fontSize: 12,
    color: '#a1887f',
    marginTop: 20,
    textAlign: 'center',
  }
});

export default ReactProfilerDemo;

3.3 Benchmarking Methodologies

Practical Example: Simple Manual Performance Measurement

For specific interactions, you can use performance.now() to measure execution time.

import React from 'react';
import { View, Text, Button, StyleSheet, Alert } from 'react-native';

const simulateHeavyTask = (durationMs) => {
  const start = performance.now();
  while (performance.now() - start < durationMs) {
    // Blocking the thread
  }
};

const BenchmarkExample = () => {
  const runBenchmark = () => {
    Alert.alert('Benchmarking', 'Running a 2-second blocking task...');
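    // Note: the JS thread blocks immediately after this call, so the alert may
    // not actually appear until the task has finished.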
    const startTime = performance.now(); // Start timer
    simulateHeavyTask(2000); // Simulate 2-second block
    const endTime = performance.now(); // End timer
    const duration = endTime - startTime;
    Alert.alert(
      'Benchmark Complete',
      `Task took ${duration.toFixed(2)} ms. (UI was blocked)`
    );
  };

  return (
    <View style={styles.container}>
      <Text style={styles.title}>Manual Performance Benchmark</Text>
      <Text style={styles.instruction}>
        Click the button to run a blocking JavaScript task.
        Observe the UI freeze during the task.
      </Text>
      <Button title="Run Blocking Task (2s)" onPress={runBenchmark} />
      <Text style={styles.note}>
        This demonstrates a blocking task. In real apps, such work should be
        offloaded, e.g., to native modules, a JS worker library, or chunked async
        processing (see the sketch after this example).
      </Text>
    </View>
  );
};

const styles = StyleSheet.create({
  container: {
    flex: 1,
    justifyContent: 'center',
    alignItems: 'center',
    padding: 20,
    backgroundColor: '#f3e5f5',
  },
  title: {
    fontSize: 22,
    fontWeight: 'bold',
    marginBottom: 20,
    color: '#4a148c',
  },
  instruction: {
    fontSize: 16,
    textAlign: 'center',
    marginBottom: 30,
    color: '#6a1b9a',
  },
  note: {
    fontSize: 12,
    color: '#ab47bc',
    marginTop: 20,
    textAlign: 'center',
  },
});

export default BenchmarkExample;
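
As the note above suggests, one dependency-free way to offload heavy work is to split it into chunks and yield back to the event loop between chunks, so renders and gestures stay responsive. The sketch below is a minimal illustration of that pattern; the chunk size and workload are arbitrary assumptions, not a prescription.

// A minimal chunked-processing sketch (assumed workload and chunk size)
const yieldToEventLoop = () => new Promise((resolve) => setTimeout(resolve, 0));

export async function sumInChunks(total, chunkSize = 5_000_000) {
  let sum = 0;
  for (let start = 0; start < total; start += chunkSize) {
    const end = Math.min(start + chunkSize, total);
    for (let i = start; i < end; i++) {
      sum += i; // The actual unit of work
    }
    await yieldToEventLoop(); // Let renders, gestures, and timers run between chunks
  }
  return sum;
}

// Usage inside a component (hypothetical state setters):
// setIsCalculating(true);
// sumInChunks(500_000_000).then((sum) => {
//   setResult(sum);
//   setIsCalculating(false);
// });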

3.4 Common Performance Bottlenecks and Advanced Solutions

Practical Example: Optimizing FlatList with getItemLayout and FlashList (Conceptual)

Efficient list rendering is critical. getItemLayout significantly improves FlatList scroll performance for fixed-height items by letting it skip asynchronous layout measurement. For variable heights or very long lists, FlashList is usually the better choice.

1. FlatList with getItemLayout:

import React from 'react';
import { View, Text, FlatList, StyleSheet } from 'react-native';

const DATA = Array.from({ length: 1000 }).map((_, i) => ({
  id: String(i),
  title: `Item #${i + 1}`,
  description: `This is a description for item ${i + 1}.`,
}));

const ITEM_HEIGHT = 80; // Each row's content height in dp
const ITEM_MARGIN = 8;  // Vertical margin per side; must be counted in getItemLayout

const FlatListOptimization = () => {
  const renderItem = ({ item }) => (
    <View style={styles.item}>
      <Text style={styles.title}>{item.title}</Text>
      <Text style={styles.description}>{item.description}</Text>
    </View>
  );

  const getItemLayout = (data, index) => ({
    length: ITEM_HEIGHT + ITEM_MARGIN * 2, // Full row height including vertical margins
    offset: (ITEM_HEIGHT + ITEM_MARGIN * 2) * index,
    index,
  });

  return (
    <View style={styles.container}>
      <Text style={styles.header}>Optimized FlatList</Text>
      <FlatList
        data={DATA}
        renderItem={renderItem}
        keyExtractor={item => item.id}
        getItemLayout={getItemLayout} // Critical for performance with fixed-height items
        initialNumToRender={10} // Items rendered on the first pass
        maxToRenderPerBatch={5} // Items rendered per subsequent batch
        windowSize={21} // Viewport-multiples of items kept rendered around the visible area
      />
      <Text style={styles.note}>
        `getItemLayout` prevents `FlatList` from measuring items, improving scroll performance.
      </Text>
    </View>
  );
};

const styles = StyleSheet.create({
  container: {
    flex: 1,
    paddingTop: 50,
    backgroundColor: '#e0f2f7',
  },
  header: {
    fontSize: 22,
    fontWeight: 'bold',
    textAlign: 'center',
    marginBottom: 20,
    color: '#01579b',
  },
  item: {
    height: ITEM_HEIGHT, // Content height; getItemLayout adds the vertical margins
    backgroundColor: '#fff',
    padding: 15,
    marginVertical: ITEM_MARGIN,
    marginHorizontal: 16,
    borderRadius: 8,
    shadowColor: '#000',
    shadowOffset: { width: 0, height: 1 },
    shadowOpacity: 0.2,
    shadowRadius: 1.41,
    elevation: 2,
  },
  title: {
    fontSize: 16,
    fontWeight: 'bold',
  },
  description: {
    fontSize: 14,
    color: '#555',
    marginTop: 5,
  },
  note: {
    fontSize: 12,
    color: '#666',
    textAlign: 'center',
    marginTop: 10,
    marginBottom: 20,
  }
});

export default FlatListOptimization;

2. FlashList (Conceptual Code):

FlashList is a near drop-in replacement for FlatList that typically delivers better performance out of the box, especially for very long or variable-height lists, because it recycles item views instead of creating and destroying them. Supplying an accurate estimatedItemSize is key to its layout performance.

// npm install @shopify/flash-list
import React from 'react';
import { View, Text, StyleSheet } from 'react-native';
import { FlashList } from '@shopify/flash-list';

const DATA_FLASH = Array.from({ length: 5000 }).map((_, i) => ({
  id: String(i),
  title: `Flash Item #${i + 1}`,
  description: `This is a description for item ${i + 1} with variable height.`,
  // Simulate variable height: some items have longer descriptions
  longDescription: i % 7 === 0 ? `This is a much longer description for item ${i + 1} to demonstrate FlashList's efficiency with variable heights. It can span multiple lines.` : null,
}));

const FlashListOptimization = () => {
  const renderItem = ({ item }) => (
    <View style={styles.flashItem}>
      <Text style={styles.flashTitle}>{item.title}</Text>
      <Text style={styles.flashDescription}>
        {item.description}
        {item.longDescription && `\n${item.longDescription}`}
      </Text>
    </View>
  );

  return (
    <View style={styles.container}>
      <Text style={styles.header}>FlashList for Extreme Performance</Text>
      <FlashList
        data={DATA_FLASH}
        renderItem={renderItem}
        keyExtractor={item => item.id}
        estimatedItemSize={100} // Crucial: estimate average item height
        contentContainerStyle={styles.flashListContent}
      />
      <Text style={styles.note}>
        `FlashList` provides superior performance for very long or variable-height lists,
        especially when `estimatedItemSize` is accurate.
      </Text>
    </View>
  );
};

const styles = StyleSheet.create({
  container: {
    flex: 1,
    paddingTop: 50,
    backgroundColor: '#fff8e1',
  },
  header: {
    fontSize: 22,
    fontWeight: 'bold',
    textAlign: 'center',
    marginBottom: 20,
    color: '#ff6f00',
  },
  flashListContent: {
    paddingHorizontal: 16,
  },
  flashItem: {
    backgroundColor: '#fff',
    padding: 15,
    marginVertical: 8,
    borderRadius: 8,
    shadowColor: '#000',
    shadowOffset: { width: 0, height: 1 },
    shadowOpacity: 0.2,
    shadowRadius: 1.41,
    elevation: 2,
  },
  flashTitle: {
    fontSize: 16,
    fontWeight: 'bold',
  },
  flashDescription: {
    fontSize: 14,
    color: '#555',
    marginTop: 5,
  },
  note: {
    fontSize: 12,
    color: '#666',
    textAlign: 'center',
    marginTop: 10,
    marginBottom: 20,
  }
});

export default FlashListOptimization;
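
Whichever list component you choose, memoizing row components and the callbacks passed to them prevents a re-render of the list's parent from re-rendering every visible row. A minimal sketch of that pattern (component and data shapes are assumptions):

import React, { memo, useCallback } from 'react';
import { Text, Pressable, FlatList } from 'react-native';

// A row that only re-renders when its own props change
const Row = memo(({ item, onPress }) => (
  <Pressable onPress={() => onPress(item.id)}>
    <Text>{item.title}</Text>
  </Pressable>
));

const MemoizedList = ({ data }) => {
  // Stable callback identities keep the memoized rows from re-rendering
  const handlePress = useCallback((id) => console.log('pressed', id), []);
  const renderItem = useCallback(
    ({ item }) => <Row item={item} onPress={handlePress} />,
    [handlePress]
  );

  return <FlatList data={data} renderItem={renderItem} keyExtractor={(item) => item.id} />;
};

export default MemoizedList;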

4. Continuous Integration/Continuous Deployment (CI/CD) for Mobile

CI/CD automates building, testing, and deploying your app, accelerating the release cycle.

4.1 Importance of CI/CD for React Native

Practical Example: A Conceptual CI/CD README.md Checklist

While CI/CD itself is not React Native code, a well-structured README.md or CONTRIBUTING.md can outline the CI/CD process for developers.

# MyAwesomeReactNativeApp CI/CD Workflow

This document outlines the Continuous Integration and Continuous Deployment (CI/CD) pipeline for our React Native application. Our goal is to ensure rapid, reliable, and high-quality releases across iOS and Android.

## Workflow Overview

Our CI/CD pipeline is triggered on every `git push` to `main`, `develop`, and on every Pull Request.

1.  **Code Quality Checks (Linting, Formatting, Type Checks):**
    *   Runs ESLint, Prettier, and TypeScript checks.
    *   **Tool:** GitHub Actions (`.github/workflows/quality.yml`)
    *   **Trigger:** `push` on any branch, `pull_request`
    *   **Fails if:** Any linting error, formatting issue, or TypeScript error.

2.  **Automated Testing:**
    *   **Unit Tests:** Jest tests for isolated functions and components.
    *   **Component Tests:** React Native Testing Library tests for component rendering and interactions.
    *   **Tool:** GitHub Actions (`.github/workflows/test.yml`)
    *   **Trigger:** `push` on any branch, `pull_request`
    *   **Fails if:** Any test fails or code coverage thresholds are not met.

3.  **Native Build Generation:**
    *   **Android:** Generates AAB (Android App Bundle) for release, APK for debugging.
    *   **iOS:** Generates IPA (iOS App Archive) for release, local build for debugging.
    *   **Tools:**
        *   Expo Application Services (EAS) for Expo projects (`eas.json`, `.github/workflows/eas-build.yml`)
        *   GitHub Actions with Fastlane for bare workflow (`.github/workflows/native-build.yml`)
    *   **Trigger:** `push` to `main` or `develop`. Manual trigger for specific builds.
    *   **Fails if:** Native compilation errors, signing issues.

4.  **Internal Distribution (CD - Alpha/Beta Testing):**
    *   **Android:** Distributes APK/AAB to Firebase App Distribution or Play Console (Internal Test Track).
    *   **iOS:** Distributes IPA to TestFlight.
    *   **Tools:** EAS Submit, Fastlane (via GitHub Actions).
    *   **Trigger:** Successful `main` branch build.
    *   **Notifies:** Slack channel `#app-releases`.

5.  **Store Submission (CD - Production):**
    *   **Android:** Automates submission to Google Play Store (Production Track).
    *   **iOS:** Automates submission to Apple App Store.
    *   **Tools:** EAS Submit, Fastlane (via GitHub Actions).
    *   **Trigger:** Manual approval after successful internal testing, or a designated release branch merge.

## Best Practices & Notes

*   **Version Bumping:** Automated with Fastlane/EAS build hooks.
*   **Environment Variables:** Managed via CI/CD platform secrets (e.g., GitHub Secrets, EAS Secrets).
*   **Code Signing:** Handled by EAS for Expo or Fastlane Match for bare workflow.
*   **Performance Monitoring:** `Reassure` integrated into CI to prevent performance regressions (see the sketch after this README).
*   **Security Scanning:** Static analysis tools run on all new code.

---

### Local CI/CD Pre-checks

Before pushing, please run:

```bash
npm run lint       # ESLint & Prettier
npm run typecheck  # TypeScript compiler check
npm test           # Jest unit & component tests
```

This helps catch issues before they reach the pipeline.


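The README above references Reassure for performance regression testing. A minimal sketch of such a test, assuming Reassure's measureRenders API (verify against the library's current docs):

// __tests__/SettingsScreen.perf-test.jsx (hypothetical file)
import React from 'react';
import { measureRenders } from 'reassure';
import SettingsScreen from '../src/screens/SettingsScreen';

test('SettingsScreen stays within its render budget', async () => {
  // Renders the component repeatedly and records render counts and durations;
  // the reassure CLI then compares the results against a committed baseline.
  await measureRenders(<SettingsScreen />);
});
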
4.2 Key Stages of a React Native CI/CD Pipeline

Practical Example: Simplified GitHub Actions Workflow for Linting & Testing

This workflow runs when code is pushed or a pull request is opened, performing basic quality checks.

.github/workflows/ci.yml:
name: React Native CI

on:
  push:
    branches:
      - main
      - develop
  pull_request:
    branches:
      - main
      - develop

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 18.x # Use a specific Node.js version
          cache: 'npm'       # Cache npm dependencies

      - name: Install dependencies
        run: npm ci # Use npm ci for clean installs in CI

      - name: Run ESLint
        run: npm run lint # Assuming you have a 'lint' script in package.json

      - name: Run TypeScript check
        run: npm run typecheck # Assuming 'typecheck' script in package.json (tsc --noEmit)

      - name: Run Jest tests
        run: npm test -- --coverage # Run tests with coverage reporting
        env:
          CI: true # Set CI environment variable for Jest

      # Optional: Upload test reports or coverage if needed
      # - name: Upload Jest Coverage Report
      #   uses: actions/upload-artifact@v4
      #   if: always()
      #   with:
      #     name: jest-coverage-report
      #     path: coverage/lcov-report

package.json scripts (for the above workflow to work):

{
  "name": "my-react-native-app",
  "version": "0.0.1",
  "scripts": {
    "start": "react-native start",
    "android": "react-native run-android",
    "ios": "react-native run-ios",
    "lint": "eslint . --ext .js,.jsx,.ts,.tsx",
    "typecheck": "tsc --noEmit",
    "test": "jest"
  },
  "devDependencies": {
    "@babel/preset-env": "^7.20.0",
    "@babel/runtime": "^7.20.0",
    "@react-native/babel-preset": "0.74.83",
    "@react-native/eslint-config": "0.74.83",
    "@react-native/metro-config": "0.74.83",
    "@react-native/typescript-config": "0.74.83",
    "@types/react": "^18.2.6",
    "@types/react-test-renderer": "^18.0.0",
    "babel-jest": "^29.6.3",
    "eslint": "^8.19.0",
    "jest": "^29.6.3",
    "prettier": "2.8.8",
    "react-test-renderer": "18.2.0",
    "typescript": "5.1.6"
  }
}

4.3 CI/CD Tools for React Native

Practical Example: EAS Build & Update for Expo Project

EAS greatly simplifies CI/CD for Expo projects.

eas.json (Configuration file for EAS CLI):

{
  "build": {
    "development": {
      "developmentClient": true,
      "distribution": "internal",
      "ios": {
        "simulator": true // Build for simulator for faster dev builds
      }
    },
    "preview": {
      "distribution": "internal",
      "android": {
        "buildType": "apk" // For easy internal sharing
      },
      "ios": {
        "buildConfiguration": "Release" // Release build for testing
      }
    },
    "production": {
      "distribution": "store",
      "autoIncrement": true // Automatically increments build number
    }
  },
  "submit": {
    "production": {
      "ios": {
        "appleId": "your-apple-id@example.com",
        "appSpecificPassword": "your-app-specific-password",
        "ascAppId": "1234567890", // Apple App Store Connect App ID
        "sku": "unique-sku",
        "teamId": "YOUR_TEAM_ID"
      },
      "android": {
        "serviceAccountKeyPath": "./google-service-account.json",
        "track": "production"
      }
    }
  },
  "cli": {
    "version": ">=7.0.0"
  }
}

GitHub Actions workflow to trigger EAS builds (.github/workflows/eas-build.yml):

name: EAS Build and Publish

on:
  push:
    branches:
      - main
  workflow_dispatch: # Allows manual trigger

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 18.x
          cache: 'npm'

      - name: Setup Expo CLI
        uses: expo/expo-github-action@v8
        with:
          eas-version: latest
          token: ${{ secrets.EXPO_TOKEN }} # Required for EAS commands

      - name: Install dependencies
        run: npm ci

      - name: Build Android production app
        run: eas build --platform android --profile production --non-interactive --wait
        # --non-interactive is for CI environments
        # --wait waits for the build to finish (or remove to run in background)

      - name: Build iOS production app
        run: eas build --platform ios --profile production --non-interactive --wait

      - name: Publish update (OTA)
        run: eas update --branch main --message "Auto-publish on main branch update"
        env:
          # EXPO_TOKEN is needed for eas update as well
          EXPO_TOKEN: ${{ secrets.EXPO_TOKEN }}

Important: Ensure you set EXPO_TOKEN as a GitHub Secret. Generate an access token from your Expo account (expo.dev, under Access Tokens). Also, configure Apple/Google credentials securely with eas credentials rather than committing secrets to eas.json.

4.4 Advanced CI/CD Practices

Practical Example: Automated Version Bumping with fastlane (for Bare Workflow)

Fastlane automates complex mobile deployment tasks, including versioning.

fastlane/Fastfile (Snippet for iOS versioning):

platform :ios do
  desc "Automatically increment build number and upload to TestFlight"
  lane :beta do
    # 1. Increment build number
    increment_build_number(
      build_number: Time.now.strftime("%Y%m%d%H%M%S") # Use timestamp for unique build
    )
    # Alternatively, base it on the latest TestFlight build number:
    # increment_build_number(build_number: latest_testflight_build_number + 1)

    # 2. Update React Native JavaScript bundle version if needed
    # This might be specific to your setup, often `package.json` is the source.
    # sh("node ./scripts/update-js-bundle-version.js")

    # 3. Build the app
    build_app(workspace: "ios/YourApp.xcworkspace", scheme: "YourApp", configuration: "Release")

    # 4. Upload to TestFlight, authenticating with an App Store Connect API key
    api_key = app_store_connect_api_key(
      key_id: ENV["FASTLANE_APPLE_KEY_ID"],
      issuer_id: ENV["FASTLANE_APPLE_ISSUER_ID"],
      key_filepath: "./fastlane/AuthKey_#{ENV['FASTLANE_APPLE_KEY_ID']}.p8"
    )
    upload_to_testflight(api_key: api_key)

    # 5. Notify Slack
    slack(
      message: "iOS Beta build #{lane_context[SharedValues::BUILD_NUMBER]} uploaded to TestFlight!",
      slack_url: ENV["SLACK_WEBHOOK_URL"]
    )
  end
end

GitHub Actions workflow to trigger Fastlane (.github/workflows/ios-deploy.yml):

name: iOS Deploy with Fastlane

on:
  push:
    branches:
      - main
  workflow_dispatch:

jobs:
  deploy-ios:
    runs-on: macos-latest # iOS builds require macOS runners

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 18.x
          cache: 'npm'

      - name: Install JS dependencies
        run: npm ci

      - name: Setup Ruby
        uses: ruby/setup-ruby@v1
        with:
          ruby-version: '3.1' # Specify your Ruby version
          bundler-cache: true # Installs bundler and gems

      - name: Install CocoaPods
        run: |
          cd ios
          bundle exec pod install          

      - name: Setup Fastlane (download required keys/profiles if not in repo)
        env:
          # These should be GitHub Secrets
          FASTLANE_APPLE_ISSUER_ID: ${{ secrets.FASTLANE_APPLE_ISSUER_ID }}
          FASTLANE_APPLE_KEY_ID: ${{ secrets.FASTLANE_APPLE_KEY_ID }}
          FASTLANE_APPLE_KEY_CONTENT: ${{ secrets.FASTLANE_APPLE_KEY_CONTENT }} # Content of .p8 file
          FASTLANE_TEAM_ID: ${{ secrets.FASTLANE_TEAM_ID }}
          # ... other env variables for Match, etc.
        run: |
          # Example: Create .p8 key file from secret content
          echo "${FASTLANE_APPLE_KEY_CONTENT}" > ./fastlane/AuthKey_${FASTLANE_APPLE_KEY_ID}.p8
          # If using fastlane match to manage certificates/profiles:
          # bundle exec fastlane match appstore --readonly # or sync
          # This step often requires extensive environment setup.          

      - name: Run Fastlane Beta
        run: bundle exec fastlane ios beta
        env:
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }} # For Slack notifications
          # FASTLANE_PASSWORD: ${{ secrets.FASTLANE_PASSWORD }} # If using username/password auth

Note: Fastlane setup, especially for iOS code signing (certificates, provisioning profiles), can be complex. fastlane match helps a lot by syncing credentials. You’ll need to set up App Store Connect API Keys (.p8 file) as secrets.

5. Integrating Machine Learning and AI with React Native

Bringing AI capabilities to mobile applications can create powerful experiences.

5.1 On-Device ML (Edge AI)

Running AI models directly on the device for offline capability, low latency, and privacy.

Practical Example: Image Classification with @tensorflow/tfjs-react-native and expo-image-picker

This example demonstrates how to load a pre-trained MobileNet model and classify an image selected from the device gallery.

// src/components/TFJSImageClassifier.js
import React, { useState, useEffect, useRef } from 'react';
import { View, Text, Button, Image, StyleSheet, ActivityIndicator, Alert, ScrollView } from 'react-native';
import * as tf from '@tensorflow/tfjs';
import * as mobilenet from '@tensorflow-models/mobilenet'; // Pre-trained model
import * as ImagePicker from 'expo-image-picker';
import * as FileSystem from 'expo-file-system';
import { decode as jpegDecode } from 'jpeg-js';
import { PNG } from 'pngjs'; // pngjs exports a PNG class, not a decode function
import { Buffer } from 'buffer'; // 'buffer' npm polyfill, required by PNG.sync.read
import '@tensorflow/tfjs-react-native'; // Import TF.js React Native backend

const TFJSImageClassifier = () => {
  const [isTfReady, setIsTfReady] = useState(false);
  const [model, setModel] = useState(null);
  const [imageUri, setImageUri] = useState(null);
  const [predictions, setPredictions] = useState([]);
  const [isLoading, setIsLoading] = useState(false);

  useEffect(() => {
    async function setupTensorflow() {
      try {
        await tf.ready(); // Initialize TensorFlow.js
        console.log('TensorFlow.js is ready.');
        const mobilenetModel = await mobilenet.load({
          version: 1, // '1' for MobileNetV1
          alpha: 0.25 // Smaller, faster model
        }); // Load pre-trained MobileNet model
        setModel(mobilenetModel);
        setIsTfReady(true);
        console.log('MobileNet model loaded.');
      } catch (error) {
        console.error('Failed to setup TensorFlow or load model:', error);
        Alert.alert('Error', `Failed to load ML components: ${error.message}`);
      }
    }
    setupTensorflow();
  }, []);

  const pickImage = async () => {
    const { status } = await ImagePicker.requestMediaLibraryPermissionsAsync();
    if (status !== 'granted') {
      Alert.alert('Permission required', 'We need camera roll permissions to select images.');
      return;
    }

    let result = await ImagePicker.launchImageLibraryAsync({
      mediaTypes: ImagePicker.MediaTypeOptions.Images,
      allowsEditing: true,
      aspect: [4, 3],
      quality: 0.8,
    });

    if (!result.canceled) {
      setImageUri(result.assets[0].uri);
      setPredictions([]);
      classifyImage(result.assets[0].uri); // Classify right after picking
    }
  };

  const imageToTensor = async (uri) => {
    // Read the image file as base64
    const imgB64 = await FileSystem.readAsStringAsync(uri, {
      encoding: FileSystem.EncodingType.Base64,
    });

    // Determine image type (can extend to check more types)
    const isJPEG = uri.toLowerCase().endsWith('.jpeg') || uri.toLowerCase().endsWith('.jpg');
    const isPNG = uri.toLowerCase().endsWith('.png');

    let rawImageData;
    if (isJPEG) {
      const imgBuffer = tf.util.encodeString(imgB64, 'base64').buffer;
      rawImageData = jpegDecode(imgBuffer, { useTArray: true }); // {width, height, data: Uint8Array}
    } else if (isPNG) {
      rawImageData = PNG.sync.read(Buffer.from(imgB64, 'base64')); // {width, height, data}
    } else {
      throw new Error('Unsupported image format. Only JPEG and PNG are supported.');
    }

    // Create a tensor from the image's pixel data
    const imageTensor = tf.browser.fromPixels(rawImageData, 3); // 3 channels for RGB

    // Resize and normalize the image for MobileNet (224x224, values between 0 and 1)
    const resized = tf.image.resizeBilinear(imageTensor, [224, 224]).toFloat();
    const normalized = resized.div(255); // Normalize to [0, 1] range

    // Add a batch dimension [1, 224, 224, 3]
    const batched = normalized.expandDims(0);

    // Dispose of intermediate tensors to free up memory
    imageTensor.dispose();
    resized.dispose();
    normalized.dispose();

    return batched;
  };

  const classifyImage = async (uriToClassify) => {
    if (!uriToClassify || !model || !isTfReady) {
      Alert.alert('Error', 'TensorFlow or model not ready.');
      return;
    }

    setIsLoading(true);
    setPredictions([]); // Clear previous predictions
    let imageTensor;
    try {
      imageTensor = await imageToTensor(uriToClassify);
      const newPredictions = await model.classify(imageTensor);
      setPredictions(newPredictions);
    } catch (error) {
      console.error('Error classifying image:', error);
      Alert.alert('Classification Error', error.message || 'Failed to classify image.');
    } finally {
      setIsLoading(false);
      if (imageTensor) imageTensor.dispose(); // Ensure tensor is disposed
    }
  };

  return (
    <View style={styles.container}>
      <Text style={styles.title}>TF.js Image Classifier</Text>

      {!isTfReady || !model ? (
        <View>
          <ActivityIndicator size="large" color="#0000ff" />
          <Text style={styles.loadingText}>Loading ML model...</Text>
        </View>
      ) : (
        <>
          <Button title="Select Image" onPress={pickImage} />
          {imageUri && (
            <Image
              source={{ uri: imageUri }}
              style={styles.image}
              resizeMode="contain"
            />
          )}

          {isLoading && (
            <View style={styles.loadingContainer}>
              <ActivityIndicator size="small" color="#0000ff" />
              <Text>Classifying...</Text>
            </View>
          )}

          {predictions.length > 0 && (
            <ScrollView style={styles.predictionList}>
              <Text style={styles.predictionHeader}>Predictions:</Text>
              {predictions.map((p, i) => (
                <Text key={i} style={styles.predictionItem}>
                  {p.className}: {(p.probability * 100).toFixed(2)}%
                </Text>
              ))}
            </ScrollView>
          )}

          <Text style={styles.noteText}>
            (Uses MobileNetV1 on device for image classification)
          </Text>
        </>
      )}
    </View>
  );
};

const styles = StyleSheet.create({
  container: {
    flex: 1,
    justifyContent: 'center',
    alignItems: 'center',
    padding: 20,
    backgroundColor: '#fffde7',
  },
  title: {
    fontSize: 24,
    fontWeight: 'bold',
    marginBottom: 20,
    color: '#ff6f00',
  },
  loadingText: {
    marginTop: 10,
    fontSize: 16,
    color: '#555',
  },
  image: {
    width: 250,
    height: 250,
    marginVertical: 20,
    borderColor: '#ccc',
    borderWidth: 1,
    borderRadius: 8,
  },
  loadingContainer: {
    flexDirection: 'row',
    alignItems: 'center',
    marginTop: 10,
  },
  predictionList: {
    marginTop: 20,
    maxHeight: 150,
    width: '100%',
    paddingHorizontal: 10,
    borderWidth: 1,
    borderColor: '#eee',
    borderRadius: 8,
    backgroundColor: '#fff',
  },
  predictionHeader: {
    fontSize: 18,
    fontWeight: 'bold',
    marginBottom: 10,
    color: '#424242',
  },
  predictionItem: {
    fontSize: 16,
    marginBottom: 5,
    color: '#333',
  },
  noteText: {
    fontSize: 12,
    color: '#777',
    marginTop: 20,
    textAlign: 'center',
  },
});

export default TFJSImageClassifier;

To run this example:

  1. npm install @tensorflow/tfjs @tensorflow/tfjs-react-native @tensorflow-models/mobilenet jpeg-js pngjs buffer expo-image-picker expo-file-system
  2. Import and use TFJSImageClassifier in your App.js.
  3. Install @tensorflow/tfjs-react-native's peer dependencies (e.g., expo-gl and @react-native-async-storage/async-storage) and follow its platform setup guide; if you later bundle model weights with the app, Metro also needs configuring, as sketched below.
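
If you bundle model weights with the app (for example via tfjs-react-native's bundleResourceIO) instead of fetching MobileNet over the network, Metro must treat the weight files as assets. A minimal metro.config.js sketch, assuming the standard @react-native/metro-config setup:

// metro.config.js
const { getDefaultConfig } = require('@react-native/metro-config');

const config = getDefaultConfig(__dirname);
// Let Metro bundle TF.js model weight files (.bin) as assets
config.resolver.assetExts.push('bin');

module.exports = config;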

5.2 Cloud-Based ML/AI

For more complex models or dynamic AI capabilities, cloud-based services are preferred.

Practical Example: Integrating with Google Cloud Vision API (Text Detection)

This example shows how to send an image (as base64) to the Google Cloud Vision API for text detection. The service account key must never ship in the app, so the flow goes through a small backend: a conceptual server below, followed by the React Native client.

Conceptual Server-side (/api/detectText endpoint - e.g., Node.js with Express):

// This is server-side code, not React Native.
// app.js (Express server)
const express = require('express');
const { ImageAnnotatorClient } = require('@google-cloud/vision');
const cors = require('cors');

const app = express();
app.use(express.json({ limit: '50mb' })); // Increase limit for image data
app.use(cors()); // Enable CORS for local development

// Path to your Google Cloud service account key file
// This should NOT be in your client-side app!
const client = new ImageAnnotatorClient({
  keyFilename: '/path/to/your/google-cloud-key.json' // Replace with your key path
});

app.post('/api/detectText', async (req, res) => {
  try {
    const { imageBase64 } = req.body;
    if (!imageBase64) {
      return res.status(400).json({ error: 'No imageBase64 provided.' });
    }

    const [result] = await client.textDetection({
      image: {
        content: imageBase64,
      },
    });

    const detections = result.textAnnotations;
    if (detections && detections.length > 0) {
      res.json({ fullText: detections[0].description, annotations: detections });
    } else {
      res.json({ fullText: 'No text detected.', annotations: [] });
    }
  } catch (error) {
    console.error('Error detecting text:', error);
    res.status(500).json({ error: 'Internal server error.' });
  }
});

const PORT = process.env.PORT || 3000;
app.listen(PORT, () => {
  console.log(`Server running on port ${PORT}`);
});

React Native Client-side (src/components/CloudVisionTextDetector.js):

import React, { useState } from 'react';
import { View, Text, Button, Image, StyleSheet, ActivityIndicator, Alert, ScrollView } from 'react-native';
import * as ImagePicker from 'expo-image-picker';

const CLOUD_VISION_API_ENDPOINT = 'http://localhost:3000/api/detectText'; // Your server endpoint

const CloudVisionTextDetector = () => {
  const [imageUri, setImageUri] = useState(null);
  const [detectedText, setDetectedText] = useState('');
  const [isLoading, setIsLoading] = useState(false);

  const pickImage = async () => {
    const { status } = await ImagePicker.requestMediaLibraryPermissionsAsync();
    if (status !== 'granted') {
      Alert.alert('Permission required', 'We need camera roll permissions to select images.');
      return;
    }

    let result = await ImagePicker.launchImageLibraryAsync({
      mediaTypes: ImagePicker.MediaTypeOptions.Images,
      allowsEditing: true,
      aspect: [4, 3],
      quality: 0.7, // Lower quality for faster upload
      base64: true, // Crucial: get image as base64
    });

    if (!result.canceled) {
      setImageUri(result.assets[0].uri);
      detectTextFromImage(result.assets[0].base64); // Send base64 directly
    }
  };

  const detectTextFromImage = async (base64Image) => {
    setIsLoading(true);
    setDetectedText('');
    try {
      const response = await fetch(CLOUD_VISION_API_ENDPOINT, {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
        },
        body: JSON.stringify({ imageBase64: base64Image }),
      });

      const data = await response.json();

      if (response.ok) {
        setDetectedText(data.fullText || 'No text detected.');
        console.log('Full Text:', data.fullText);
        console.log('Annotations:', data.annotations);
      } else {
        Alert.alert('API Error', data.error || 'Unknown error from Cloud Vision API.');
      }
    } catch (error) {
      console.error('Network or client error during text detection:', error);
      Alert.alert('Error', `Could not connect to server or process image: ${error.message}`);
    } finally {
      setIsLoading(false);
    }
  };

  return (
    <View style={styles.container}>
      <Text style={styles.title}>Cloud Vision Text Detector</Text>

      <Button title="Select Image for Text Detection" onPress={pickImage} />

      {imageUri && (
        <Image
          source={{ uri: imageUri }}
          style={styles.image}
          resizeMode="contain"
        />
      )}

      {isLoading && (
        <View style={styles.loadingContainer}>
          <ActivityIndicator size="small" color="#0000ff" />
          <Text>Detecting text...</Text>
        </View>
      )}

      {detectedText !== '' && (
        <ScrollView style={styles.textResultContainer}>
          <Text style={styles.textResultHeader}>Detected Text:</Text>
          <Text style={styles.detectedText}>{detectedText}</Text>
        </ScrollView>
      )}

      <Text style={styles.noteText}>
        (This requires a running backend server that communicates with Google Cloud Vision API.)
      </Text>
    </View>
  );
};

const styles = StyleSheet.create({
  container: {
    flex: 1,
    justifyContent: 'center',
    alignItems: 'center',
    padding: 20,
    backgroundColor: '#e0f2f7',
  },
  title: {
    fontSize: 24,
    fontWeight: 'bold',
    marginBottom: 20,
    color: '#01579b',
  },
  image: {
    width: 280,
    height: 210, // Aspect ratio 4:3
    marginVertical: 20,
    borderColor: '#ccc',
    borderWidth: 1,
    borderRadius: 8,
  },
  loadingContainer: {
    flexDirection: 'row',
    alignItems: 'center',
    marginTop: 10,
    marginBottom: 20,
  },
  textResultContainer: {
    marginTop: 20,
    maxHeight: 200,
    width: '100%',
    padding: 10,
    borderWidth: 1,
    borderColor: '#ddd',
    borderRadius: 8,
    backgroundColor: '#fff',
  },
  textResultHeader: {
    fontSize: 18,
    fontWeight: 'bold',
    marginBottom: 10,
    color: '#333',
  },
  detectedText: {
    fontSize: 16,
    color: '#555',
  },
  noteText: {
    fontSize: 12,
    color: '#777',
    marginTop: 30,
    textAlign: 'center',
  },
});

export default CloudVisionTextDetector;

To run this example:

  1. Set up the Node.js server with @google-cloud/vision and your service account key.
  2. Install expo-image-picker in your React Native project.
  3. Import and use CloudVisionTextDetector in your App.js.
  4. Ensure your React Native app can reach the server: on a physical device, replace localhost with your computer's LAN IP; on the Android emulator, use 10.0.2.2 or run adb reverse tcp:3000 tcp:3000 (see the sketch below).
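
A small sketch of deriving the endpoint per platform during development (host and port are assumptions matching the server above):

// src/config/api.js (hypothetical helper)
import { Platform } from 'react-native';

// 10.0.2.2 is the Android emulator's alias for the host machine's localhost.
// On a physical device, substitute your computer's LAN IP address instead.
const DEV_HOST = Platform.OS === 'android' ? '10.0.2.2' : 'localhost';

export const CLOUD_VISION_API_ENDPOINT = `http://${DEV_HOST}:3000/api/detectText`;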

5.3 Speech Recognition and Synthesis

Practical Example: Speech-to-Text with @react-native-voice/voice

@react-native-voice/voice provides a unified JavaScript interface over the platforms' native speech recognition APIs.

// src/components/SpeechToText.js
import React, { useState, useEffect } from 'react';
import { View, Text, Button, StyleSheet, PermissionsAndroid, Platform, Alert, ActivityIndicator } from 'react-native';
import Voice from '@react-native-voice/voice';

const SpeechToText = () => {
  const [recognizedText, setRecognizedText] = useState('');
  const [isRecording, setIsRecording] = useState(false);
  const [error, setError] = useState('');

  useEffect(() => {
    // Event handlers for Voice module
    Voice.onSpeechStart = onSpeechStart;
    Voice.onSpeechEnd = onSpeechEnd;
    Voice.onSpeechResults = onSpeechResults;
    Voice.onSpeechError = onSpeechError;
    Voice.onSpeechPartialResults = onSpeechPartialResults; // Live transcription

    return () => {
      Voice.destroy().then(Voice.removeAllListeners);
    };
  }, []);

  const requestMicrophonePermission = async () => {
    if (Platform.OS === 'android') {
      try {
        const granted = await PermissionsAndroid.request(
          PermissionsAndroid.PERMISSIONS.RECORD_AUDIO,
          {
            title: 'Microphone Permission',
            message: 'This app needs access to your microphone to enable speech recognition.',
            buttonNeutral: 'Ask Me Later',
            buttonNegative: 'Cancel',
            buttonPositive: 'OK',
          }
        );
        if (granted === PermissionsAndroid.RESULTS.GRANTED) {
          console.log('Microphone permission granted');
          return true;
        } else {
          console.log('Microphone permission denied');
          Alert.alert('Permission Denied', 'Microphone access is required for speech recognition.');
          return false;
        }
      } catch (err) {
        console.warn(err);
        return false;
      }
    }
    return true; // iOS prompts at first use, driven by the Info.plist usage descriptions
  };

  const onSpeechStart = (e) => {
    console.log('onSpeechStart: ', e);
    setIsRecording(true);
    setError('');
    setRecognizedText('');
  };

  const onSpeechEnd = (e) => {
    console.log('onSpeechEnd: ', e);
    setIsRecording(false);
  };

  const onSpeechResults = (e) => {
    console.log('onSpeechResults: ', e);
    if (e.value && e.value.length > 0) {
      setRecognizedText(e.value[0]);
    }
  };

  const onSpeechPartialResults = (e) => {
    // console.log('onSpeechPartialResults: ', e);
    if (e.value && e.value.length > 0) {
      setRecognizedText(e.value[0]); // Update with partial results
    }
  };

  const onSpeechError = (e) => {
    console.log('onSpeechError: ', e);
    setError(JSON.stringify(e.error));
    setIsRecording(false);
  };

  const startRecognizing = async () => {
    const hasPermission = await requestMicrophonePermission();
    if (!hasPermission) return;

    try {
      await Voice.start('en-US'); // Specify language (e.g., 'en-US', 'es-ES')
    } catch (e) {
      console.error(e);
      setError(`Failed to start recognition: ${e.message}`);
    }
  };

  const stopRecognizing = async () => {
    try {
      await Voice.stop();
    } catch (e) {
      console.error(e);
      setError(`Failed to stop recognition: ${e.message}`);
    }
  };

  const cancelRecognizing = async () => {
    try {
      await Voice.cancel();
    } catch (e) {
      console.error(e);
      setError(`Failed to cancel recognition: ${e.message}`);
    }
  };

  const clearResults = () => {
    setRecognizedText('');
    setError('');
  };

  return (
    <View style={styles.container}>
      <Text style={styles.title}>Speech-to-Text</Text>

      <Text style={styles.instructions}>
        Press "Start Recording" and speak.
      </Text>

      <View style={styles.buttonRow}>
        {!isRecording ? (
          <Button title="Start Recording" onPress={startRecognizing} />
        ) : (
          <Button title="Stop Recording" onPress={stopRecognizing} color="red" />
        )}
        <Button title="Cancel" onPress={cancelRecognizing} color="orange" disabled={!isRecording} />
        <Button title="Clear" onPress={clearResults} disabled={isRecording || recognizedText === ''} />
      </View>

      {isRecording && <ActivityIndicator size="large" color="#0000ff" style={styles.spinner} />}

      {error ? (
        <Text style={styles.errorText}>Error: {error}</Text>
      ) : (
        <Text style={styles.recognizedText}>
          {recognizedText === '' ? 'Speak something...' : recognizedText}
        </Text>
      )}

      <Text style={styles.noteText}>
        (Requires microphone permission. Based on native speech recognition APIs.)
      </Text>
    </View>
  );
};

const styles = StyleSheet.create({
  container: {
    flex: 1,
    justifyContent: 'center',
    alignItems: 'center',
    padding: 20,
    backgroundColor: '#fff3e0',
  },
  title: {
    fontSize: 24,
    fontWeight: 'bold',
    marginBottom: 20,
    color: '#ff6f00',
  },
  instructions: {
    fontSize: 16,
    textAlign: 'center',
    marginBottom: 30,
    color: '#bf360c',
  },
  buttonRow: {
    flexDirection: 'row',
    justifyContent: 'space-around',
    width: '90%',
    marginBottom: 20,
  },
  spinner: {
    marginTop: 20,
  },
  recognizedText: {
    fontSize: 20,
    fontWeight: '500',
    marginTop: 30,
    textAlign: 'center',
    color: '#4e342e',
    minHeight: 80, // Ensure space for text
    paddingHorizontal: 10,
  },
  errorText: {
    fontSize: 16,
    color: 'red',
    marginTop: 30,
    textAlign: 'center',
  },
  noteText: {
    fontSize: 12,
    color: '#777',
    marginTop: 40,
    textAlign: 'center',
  },
});

export default SpeechToText;

To run this example:

  1. npm install @react-native-voice/voice
  2. iOS:
    • Add NSSpeechRecognitionUsageDescription and NSMicrophoneUsageDescription to your Info.plist (explaining why your app needs speech recognition and microphone access).
  3. Android:
    • Add <uses-permission android:name="android.permission.RECORD_AUDIO" /> to AndroidManifest.xml.
    • Request runtime permission for RECORD_AUDIO (handled in the example).
  4. Import and use SpeechToText in your App.js.
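
The synthesis half of this section's title is the reverse direction: text-to-speech. A minimal sketch using expo-speech as one assumed option (npm install expo-speech):

// src/components/TextToSpeech.js
import React from 'react';
import { View, Button } from 'react-native';
import * as Speech from 'expo-speech';

const TextToSpeech = () => (
  <View>
    <Button
      title="Speak"
      onPress={() =>
        Speech.speak('Hello from React Native!', {
          language: 'en-US',
          pitch: 1.0, // Voice pitch
          rate: 1.0,  // Playback speed
        })
      }
    />
    <Button title="Stop" onPress={() => Speech.stop()} />
  </View>
);

export default TextToSpeech;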

5.4 Ethical Considerations and Best Practices for AI

Practical Example: A PrivacyPolicy.md snippet for AI Features

Transparency and user consent are paramount. A clear privacy policy helps.

# Privacy Policy - My AI Assistant App

**Last Updated:** August 24, 2025

Your privacy is important to us. This Privacy Policy explains how [Your Company Name] ("we," "us," or "our") collects, uses, and shares information when you use our My AI Assistant mobile application ("the App") and its integrated artificial intelligence features.

## 1. Information We Collect

### Information You Provide Directly:
*   **Speech Input:** When you use our voice command feature, we collect your spoken input. This audio is processed to convert speech to text (Speech-to-Text) and to understand your commands (Natural Language Processing).
*   **Text Input:** Any text you type into the App for AI interactions (e.g., questions, search queries).
*   **Image/Video Input:** If you use image analysis features (e.g., object recognition, text detection), we collect images/videos you upload or capture.
*   **User Preferences:** Settings related to AI features, such as preferred voice, language, or notification preferences.

### Automatically Collected Information (for AI Improvement & Functionality):
*   **Usage Data:** Information about how you interact with AI features, such as frequency of use, types of commands, and feature engagement. This helps us improve the AI's accuracy and performance.
*   **Device Information:** Device model, operating system, unique device identifiers (for on-device ML model compatibility and performance monitoring).
*   **Anonymous Telemetry:** Non-identifiable data related to ML model inference times, error rates, and resource consumption (e.g., battery, CPU) for performance optimization.

## 2. How We Use Your Information (and Specifically for AI)

We use the collected information for the following purposes:
*   **To Provide AI Services:** To process your requests, respond to your queries, perform image analysis, and execute voice commands.
*   **To Improve AI Models:** We use anonymized and aggregated data to train and refine our machine learning models, enhancing their accuracy, understanding, and performance. This includes:
    *   **Transcript Review:** Human reviewers may analyze anonymized samples of voice or text input to correct errors in transcription or interpretation.
    *   **Feature Enhancement:** Analyzing usage patterns to develop new AI features.
*   **Personalization:** To tailor AI responses and features to your individual preferences where applicable (e.g., remembering context for ongoing conversations).
*   **Analytics and Performance:** To monitor the performance of our AI features, identify bugs, and optimize resource usage.

## 3. How We Share Your Information (AI Specific)

*   **Cloud AI Providers:** For advanced AI features (e.g., complex natural language understanding, high-accuracy speech-to-text, large-scale image recognition), we may send anonymized or de-identified portions of your input to third-party cloud AI providers (e.g., Google Cloud AI, AWS AI Services) for processing. These providers are contractually bound to protect your data and only use it for the purposes of providing their services to us.
*   **Anonymized/Aggregated Data:** We may share aggregated or de-identified information that cannot reasonably be used to identify you with partners for research, analytics, or marketing.

## 4. Your Choices and Controls

*   **Microphone/Camera Access:** You can revoke microphone or camera access at any time through your device settings. Be aware that this will disable AI features requiring these permissions.
*   **Opt-out of AI Data Collection (Limited):** You may have options within the App settings to limit certain types of data collection used for AI model improvement. Please note that opting out of essential data collection may impair the functionality of some AI features.
*   **Data Deletion:** If you wish to request deletion of your personal data related to AI interactions, please contact us at [your support email].

---

**Key ethical considerations addressed:**
*   **Data Minimization:** Only collect necessary data.
*   **Transparency:** Clearly explain data collection and usage for AI.
*   **User Control:** Provide options for permissions and data usage.
*   **Anonymization:** Prioritize anonymizing data for model training.
*   **Third-Party Sharing:** Disclose if and how data is shared with cloud AI providers.
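
To make the “User Control” point concrete in code, here is a minimal sketch of persisting an opt-out flag and gating AI telemetry on it. It assumes @react-native-async-storage/async-storage; the storage key and transport function are hypothetical:

// src/utils/aiConsent.js (hypothetical module)
import AsyncStorage from '@react-native-async-storage/async-storage';

const CONSENT_KEY = 'ai_data_collection_consent'; // hypothetical storage key

export async function setAiDataConsent(enabled) {
  await AsyncStorage.setItem(CONSENT_KEY, JSON.stringify(enabled));
}

export async function getAiDataConsent() {
  const stored = await AsyncStorage.getItem(CONSENT_KEY);
  return stored !== null ? JSON.parse(stored) : false; // Default to opted out
}

// Call before sending any usage data intended for model improvement.
export async function maybeSendAiTelemetry(payload, send) {
  if (await getAiDataConsent()) {
    await send(payload); // `send` is an injected, hypothetical transport
  }
}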

6. Conclusion and Next Steps

This practical guide has equipped you with hands-on examples for advanced React Native development. By working through these, you’ve gained practical experience with:

  • The New Architecture: Understanding JSI, implementing TurboModules, and the role of Codegen and Fabric.
  • Advanced State Management: Building robust application logic with XState, including context, actions, guards, and persistence.
  • Performance Benchmarking: Identifying bottlenecks using built-in tools and optimizing lists with getItemLayout and FlashList.
  • CI/CD Automation: Setting up basic GitHub Actions workflows and understanding how tools like EAS and Fastlane integrate.
  • AI/ML Integration: Implementing on-device image classification with TF.js and understanding the client-side interaction with cloud-based AI.

To continue your advanced learning, keep these principles in mind:

  • Build, Break, Fix, Repeat: Hands-on experience is the best teacher. Don’t be afraid to experiment and debug complex scenarios.
  • Stay Updated: The React Native ecosystem evolves rapidly. Regularly consult official documentation, blogs from core contributors (like Callstack), and community discussions.
  • Deepen Native Knowledge: For truly advanced scenarios, a solid understanding of Swift/Kotlin/Objective-C, Xcode, and Android Studio will be invaluable.
  • Master Problem Solving: The ability to dissect complex issues, profile performance, and architect scalable solutions is the hallmark of an expert.

By embracing these challenges and continuously expanding your toolkit, you will solidify your expertise in React Native and build exceptional mobile applications.