January 02, 2026
Last updated: January 02, 2026

Offline Sync and Conflict Resolution in React Native

Deep dive into implementing offline sync and conflict resolution for a React Native field app, covering sync queues, field-level merge strategies, version tracking, and handling intermittent connectivity.

Tags

React Native · Offline · Sync · Mobile
7 min read

TL;DR

Building an offline-first React Native app for field auditors meant designing a sync system that could queue thousands of records, merge conflicting edits at the field level, defer media uploads, and recover gracefully from intermittent connectivity -- all without losing a single record over months of production use.

The Challenge

Field auditors spend their days in warehouses, construction sites, and rural facilities -- places where cellular signal is unreliable at best and nonexistent at worst. The existing workflow involved paper forms that got transcribed into a web app back at the office, leading to data entry errors, lost forms, and days of delay between an audit and its report.

The client wanted a React Native app that auditors could use directly on-site. The catch: the app had to work fully offline. Auditors might spend an entire day without connectivity, filling out dozens of inspection forms with photos, timestamps, and GPS coordinates. When they got back to an area with signal -- or connected to office Wi-Fi -- everything needed to sync seamlessly.

The real complexity was not just offline storage. Multiple auditors sometimes worked the same site. Two people could edit the same facility record while both were offline. When they synced, their changes needed to merge intelligently rather than one person's work silently overwriting the other's.

The Architecture

Local Persistence Layer

I used WatermelonDB as the local database. It is built on SQLite under the hood but provides a reactive data layer that works naturally with React components. The key advantage over raw AsyncStorage or SQLite was WatermelonDB's built-in sync primitives -- it tracks which records have been created, updated, or deleted locally since the last sync.

typescript
// Schema definition for audit records
const auditSchema = appSchema({
  version: 1,
  tables: [
    tableSchema({
      name: 'inspections',
      columns: [
        { name: 'facility_id', type: 'string' },
        { name: 'auditor_id', type: 'string' },
        { name: 'status', type: 'string' },
        { name: 'findings', type: 'string' }, // JSON string
        { name: 'photos_json', type: 'string' }, // local file URIs
        { name: 'gps_latitude', type: 'number', isOptional: true },
        { name: 'gps_longitude', type: 'number', isOptional: true },
        { name: 'local_version', type: 'number' },
        { name: 'server_version', type: 'number' },
        { name: 'last_synced_at', type: 'number', isOptional: true },
        { name: 'created_at', type: 'number' },
        { name: 'updated_at', type: 'number' },
      ],
    }),
    tableSchema({
      name: 'sync_queue',
      columns: [
        { name: 'record_id', type: 'string' },
        { name: 'table_name', type: 'string' },
        { name: 'operation', type: 'string' }, // create, update, delete
        { name: 'payload', type: 'string' }, // JSON
        { name: 'priority', type: 'number' },
        { name: 'attempts', type: 'number' },
        { name: 'last_attempt_at', type: 'number', isOptional: true },
        { name: 'error', type: 'string', isOptional: true },
        { name: 'created_at', type: 'number' },
      ],
    }),
  ],
});

Queue-Based Sync Engine

Every local change -- creating, updating, or deleting a record -- produced an entry in the sync queue. The sync engine processed the queue in priority order when connectivity was available.

typescript
class SyncEngine {
  private isRunning = false;
  private retryDelays = [1000, 5000, 15000, 60000, 300000]; // escalating backoff delays (ms)
 
  async enqueue(operation: SyncOperation): Promise<void> {
    await database.write(async () => {
      await database.get<SyncQueueItem>('sync_queue').create((item) => {
        item.recordId = operation.recordId;
        item.tableName = operation.tableName;
        item.operation = operation.type;
        item.payload = JSON.stringify(operation.payload);
        item.priority = this.calculatePriority(operation);
        item.attempts = 0;
        item.createdAt = Date.now();
      });
    });
 
    // Attempt immediate sync if online
    if (await this.isConnected()) {
      this.processQueue();
    }
  }
 
  private calculatePriority(op: SyncOperation): number {
    // Critical data syncs first
    if (op.tableName === 'inspections' && op.type === 'create') return 1;
    if (op.tableName === 'inspections' && op.type === 'update') return 2;
    if (op.tableName === 'media_uploads') return 5; // media is deferred
    return 3;
  }
 
  async processQueue(): Promise<void> {
    if (this.isRunning) return;
    this.isRunning = true;
 
    try {
      const pendingItems = await database
        .get<SyncQueueItem>('sync_queue')
        .query(Q.sortBy('priority', Q.asc), Q.sortBy('created_at', Q.asc))
        .fetch();
 
      for (const item of pendingItems) {
        if (!(await this.isConnected())) break;
 
        try {
          await this.processItem(item);
          await item.destroyPermanently(); // remove from queue on success
        } catch (error) {
          await this.handleItemFailure(item, error);
        }
      }
    } finally {
      this.isRunning = false;
    }
  }
 
  private async handleItemFailure(
    item: SyncQueueItem,
    error: unknown
  ): Promise<void> {
    const attempts = item.attempts + 1;
    const maxRetries = this.retryDelays.length;
 
    if (attempts >= maxRetries) {
      // Move to dead letter queue for manual review
      await this.moveToDeadLetter(item, error);
      return;
    }
 
    await database.write(async () => {
      await item.update((i) => {
        i.attempts = attempts;
        i.lastAttemptAt = Date.now();
        i.error = error instanceof Error ? error.message : String(error);
      });
    });
 
    // Re-run the queue after this attempt's backoff delay
    setTimeout(() => this.processQueue(), this.retryDelays[attempts - 1]);
  }
}

Conflict Resolution with Field-Level Merge

This was the hardest problem. When two auditors edit the same facility record while offline, a naive "last write wins" strategy means one person's work disappears. I implemented field-level merge resolution that compares changes at the individual field level rather than the record level.

typescript
// deepEqual is a structural-equality helper (e.g. lodash's isEqual)
interface FieldChange {
  field: string;
  oldValue: unknown;
  newValue: unknown;
  timestamp: number;
  auditorId: string;
}
 
interface FieldConflict {
  field: string;
  baseValue: unknown;
  localValue: unknown;
  remoteValue: unknown;
  localTimestamp: number;
  remoteTimestamp: number;
}
 
interface MergeResult {
  merged: Record<string, unknown>;
  conflicts: FieldConflict[];
  autoResolved: FieldChange[];
}
 
function mergeRecords(
  base: Record<string, unknown>, // last synced version
  local: Record<string, unknown>, // local changes
  remote: Record<string, unknown>, // server version
  localTimestamp: number,
  remoteTimestamp: number
): MergeResult {
  const merged: Record<string, unknown> = { ...base };
  const conflicts: FieldConflict[] = [];
  const autoResolved: FieldChange[] = [];
 
  const allFields = new Set([
    ...Object.keys(local),
    ...Object.keys(remote),
  ]);
 
  for (const field of allFields) {
    const baseVal = base[field];
    const localVal = local[field];
    const remoteVal = remote[field];
 
    const localChanged = !deepEqual(baseVal, localVal);
    const remoteChanged = !deepEqual(baseVal, remoteVal);
 
    if (localChanged && !remoteChanged) {
      // Only local changed -- take local
      merged[field] = localVal;
      autoResolved.push({
        field,
        oldValue: baseVal,
        newValue: localVal,
        timestamp: localTimestamp,
        auditorId: 'local',
      });
    } else if (!localChanged && remoteChanged) {
      // Only remote changed -- take remote
      merged[field] = remoteVal;
      autoResolved.push({
        field,
        oldValue: baseVal,
        newValue: remoteVal,
        timestamp: remoteTimestamp,
        auditorId: 'remote',
      });
    } else if (localChanged && remoteChanged) {
      if (deepEqual(localVal, remoteVal)) {
        // Both changed to the same value -- no conflict
        merged[field] = localVal;
      } else {
        // Real conflict -- flag for manual review
        conflicts.push({
          field,
          baseValue: baseVal,
          localValue: localVal,
          remoteValue: remoteVal,
          localTimestamp,
          remoteTimestamp,
        });
        // Default to remote but mark as conflicted
        merged[field] = remoteVal;
      }
    }
    // Neither changed: keep base value (already in merged)
  }
 
  return { merged, conflicts, autoResolved };
}

When conflicts were detected, the app queued them in a conflict resolution UI. Auditors saw a side-by-side comparison of their value vs. the remote value and chose which to keep. In practice, true field-level conflicts were rare -- most of the time, two auditors edited different fields on the same record, and the merge resolved automatically.
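The resolution step itself is not shown above. As a minimal sketch, the auditor's per-field choices can be overlaid on the auto-merged record; `applyResolutions` and `ConflictChoice` are illustrative names, not the production API.

```typescript
// Same shape as the FieldConflict objects produced by mergeRecords,
// repeated here so the sketch is self-contained.
interface FieldConflict {
  field: string;
  baseValue: unknown;
  localValue: unknown;
  remoteValue: unknown;
  localTimestamp: number;
  remoteTimestamp: number;
}

// The auditor's pick for one conflicted field in the side-by-side UI.
type ConflictChoice = { field: string; keep: 'local' | 'remote' };

// Overlay the auditor's choices onto the auto-merged record without
// mutating the input.
function applyResolutions(
  merged: Record<string, unknown>,
  conflicts: FieldConflict[],
  choices: ConflictChoice[]
): Record<string, unknown> {
  const result = { ...merged };
  for (const choice of choices) {
    const conflict = conflicts.find((c) => c.field === choice.field);
    if (!conflict) continue; // ignore choices for fields that never conflicted
    result[choice.field] =
      choice.keep === 'local' ? conflict.localValue : conflict.remoteValue;
  }
  return result;
}
```

Keeping this step pure made it easy to preview the final record in the UI before committing it back to the local database.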

Connectivity Detection

React Native's NetInfo reports whether the device has a network connection, but that does not mean the API is reachable. Captive portals, DNS failures, and server outages all show as "connected" to NetInfo. I added an active health check layer.

typescript
class ConnectivityManager {
  private isApiReachable = false;
  private checkInterval: NodeJS.Timeout | null = null;
 
  start() {
    // Listen for NetInfo changes
    NetInfo.addEventListener((state) => {
      if (state.isConnected) {
        this.startHealthChecks();
      } else {
        this.stopHealthChecks();
        this.isApiReachable = false;
      }
    });
  }
 
  private startHealthChecks() {
    // Clear any existing timer so repeated "connected" events don't stack intervals
    this.stopHealthChecks();
    // Check immediately, then every 30 seconds
    this.checkApiHealth();
    this.checkInterval = setInterval(() => this.checkApiHealth(), 30000);
  }
 
  private async checkApiHealth(): Promise<void> {
    try {
      const controller = new AbortController();
      const timeout = setTimeout(() => controller.abort(), 5000);
 
      const response = await fetch(`${API_BASE}/health`, {
        signal: controller.signal,
      });
      clearTimeout(timeout);
 
      const wasReachable = this.isApiReachable;
      this.isApiReachable = response.ok;
 
      // If we just came back online, trigger sync
      if (!wasReachable && this.isApiReachable) {
        syncEngine.processQueue();
      }
    } catch {
      this.isApiReachable = false;
    }
  }
 
  async isConnected(): Promise<boolean> {
    return this.isApiReachable;
  }
 
  private stopHealthChecks() {
    if (this.checkInterval) {
      clearInterval(this.checkInterval);
      this.checkInterval = null;
    }
  }
}

Deferred Media Upload

Photos were the largest data payload. A single audit could include dozens of high-resolution photos, and uploading them over a weak cellular connection would block the sync queue for text data that was far more critical.

I separated media uploads into their own low-priority queue. Text data synced first. Photo records included a local file URI, and a background upload task processed photos independently. The server accepted audit records with "pending" photo references and updated them when the uploads completed.

typescript
async function enqueueMediaUpload(
  inspectionId: string,
  localUri: string
): Promise<void> {
  // Create a placeholder record that references the local file
  const mediaRecord = await database.write(async () => {
    return database.get<MediaUpload>('media_uploads').create((m) => {
      m.inspectionId = inspectionId;
      m.localUri = localUri;
      m.status = 'pending';
      m.remoteUrl = null;
      m.createdAt = Date.now();
    });
  });
 
  // Enqueue with low priority -- text data syncs first
  await syncEngine.enqueue({
    recordId: mediaRecord.id,
    tableName: 'media_uploads',
    type: 'create',
    payload: { inspectionId, localUri },
  });
}
 
async function uploadMedia(item: SyncQueueItem): Promise<void> {
  const payload = JSON.parse(item.payload);
  const fileInfo = await FileSystem.getInfoAsync(payload.localUri);
 
  if (!fileInfo.exists) {
    throw new Error(`Local file not found: ${payload.localUri}`);
  }
 
  // Use chunked upload for large files over slow connections
  const remoteUrl = await chunkedUpload(payload.localUri, {
    inspectionId: payload.inspectionId,
    chunkSize: 256 * 1024, // 256KB chunks
    onProgress: (progress) => {
      updateUploadProgress(item.recordId, progress);
    },
  });
 
  // Update the local record with the remote URL
  await database.write(async () => {
    const mediaRecord = await database
      .get<MediaUpload>('media_uploads')
      .find(item.recordId);
    await mediaRecord.update((m) => {
      m.remoteUrl = remoteUrl;
      m.status = 'uploaded';
    });
  });
}
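The `chunkedUpload` helper referenced above is not shown in full. Here is a sketch of just its chunking logic with the transport injected, so none of the network or file-system details are prescribed; `planChunks` and `uploadInChunks` are illustrative names.

```typescript
// One contiguous byte range of the file to send.
interface Chunk {
  offset: number;
  length: number;
}

// Split a file of fileSize bytes into fixed-size chunks (the last may be short).
function planChunks(fileSize: number, chunkSize: number): Chunk[] {
  const chunks: Chunk[] = [];
  for (let offset = 0; offset < fileSize; offset += chunkSize) {
    chunks.push({ offset, length: Math.min(chunkSize, fileSize - offset) });
  }
  return chunks;
}

// Send chunks sequentially; a failed chunk rejects, and the sync queue's
// retry machinery picks the upload back up later.
async function uploadInChunks(
  fileSize: number,
  chunkSize: number,
  sendChunk: (chunk: Chunk) => Promise<void>, // e.g. a fetch PUT with a byte range
  onProgress?: (fraction: number) => void
): Promise<void> {
  let sent = 0;
  for (const chunk of planChunks(fileSize, chunkSize)) {
    await sendChunk(chunk);
    sent += chunk.length;
    onProgress?.(sent / fileSize);
  }
}
```

Small chunks trade request overhead for resumability: on a flaky connection, losing one 256KB chunk is far cheaper than restarting a multi-megabyte upload.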

Key Decisions & Trade-offs

WatermelonDB over raw SQLite: WatermelonDB added a dependency and learning curve, but its built-in change tracking and lazy loading of records made the sync engine significantly simpler. The trade-off was vendor lock-in to WatermelonDB's sync protocol, which I partially mitigated by keeping the conflict resolution logic in a separate, database-agnostic module.

Field-level merge over record-level: Field-level conflict detection required storing a "base" snapshot of each record at its last sync point, which roughly doubled storage for synced records. The alternative -- record-level last-write-wins -- would have been simpler but unacceptable for the use case. Auditors would lose work, and in an audit context, lost findings could have compliance implications.
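The base-snapshot bookkeeping can be sketched as follows. `SyncedRecord`, `commitSync`, and `isDirty` are illustrative names, and real code would persist the snapshot in WatermelonDB rather than hold it in memory.

```typescript
interface SyncedRecord {
  current: Record<string, unknown>;
  base: Record<string, unknown> | null; // state at last successful sync
}

// After the server acknowledges a sync, the acknowledged state becomes both
// the current value and the new base (common ancestor) for future merges.
function commitSync(
  record: SyncedRecord,
  acknowledged: Record<string, unknown>
): SyncedRecord {
  return { current: { ...acknowledged }, base: { ...acknowledged } };
}

// A record is "dirty" when current has drifted from base since the last sync.
function isDirty(record: SyncedRecord): boolean {
  if (record.base === null) return true; // never synced
  return JSON.stringify(record.current) !== JSON.stringify(record.base);
}
```

This is where the storage doubling comes from: every synced record carries both its live state and its last-acknowledged snapshot.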

Deferred media over inline upload: Separating media into a low-priority queue meant that synced inspection records could reference photos that had not uploaded yet. The server and web dashboard had to handle "photo pending" states gracefully. The upside was that critical text data synced in seconds rather than waiting behind megabytes of photo uploads.

Active health checks over NetInfo alone: Pinging the API every 30 seconds uses bandwidth and battery. I tuned the interval based on field testing -- 30 seconds was frequent enough to catch connectivity windows without noticeably impacting battery life. In airplane mode or confirmed offline states, health checks paused entirely.

Results & Outcomes

The sync system processed thousands of records per month across the auditor team without data loss. The field-level merge algorithm auto-resolved the vast majority of concurrent edits, with only a small fraction requiring manual conflict resolution by auditors.

Auditors reported that the app felt as responsive offline as online -- the local-first architecture meant there was no loading spinner for data reads, and writes were instant because they hit the local database before queueing for sync. The previous paper-based workflow took days to produce reports; with the app, reports were available as soon as the auditor synced, typically the same day.

The deferred media upload was particularly well-received. Auditors could take dozens of photos during an inspection without worrying about upload time or signal strength. Photos uploaded in the background whenever connectivity was available, and the progress was visible in the app's sync status panel.

What I'd Do Differently

Use a CRDT-based approach for certain field types. The field-level merge works well for scalar values, but for list-type data (like audit findings, which are arrays of objects), the merge logic became complex. A CRDT (Conflict-free Replicated Data Type) for list fields would have handled concurrent additions and removals more elegantly.
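As a rough illustration of the idea, a two-phase set (one of the simplest list CRDTs) merges concurrent adds and removes by set union, so two offline replicas never conflict. Unlike a full OR-Set it cannot re-add a removed element, so treat this as a sketch of the concept rather than a drop-in design.

```typescript
// Each finding carries a unique id; removals leave tombstones.
interface TwoPhaseSet<T> {
  added: Map<string, T>; // id -> element
  removed: Set<string>; // tombstoned ids
}

// Merging two replicas is just set union on both components --
// commutative, associative, and idempotent, so sync order does not matter.
function mergeSets<T>(a: TwoPhaseSet<T>, b: TwoPhaseSet<T>): TwoPhaseSet<T> {
  return {
    added: new Map([...a.added, ...b.added]),
    removed: new Set([...a.removed, ...b.removed]),
  };
}

// Effective contents: everything added that has not been tombstoned.
function setValues<T>(s: TwoPhaseSet<T>): T[] {
  return [...s.added.entries()]
    .filter(([id]) => !s.removed.has(id))
    .map(([, v]) => v);
}
```

With this shape, one auditor adding a finding while another deletes a different one merges cleanly with no manual resolution step.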

Build the conflict resolution UI earlier. I initially treated conflicts as an edge case and built a minimal resolution screen. In practice, auditors needed clear context about when and where each version was created to make informed merge decisions. A richer UI with timestamps, auditor names, and GPS context would have helped from the start.

Implement sync status visibility from day one. Auditors wanted to know exactly what had synced and what was still pending. I added a sync status panel later, but it should have been a first-class feature. When your app works offline, users need confidence that their data will not be lost.

Test with realistic network conditions earlier. I tested offline and online states but initially underestimated the "intermittent" scenario -- where connectivity flickers on and off during a sync. Adding network condition simulation (using tools like Charles Proxy's throttling) to my test suite earlier would have caught edge cases sooner.
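A lightweight way to simulate that flicker in unit tests is to wrap the transport in a shim that drops every nth call. This sketch is illustrative, not from the production test suite.

```typescript
// Predicate: drop every nth call (1-based call index).
function dropEveryNth(n: number): (call: number) => boolean {
  return (call) => call % n === 0;
}

// Wrap any async send function so selected calls reject, simulating
// connectivity dropping mid-sync. The sync engine under test should
// retry and eventually drain the queue.
function makeFlaky<T, R>(
  send: (arg: T) => Promise<R>,
  shouldDrop: (call: number) => boolean
): (arg: T) => Promise<R> {
  let calls = 0;
  return (arg: T) => {
    calls += 1;
    if (shouldDrop(calls)) {
      return Promise.reject(new Error('simulated network drop'));
    }
    return send(arg);
  };
}
```

Feeding a `makeFlaky`-wrapped transport into the sync engine exercises exactly the intermittent scenario that slipped through my early testing.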

FAQ

What sync strategy works best for offline-first mobile apps?

A queue-based approach where each local change is logged as a sync operation works best. Operations are processed in order when connectivity returns, with exponential backoff for failures. This ensures no changes are lost and the sync order matches the user's intent. In our implementation, the queue was prioritized -- critical data like inspection records synced before media uploads. The queue persists in the local database, so even if the app is killed and restarted, pending operations are not lost.

How do you resolve conflicts when two users edit the same record offline?

We implemented field-level merge resolution: non-conflicting field changes are merged automatically, while conflicting field edits are flagged for manual review. Each field carries a timestamp, and the merge algorithm compares field-by-field rather than replacing entire records. The algorithm works against a "base" snapshot -- the record state at the last successful sync. By comparing both the local and remote changes against this common ancestor, we can determine which fields actually changed and handle them independently.

How do you detect intermittent connectivity in React Native?

NetInfo provides basic online/offline status, but we added an active health check that pings the API endpoint periodically. This catches scenarios where the device reports connectivity but the API is unreachable due to captive portals, DNS issues, or server outages. The health check runs every 30 seconds when NetInfo reports connectivity and pauses entirely when the device is confirmed offline. When the health check transitions from unreachable to reachable, it triggers an immediate sync queue processing cycle.

Article Author

Sadam Hussain, Senior Full Stack Developer with over 7 years of experience building React, Next.js, Node.js, TypeScript, and AI-powered web platforms.
