PROJECT
Full Stack Developer

Cyber Mart

Cyber Mart is a production-focused software project built by Sadam Hussain using Next.js, TypeScript, Node.js, and related technologies.

A full-stack multi-vendor e-commerce marketplace built on Next.js and Node.js/Express.js, delivering high-performance shopping experiences through strategic server-side rendering, optimized MySQL schemas, and cloud-hosted CI/CD deployment pipelines. The platform supports thousands of product listings across multiple vendors, integrating third-party payment processors and inventory providers via versioned REST APIs. A polyglot data architecture combines MySQL for transactional integrity (orders, inventory) with MongoDB for flexible content storage (product descriptions, vendor profiles), deployed on AWS with Docker-based containerization.

Tech Stack

Next.js · TypeScript · Node.js · Express.js · MySQL · MongoDB · REST APIs · Material UI · Tailwind CSS · React Hook Form · SWR · Jest · AWS · Docker · CI/CD

Status

Production Ready

Type

Enterprise platform

Last Updated

April 18, 2026

Developed the marketplace frontend with Next.js, implementing a per-route rendering strategy (SSR for product pages, ISR for category listings, CSR for cart/checkout) to optimize both performance and SEO across different page types.

Built a modular Node.js/Express.js backend with layered middleware architecture—authentication, request validation, rate limiting, error handling—supporting high-traffic product browsing and transactional checkout workflows.

Designed a polyglot database architecture combining MySQL (for orders, inventory, and vendor relationships requiring ACID compliance) with MongoDB (for flexible product content and vendor profiles), choosing each database for its strengths rather than forcing one to do everything.

Optimized MySQL query performance through strategic composite indexing, query plan analysis, and schema normalization, targeting the highest-frequency queries (product search, inventory checks, order history) for measurable improvement.

Integrated third-party payment processors and inventory management providers via secure REST API endpoints with versioned contracts, webhook-based event handling, and idempotency keys for financial transaction safety.

Created responsive UI components with Material UI and Tailwind CSS, building a checkout flow with progressive disclosure to reduce cognitive load and cart abandonment.

Implemented comprehensive form validation using React Hook Form with schema-based validation, providing real-time inline feedback and reducing form submission errors across registration, checkout, and vendor onboarding flows.

Integrated SWR for client-side data fetching with stale-while-revalidate caching, enabling real-time inventory and pricing updates without full page reloads during active shopping sessions.

Configured CI/CD pipelines with automated Jest test suites, Docker image builds, and staged AWS deployments, ensuring repeatable, zero-downtime production releases.

Implemented structured error handling and logging across the full stack, with middleware-level error categorization (validation errors, upstream service failures, database errors) for systematic root-cause analysis in production.

Cyber Mart

Overview

Cyber Mart is a high-performance, multi-vendor e-commerce marketplace platform built on a modern full stack (Next.js and Node.js). The platform's core technical differentiator is its polyglot data architecture—rather than forcing all data into a single database, the system uses MySQL for transactional data requiring ACID guarantees (orders, inventory, payments) and MongoDB for flexible content where schema evolution is frequent (product descriptions, vendor profiles, search metadata).

Designed for high-traffic production workloads, the platform enables hundreds of merchants to manage inventory while providing shoppers with a low-latency, SEO-optimized discovery and checkout journey. The architecture treats performance as a full-stack concern—from database indexing strategies through API response shaping to rendering strategy selection on the frontend.

Business Context: Full-Stack E-Commerce Excellence

The e-commerce landscape requires a delicate balance between visual richness and technical speed. Every additional millisecond of load time directly impacts conversion rates, and every inconsistency in inventory data erodes user trust.

The core problem: Multi-vendor marketplaces face a unique tension—vendor catalogs grow unpredictably in both size and schema complexity (different product types have different attributes), but transactional workflows (cart, checkout, payment) demand strict consistency. A single-database approach either sacrifices flexibility for transactions or transactions for flexibility.

Stakeholder perspective:

  • Shoppers need fast discovery across thousands of SKUs, accurate real-time inventory/pricing, and a frictionless checkout experience.
  • Vendors need flexible product listing capabilities, reliable order fulfillment data, and visibility into their storefront performance.
  • Operations need data integrity across payments, inventory, and fulfillment—no overselling, no payment mismatches, no lost orders.

Constraints that shaped the architecture:

  • Product catalogs vary wildly across vendors—electronics, apparel, home goods—each with different attribute schemas that would require constant MySQL ALTER TABLE operations in a rigid relational model.
  • Payment and inventory operations require strict ACID compliance—eventual consistency is unacceptable for financial transactions.
  • SEO is a primary acquisition channel, requiring server-rendered product pages with structured data markup.
  • Third-party integrations (payment processors, inventory providers) have different reliability characteristics and response times, requiring resilient integration patterns.

What I Built

1. High-Performance Storefront with Strategic Rendering

  • Per-Route Rendering Strategy: Rather than defaulting everything to SSR, implemented a deliberate rendering strategy based on each page's requirements. Product detail pages use SSR for SEO and social sharing. Category pages use ISR with time-based revalidation since inventory changes don't need sub-minute freshness in listing views. Cart and checkout flows use client-side rendering where interactivity is the priority and SEO is irrelevant.
  • SWR-Powered Real-Time State: Integrated SWR for client-side data fetching with stale-while-revalidate semantics, ensuring that inventory counts and prices displayed to the shopper reflect current state without requiring full page reloads. SWR's deduplication prevents redundant API calls when multiple components on the same page need the same data.
  • Progressive Checkout Flow: Built a multi-step checkout with progressive disclosure—shipping, payment, and review steps load sequentially with validated transitions, reducing the cognitive load of showing all fields at once and allowing focused validation at each step.
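
To make the stale-while-revalidate behavior concrete, here is a minimal sketch of the semantics SWR provides (a hypothetical `SwrCache` helper, not the actual SWR library): serve cached data immediately when fresh, return stale data while refreshing in the background, and deduplicate concurrent requests for the same key.

```typescript
type Fetcher<T> = () => Promise<T>;

interface CacheEntry<T> {
  data?: T;
  fetchedAt: number;
  inflight?: Promise<T>;
}

class SwrCache<T> {
  private entries = new Map<string, CacheEntry<T>>();
  constructor(private staleMs: number) {}

  async get(key: string, fetcher: Fetcher<T>): Promise<T> {
    const entry = this.entries.get(key);
    const now = Date.now();

    // Fresh cache hit: serve directly, no network round trip.
    if (entry?.data !== undefined && now - entry.fetchedAt < this.staleMs) {
      return entry.data;
    }
    // Deduplicate: reuse an already-inflight request for this key.
    if (entry?.inflight) return entry.inflight;

    const inflight = fetcher().then((data) => {
      this.entries.set(key, { data, fetchedAt: Date.now() });
      return data;
    });
    this.entries.set(key, { ...entry, fetchedAt: entry?.fetchedAt ?? 0, inflight });

    // Stale-while-revalidate: if stale data exists, return it now and let
    // the background fetch refresh the cache for the next caller.
    if (entry?.data !== undefined) return entry.data;
    return inflight;
  }
}
```

In the real application SWR manages this per React hook; the sketch only illustrates why a shopper sees an instant (possibly briefly stale) inventory count that quietly corrects itself.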

2. Scalable Backend & API Architecture

  • Layered Express.js Middleware: Architected the backend with clearly separated middleware layers: authentication and session validation → request schema validation (Zod) → rate limiting → business logic → error handling. Each layer has a single responsibility, making it straightforward to add cross-cutting concerns without modifying route handlers.
  • Versioned REST API Design: Implemented URL-based API versioning (/api/v1/, /api/v2/) with clear deprecation policies. Each version maintains its own request/response schemas, and breaking changes are introduced in new versions while maintaining backward compatibility for existing integrations.
  • Third-Party Integration Resilience: Built integration wrappers for payment processors and inventory providers with circuit breaker patterns, retry logic with exponential backoff, and webhook handlers with idempotency keys. Payment operations use idempotency keys to prevent double-charging when network issues cause retried requests.
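
The retry-plus-idempotency pattern for payments can be sketched as follows. This is a simplified stand-in, not the project's actual integration code: `chargeOnce` represents a provider call, and the in-memory `processed` map simulates the provider-side idempotency store that replays a stored response for a repeated key.

```typescript
type ChargeResult = { status: "charged"; amount: number };

class PaymentClient {
  // Simulates the provider's idempotency store: repeated keys replay the result.
  private processed = new Map<string, ChargeResult>();

  constructor(
    private chargeOnce: (amount: number) => Promise<ChargeResult>,
    private maxRetries = 3,
    private baseDelayMs = 100,
  ) {}

  async charge(idempotencyKey: string, amount: number): Promise<ChargeResult> {
    // A retried request with the same key can never double-charge.
    const prior = this.processed.get(idempotencyKey);
    if (prior) return prior;

    let delay = this.baseDelayMs; // doubles each attempt (exponential backoff)
    for (let attempt = 0; ; attempt++) {
      try {
        const result = await this.chargeOnce(amount);
        this.processed.set(idempotencyKey, result);
        return result;
      } catch (err) {
        if (attempt >= this.maxRetries) throw err; // give up, surface the error
        await new Promise((r) => setTimeout(r, delay));
        delay *= 2;
      }
    }
  }
}
```

A production version would add the circuit breaker around `chargeOnce` and persist the idempotency record, but the core guarantee—retries are safe because the key deduplicates them—is the same.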

3. Polyglot Data Architecture

  • MySQL for Transactional Data: Orders, inventory, payments, and vendor financial records live in MySQL with carefully designed schemas that enforce referential integrity. Composite indexes are designed around the highest-frequency query patterns (product search by category + price range, order lookup by vendor + date range, inventory checks by SKU).
  • MongoDB for Content & Catalog Flexibility: Product descriptions, vendor profiles, and product attribute schemas live in MongoDB, leveraging its document model for heterogeneous product types. An electronics listing has entirely different attributes (RAM, storage, screen size) than an apparel listing (size, color, material)—MongoDB handles this naturally without schema migration overhead.
  • Cross-Database Consistency: Product identifiers are shared between MySQL and MongoDB, with MySQL as the source of truth for inventory and pricing, and MongoDB as the source of truth for content. The API layer joins data from both stores transparently, and writes to inventory/pricing always go through MySQL transactions.

4. Form Architecture & Data Integrity

  • Schema-Driven Validation: All user input—shopper registration, checkout, vendor onboarding, product listing creation—flows through React Hook Form with Zod schemas. Validation schemas are shared between frontend and backend, ensuring that what the client validates matches what the API enforces.
  • Real-Time Feedback: Form fields validate on blur and on change (after first submission attempt), providing immediate visual feedback. Error messages are specific and actionable ("Enter a valid 16-digit card number" not "Invalid input").
  • Vendor Onboarding Flows: Built multi-step vendor registration with document upload, business verification fields, and progressive form completion that saves draft state, allowing vendors to complete registration across multiple sessions.
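
The "define validation once, run it on both sides" idea can be illustrated with a tiny hand-rolled validator standing in for Zod (the rule shapes and field names here are hypothetical, not the project's actual schemas):

```typescript
type Rule = { field: string; check: (v: unknown) => boolean; message: string };

// One shared rule set, imported by both the React Hook Form resolver on the
// client and the request-validation middleware on the server.
const checkoutRules: Rule[] = [
  {
    field: "email",
    check: (v) => typeof v === "string" && /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(v),
    message: "Enter a valid email address",
  },
  {
    field: "cardNumber",
    check: (v) => typeof v === "string" && /^\d{16}$/.test(v),
    message: "Enter a valid 16-digit card number",
  },
];

// Because the same function runs in the browser (inline feedback) and in the
// API middleware (request rejection), the two sides can never disagree.
function validate(input: Record<string, unknown>, rules: Rule[]): string[] {
  return rules.filter((r) => !r.check(input[r.field])).map((r) => r.message);
}
```

Note the messages are specific and actionable, matching the UX principle described above.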

5. Deployment & Operations

  • Docker-Based Deployments: The full stack (Next.js frontend, Express.js API, MySQL, MongoDB) runs in Docker containers with Docker Compose for local development parity and individual container images for production deployment on AWS.
  • CI/CD Pipeline: GitHub Actions runs linting, type checking, unit tests (Jest), and integration tests on every PR. Successful merges trigger Docker image builds and staged deployments—staging environment for validation, then production with health checks before traffic routing.
  • Structured Logging & Error Categorization: Implemented middleware-level error categorization that distinguishes between client errors (validation failures), upstream errors (payment provider timeouts), and system errors (database connection issues). Each category has different alerting thresholds and response strategies.
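
The error-categorization idea can be sketched like this. The error class names and thresholds are illustrative assumptions; the point is that category drives status code, log severity, and alerting together:

```typescript
type ErrorCategory = "client" | "upstream" | "system";

class ValidationError extends Error {}
class UpstreamTimeoutError extends Error {}

interface CategorizedError {
  category: ErrorCategory;
  httpStatus: number;
  logLevel: "info" | "warn" | "error";
}

function categorize(err: unknown): CategorizedError {
  if (err instanceof ValidationError) {
    // Client mistakes are expected traffic: log quietly, never page anyone.
    return { category: "client", httpStatus: 400, logLevel: "info" };
  }
  if (err instanceof UpstreamTimeoutError) {
    // Third-party failures are retryable: warn, alert only if the rate spikes.
    return { category: "upstream", httpStatus: 502, logLevel: "warn" };
  }
  // Everything else is treated as a system fault and escalated.
  return { category: "system", httpStatus: 500, logLevel: "error" };
}
```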

Architecture Highlights

Polyglot Data Strategy: Why Two Databases

The decision to use both MySQL and MongoDB wasn't about using trendy technology—it was about matching data characteristics to database strengths.

MySQL handles transactional data: Orders, payments, and inventory require ACID transactions. When a shopper completes checkout, the system must atomically decrement inventory, create an order record, and record the payment—all within a single transaction. If any step fails, everything rolls back. This is MySQL's core strength.
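
The all-or-nothing checkout described above can be sketched with an in-memory "database" standing in for MySQL transactions (names and shapes are illustrative; real code would issue BEGIN/COMMIT/ROLLBACK through the MySQL driver):

```typescript
interface Store {
  inventory: Map<string, number>;
  orders: string[];
  payments: string[];
}

function checkout(store: Store, orderId: string, sku: string, qty: number): boolean {
  // Snapshot state so any failure can roll everything back,
  // mimicking BEGIN ... COMMIT / ROLLBACK semantics.
  const snapshot: Store = {
    inventory: new Map(store.inventory),
    orders: [...store.orders],
    payments: [...store.payments],
  };
  try {
    const stock = store.inventory.get(sku) ?? 0;
    if (stock < qty) throw new Error("insufficient stock"); // final stock check
    store.inventory.set(sku, stock - qty); // decrement inventory
    store.orders.push(orderId);            // create order record
    store.payments.push(orderId);          // record payment
    return true; // COMMIT
  } catch {
    // ROLLBACK: restore the snapshot so no partial state survives.
    store.inventory = snapshot.inventory;
    store.orders = snapshot.orders;
    store.payments = snapshot.payments;
    return false;
  }
}
```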

MongoDB handles catalog content: Product listings from different vendors have wildly different attribute schemas. A laptop listing includes specs (CPU, RAM, storage, display) while a t-shirt listing includes fit, fabric, and color options. In MySQL, this would require either an extremely wide table (mostly NULL columns) or an EAV pattern that makes queries complex and slow. MongoDB's document model stores each product's attributes naturally as nested JSON, and schema validation at the application layer ensures data quality without rigid database constraints.

The join layer: The API layer handles cross-database reads by fetching transactional data from MySQL and content data from MongoDB, merging them into unified response objects. Product IDs serve as the correlation key. This approach keeps each database focused on what it does best.
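
A minimal sketch of that join layer, with illustrative (not actual) row and document shapes—MySQL stays authoritative for price and stock, MongoDB for title and attributes, and the product id correlates the two:

```typescript
interface MySqlProductRow {
  productId: string;
  priceCents: number;
  stock: number;
}

interface MongoProductDoc {
  productId: string;
  title: string;
  attributes: Record<string, unknown>; // heterogeneous per product type
}

interface ProductResponse {
  productId: string;
  title: string;
  priceCents: number;
  inStock: boolean;
  attributes: Record<string, unknown>;
}

function joinProducts(rows: MySqlProductRow[], docs: MongoProductDoc[]): ProductResponse[] {
  const byId = new Map(docs.map((d) => [d.productId, d]));
  return rows.flatMap((row) => {
    const doc = byId.get(row.productId);
    if (!doc) return []; // content missing: drop rather than serve a half-formed product
    return [{
      productId: row.productId,
      title: doc.title,            // MongoDB owns content
      priceCents: row.priceCents,  // MySQL owns pricing
      inStock: row.stock > 0,      // MySQL owns inventory
      attributes: doc.attributes,
    }];
  });
}
```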

Express.js Middleware Architecture

The backend follows a strict middleware pipeline where each layer has a single responsibility:

  1. CORS & Security Headers — Applied globally, configures allowed origins and security-relevant headers.
  2. Authentication — Validates session tokens, attaches user context to the request.
  3. Request Validation — Zod schemas validate request body, params, and query strings before the route handler sees the request.
  4. Rate Limiting — Per-endpoint rate limits prevent abuse of expensive operations (search, checkout).
  5. Route Handler — Business logic, now guaranteed to receive validated input with authenticated context.
  6. Error Handler — Catches all errors, categorizes them, logs with appropriate severity, and returns structured error responses.

This pipeline means route handlers focus purely on business logic—they never handle auth, validation, or error formatting directly.
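
The pipeline shape can be sketched without Express itself—a compose function where each middleware either enriches the request or throws, and a terminal handler turns any throw into a structured response (the names and the flat error mapping are simplifications of the real layered setup):

```typescript
interface Req { token?: string; body: Record<string, unknown>; userId?: string }
interface Res { status: number; body: unknown }

type Middleware = (req: Req) => void;
type Handler = (req: Req) => Res;

function pipeline(middlewares: Middleware[], handler: Handler) {
  return (req: Req): Res => {
    try {
      for (const mw of middlewares) mw(req); // auth → validation → rate limit
      return handler(req); // business logic sees only validated, authed input
    } catch (err) {
      // Terminal error handler: shape the failure into a structured response.
      return { status: 400, body: { error: (err as Error).message } };
    }
  };
}

const authenticate: Middleware = (req) => {
  if (req.token !== "valid-token") throw new Error("unauthenticated");
  req.userId = "user-1"; // attach user context for downstream layers
};

const validateBody: Middleware = (req) => {
  if (typeof req.body.sku !== "string") throw new Error("sku is required");
};

const getProduct = pipeline([authenticate, validateBody], (req) => ({
  status: 200,
  body: { sku: req.body.sku, requestedBy: req.userId },
}));
```

Note the route handler contains zero auth or validation logic—exactly the property the pipeline is meant to guarantee.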

Performance as a Full-Stack Concern

Performance optimization in Cyber Mart spans every layer:

  • Database: Composite indexes designed for actual query patterns (not theoretical ones), analyzed with EXPLAIN to verify index usage.
  • API: Response shaping ensures the API returns exactly the fields the frontend needs—no over-fetching that wastes bandwidth and parsing time.
  • Rendering: Per-route rendering strategy means each page type uses the approach that best serves its performance requirements.
  • Client: SWR's stale-while-revalidate pattern shows cached data immediately while refreshing in the background, making the UI feel instant even when data is being updated.

Deep Dive: Optimizing MySQL Product Search

The most impactful technical challenge was optimizing MySQL query performance for product search across a growing multi-vendor catalog.

The Problem: Product search queries filter across multiple dimensions simultaneously—category, price range, vendor, availability, ratings. The initial schema used single-column indexes, which forced MySQL to choose one index per query and scan the remaining filters row-by-row. As the catalog grew, search response times degraded noticeably.

The Analysis Process: Rather than guessing at index strategies, I analyzed actual query patterns from application logs—which filter combinations were most common, which queries took the longest, and which pages had the most traffic. The 80/20 pattern was clear: category + price range filtering accounted for the majority of search queries, followed by vendor-specific listing views.

The Optimization Approach:

  1. Composite Indexes Aligned to Query Patterns: Created composite indexes that match the most frequent WHERE clause combinations. A composite index on (category_id, price, availability_status) serves the primary search pattern far more efficiently than three separate single-column indexes, because MySQL can use the entire index for a "category X, price between Y and Z, in stock" query in a single B-tree traversal.

  2. Covering Indexes for High-Frequency Reads: For the product listing view (which only needs title, price, thumbnail, vendor name), created covering indexes that include all needed columns. This allows MySQL to serve the query entirely from the index without touching the table data pages—eliminating random disk reads.

  3. Schema Normalization for Write Integrity, Denormalization for Read Performance: Kept the canonical schema normalized for write consistency, but added carefully managed denormalized columns (like a cached vendor name on the product table) for read-heavy views where the join cost was measurable. Updates to the denormalized columns are propagated at the application level whenever a vendor profile changes.

  4. Query Plan Verification: Every index change was validated with EXPLAIN ANALYZE to confirm that MySQL's query planner actually uses the new index and that the estimated row scan count dropped. Indexes that the planner ignored (due to low cardinality or unfavorable statistics) were removed to avoid write overhead.
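
The composite-index reasoning above follows MySQL's leftmost-prefix rule: an index on (category_id, price, availability_status) is usable only for a contiguous prefix of its columns, and a range condition ends index traversal for the columns after it. This toy "planner" applies just that rule—it is a teaching aid under those stated assumptions, not MySQL's actual planner (which also has features like index condition pushdown):

```typescript
type Condition = "eq" | "range";

// Given an index column order and the WHERE-clause conditions, return the
// columns the index can actually be used for (leftmost-prefix rule).
function usableIndexPrefix(
  indexColumns: string[],
  filters: Map<string, Condition>,
): string[] {
  const used: string[] = [];
  for (const col of indexColumns) {
    const cond = filters.get(col);
    if (!cond) break;        // gap in the prefix: later columns can't be used
    used.push(col);
    if (cond === "range") break; // a range condition ends index usage here
  }
  return used;
}
```

This is why a query filtering only on price cannot use the composite index at all, while `category = X AND price BETWEEN Y AND Z` traverses it efficiently—and why index choices were always confirmed with EXPLAIN ANALYZE rather than assumed.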

Why this matters: Database performance optimization is often treated as DBA work separate from application development. In practice, the most impactful optimizations come from understanding how the application actually queries data—what the frontend needs, how the API shapes requests, and what the typical user journey looks like—then designing indexes and schemas that serve those patterns specifically.

Key Challenges & Solutions

  • Challenge: Product catalogs from different vendors had fundamentally different attribute schemas, making a single relational model either too rigid or too sparse. Approach: Adopted a polyglot data architecture—MySQL for transactional data requiring ACID compliance (orders, payments, inventory), MongoDB for flexible product content where schema evolution is frequent and attributes vary by product type. Why: Forcing heterogeneous product attributes into a relational schema leads to either wide tables with mostly NULL columns or EAV patterns that make queries complex and slow. MongoDB's document model handles schema flexibility naturally, while MySQL's transaction support is non-negotiable for financial data.

  • Challenge: Inventory and pricing data displayed to shoppers could become stale between page load and checkout, leading to frustration (item shows "in stock" but fails at checkout). Approach: Implemented SWR with short revalidation intervals for inventory-critical data, and added a final stock check within the checkout transaction that revalidates availability before processing payment. Why: True real-time inventory display (WebSocket-pushed updates) would be over-engineered for an e-commerce catalog. SWR's stale-while-revalidate pattern provides "fresh enough" data for browsing, while the transactional check at checkout ensures correctness where it matters most.

  • Challenge: Third-party payment processor and inventory provider APIs had different reliability characteristics—timeout rates, error formats, retry semantics. Approach: Built provider-specific adapter layers behind a common interface, with per-provider circuit breakers, retry configurations, and error normalization. Payment operations include idempotency keys to prevent double-processing on retries. Why: Wrapping all third-party APIs in the same generic error handler ignores that a payment timeout requires different handling than an inventory sync failure. Provider-specific adapters can implement the nuanced retry and fallback logic each integration requires.

  • Challenge: Search response times degraded as the product catalog grew, with complex filter queries scanning more rows than acceptable. Approach: Analyzed actual query patterns from application logs, designed composite indexes aligned to the highest-frequency filter combinations, and verified each optimization with EXPLAIN ANALYZE to confirm index usage. Why: Index optimization based on theoretical query patterns often misses. The most impactful optimizations come from instrumenting actual usage and targeting the queries that real users execute most frequently.

Outcomes

  • Search Performance: MySQL query optimization through composite indexing and schema redesign produced measurable improvements in product search and listing response times, directly impacting the shopping experience during high-traffic periods.
  • Data Integrity: The polyglot architecture maintained strict transactional consistency for financial operations (zero payment mismatches, zero overselling) while providing the schema flexibility vendors need for diverse product catalogs.
  • SEO Performance: Maintained strong Lighthouse performance scores across all critical storefront and category pages through the strategic per-route rendering approach.
  • Form Reliability: Schema-driven validation with React Hook Form and Zod significantly reduced form submission errors across checkout, registration, and vendor onboarding flows.
  • Deployment Confidence: The CI/CD pipeline with automated testing and staged Docker deployments enabled reliable, repeatable production releases with minimal manual intervention.

Engineering Takeaways

Cyber Mart underscored the critical intersection between data modeling and user experience. In e-commerce, the strongest gains don't come from the UI alone—they come from solving the full-stack problem: from how the database is indexed, to how the API shapes responses, to how the frontend renders and caches data. Performance is additive across layers, and optimizing any single layer in isolation misses the compounding effect.

Patterns I'd reuse:

  • The polyglot data strategy (relational for transactions, document store for flexible content) with clear ownership boundaries. This pattern applies to any domain where data has fundamentally different consistency and flexibility requirements.
  • Schema-driven validation shared between frontend and backend. Defining validation once and using it everywhere eliminates the class of bugs where client-side and server-side validation disagree.
  • Per-route rendering strategy in Next.js. The discipline of choosing SSR/ISR/CSR based on each page's actual requirements (SEO importance, data freshness needs, interactivity level) rather than defaulting everything to one approach.

What I'd reconsider:

  • The cross-database join layer adds complexity to data fetching. For simpler product catalogs, a single PostgreSQL database with JSONB columns for flexible attributes might provide sufficient flexibility without the operational overhead of two databases.
  • SWR is excellent for read-heavy caching but doesn't provide a strong pattern for optimistic mutations. A future iteration might benefit from a more opinionated server state library for write-heavy vendor management flows.

Trade-offs acknowledged:

  • Chose two databases (MySQL + MongoDB) over one, accepting operational complexity in exchange for using each database where it naturally fits.
  • Chose SSR for product pages (server cost per request) over static generation (build-time cost), accepting higher server load in exchange for always-fresh product data without redeployment.
  • Chose React Hook Form + Zod over a custom form solution, accepting the learning curve for team members in exchange for type-safe, schema-driven validation that reduces bug surface area.