Performance Optimization for an Image-Heavy Rental Platform
How we optimized Core Web Vitals for an image-heavy rental platform, reducing LCP from 4.2s to 1.8s through responsive images, lazy loading, blur placeholders, and CDN-based image transformation.
TL;DR
I optimized the performance of Chalet Retreat, a property rental platform with tens of thousands of property photos, where page load times were frustrating users and hurting search rankings. By implementing the Next.js Image component with responsive srcsets, lazy loading with blur placeholders, CDN-based image transformation, and tackling Google Maps rendering performance, I brought Core Web Vitals into the green across the site. The result was a noticeably faster experience that improved both user engagement and SEO visibility.
The Challenge
Chalet Retreat is a vacation rental platform where high-quality photography is the primary selling tool. Every property listing has 15-30 photos — hero images, room shots, amenity details, neighborhood views. The search results page alone could display 20+ property cards, each with a cover image. The property detail page loaded a full gallery. And the map view rendered dozens of markers with image previews on hover.
When I inherited the codebase, the performance situation was rough. The search results page was loading unoptimized JPEG files at their original resolution — often 4000x3000 pixel images rendered in 400x300 containers. The property detail page loaded the entire gallery eagerly, regardless of whether the user scrolled to see it. Google Maps was initializing with a full marker cluster on page load, blocking the main thread.
The impact was tangible. Users on mobile connections waited several seconds for the search page to become usable. Google's Core Web Vitals assessment flagged the site for poor LCP and CLS. The client reported that their search rankings had been declining, and competitors with faster sites were capturing traffic they used to own.
The constraint was that I could not simply reduce image quality or remove photos. For a rental platform, image quality is directly tied to booking conversions. The optimization had to be invisible — same visual quality, dramatically faster delivery.
The Architecture
Next.js Image Component and Responsive Images
The first and highest-impact change was replacing raw <img> tags with the Next.js Image component. But this was not a simple find-and-replace — the existing images had no consistent aspect ratios, and many were used in CSS Grid layouts that required specific sizing behavior.
// Before: Raw img tag, full-resolution image
&lt;img src={property.coverImage} alt={property.title} className="cover-image" /&gt;

// After: Next.js Image with responsive sizing
&lt;Image
  src={property.coverImage}
  alt={property.title}
  width={800}
  height={600}
  sizes="(max-width: 640px) 100vw, (max-width: 1024px) 50vw, 33vw"
  quality={75}
  placeholder="blur"
  blurDataURL={property.coverImageBlur}
  className="cover-image"
/&gt;

The sizes attribute is critical and often overlooked. Without it, the browser assumes the image will be displayed at the full viewport width and downloads the largest srcset variant. By specifying that the image is 33% of the viewport on desktop, 50% on tablet, and 100% on mobile, the browser selects the appropriate size — often downloading an image one-third the size of what it would have chosen by default.
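To make the selection concrete, here is a rough model of the candidate choice a browser makes for the sizes value above. This is an illustrative sketch, not browser source; the function names and logic simplifications are mine.

```typescript
// Rough model of how a browser picks a srcset candidate from `sizes`.
// Names and structure are illustrative, not part of Next.js or any browser.
const deviceSizes = [640, 750, 828, 1080, 1200, 1920];

function slotWidth(viewportWidth: number): number {
  // Mirrors: (max-width: 640px) 100vw, (max-width: 1024px) 50vw, 33vw
  if (viewportWidth <= 640) return viewportWidth;        // 100vw
  if (viewportWidth <= 1024) return viewportWidth * 0.5; // 50vw
  return viewportWidth * 0.33;                           // 33vw
}

function pickCandidate(viewportWidth: number, dpr: number): number {
  const needed = Math.ceil(slotWidth(viewportWidth) * dpr);
  // Smallest generated width that still covers the slot; else the largest.
  return deviceSizes.find((w) => w >= needed) ?? deviceSizes[deviceSizes.length - 1];
}

// A 1440px desktop at 1x only needs ~476px for a 33vw slot:
console.log(pickCandidate(1440, 1)); // 640
// A 390px phone at 3x needs 1170px for a 100vw slot:
console.log(pickCandidate(390, 3));  // 1200
```

On a 1440px desktop the browser fetches the 640w variant rather than a near-viewport-width file, which is where most of the bandwidth savings come from.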
I configured the Next.js image loader to generate srcsets at specific breakpoints tailored to our layout:
// next.config.js
module.exports = {
  images: {
    deviceSizes: [640, 750, 828, 1080, 1200, 1920],
    imageSizes: [16, 32, 48, 64, 96, 128, 256, 384],
    // AVIF listed first so browsers that support it are served AVIF
    formats: ['image/avif', 'image/webp'],
    minimumCacheTTL: 60 * 60 * 24 * 30, // 30 days
  },
};

Enabling AVIF alongside WebP was a meaningful win. AVIF typically produces files 20-30% smaller than WebP at equivalent visual quality. Browsers that support AVIF get the smallest possible file; others fall back to WebP, then JPEG.
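The fallback is negotiated per request: the browser advertises the formats it supports in its Accept header, and the optimizer serves the best one it can produce. A minimal sketch of that negotiation (the function is illustrative, not the Next.js or CDN implementation):

```typescript
// Illustrative server-side format negotiation via the Accept header,
// the mechanism image optimizers and CDNs typically use. The function
// name and preference order are assumptions for this sketch.
function negotiateFormat(acceptHeader: string): 'avif' | 'webp' | 'jpeg' {
  if (acceptHeader.includes('image/avif')) return 'avif';
  if (acceptHeader.includes('image/webp')) return 'webp';
  return 'jpeg'; // universal fallback
}

console.log(negotiateFormat('image/avif,image/webp,image/apng,*/*')); // avif
console.log(negotiateFormat('image/webp,*/*'));                       // webp
console.log(negotiateFormat('*/*'));                                  // jpeg
```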
Lazy Loading Strategy
Not all images on a page are equal. The hero image and first row of search results need to load immediately. Everything below the fold can wait until the user scrolls.
import Image from 'next/image';

function PropertyGrid({ properties }: { properties: Property[] }) {
  return (
    &lt;div className="grid grid-cols-1 md:grid-cols-2 lg:grid-cols-3 gap-6"&gt;
      {properties.map((property, index) =&gt; (
        &lt;PropertyCard
          key={property.id}
          property={property}
          // Load the first 6 images eagerly, the rest lazily
          priority={index &lt; 6}
        /&gt;
      ))}
    &lt;/div&gt;
  );
}

function PropertyCard({ property, priority }: { property: Property; priority: boolean }) {
  return (
    &lt;div className="property-card"&gt;
      &lt;Image
        src={property.coverImage}
        alt={property.title}
        width={800}
        height={600}
        sizes="(max-width: 640px) 100vw, (max-width: 1024px) 50vw, 33vw"
        priority={priority}
        loading={priority ? 'eager' : 'lazy'}
        placeholder="blur"
        blurDataURL={property.blurDataURL}
      /&gt;
    &lt;/div&gt;
  );
}

The priority prop on the first six images tells Next.js to preload them in the document head, which is essential for LCP. The remaining images use native browser lazy loading, which defers their download until they approach the viewport.
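For responsive images, that preload takes the form of a link hint carrying the srcset and sizes information. Roughly like the following (a hand-written illustration, not verbatim Next.js output; the URLs are placeholders):

```html
<!-- Illustrative preload hint in <head>; attribute values are placeholders -->
<link
  rel="preload"
  as="image"
  imagesrcset="/_next/image?url=%2Fcover.jpg&w=640&q=75 640w,
               /_next/image?url=%2Fcover.jpg&w=1080&q=75 1080w"
  imagesizes="(max-width: 640px) 100vw, (max-width: 1024px) 50vw, 33vw"
/>
```

The imagesrcset and imagesizes attributes let the preload scanner pick the same candidate the eventual img element will use, so the bytes are already in flight before the markup is parsed.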
For the property detail gallery, I implemented a more aggressive lazy loading strategy. Only the first three gallery images load initially. The rest load on demand when the user interacts with the gallery navigation:
import { useCallback, useState } from 'react';
import Image from 'next/image';

function PropertyGallery({ images }: { images: GalleryImage[] }) {
  const [loadedCount, setLoadedCount] = useState(3);

  const handleLoadMore = useCallback(() =&gt; {
    // Load 3 more images each time the user asks for more
    setLoadedCount((prev) =&gt; Math.min(prev + 3, images.length));
  }, [images.length]);

  return (
    &lt;div className="gallery"&gt;
      {images.slice(0, loadedCount).map((image, index) =&gt; (
        &lt;Image
          key={image.id}
          src={image.url}
          alt={image.alt}
          width={1200}
          height={800}
          priority={index === 0}
          placeholder="blur"
          blurDataURL={image.blurDataURL}
        /&gt;
      ))}
      {loadedCount &lt; images.length &amp;&amp; (
        &lt;button onClick={handleLoadMore}&gt;
          Load more photos ({images.length - loadedCount} remaining)
        &lt;/button&gt;
      )}
    &lt;/div&gt;
  );
}

Blur Placeholders (LQIP)
Blur placeholders solve two problems simultaneously: they eliminate Cumulative Layout Shift (CLS) by reserving the correct dimensions before the image loads, and they provide a smooth perceived loading experience instead of content popping in abruptly.
I generated blur data URLs at upload time using a pipeline that creates a tiny (10x10 pixel) version of each image, encodes it as base64, and stores it alongside the image metadata:
import sharp from 'sharp';

async function generateBlurPlaceholder(imagePath: string): Promise&lt;string&gt; {
  const buffer = await sharp(imagePath)
    .resize(10, 10, { fit: 'inside' })
    .toFormat('png')
    .toBuffer();
  return `data:image/png;base64,${buffer.toString('base64')}`;
}

// During property image upload
async function processPropertyImage(file: Express.Multer.File) {
  const blurDataURL = await generateBlurPlaceholder(file.path);
  // Read intrinsic dimensions so the frontend can reserve layout space
  const metadata = await sharp(file.path).metadata();
  // Upload original to CDN
  const cdnUrl = await uploadToCDN(file.path);
  return {
    url: cdnUrl,
    blurDataURL,
    width: metadata.width,
    height: metadata.height,
  };
}

The blur data URL is tiny — typically under 200 bytes — so including it in the API response adds negligible overhead. The visual effect is that users see a blurred preview of the actual image content while the full image loads, which feels significantly faster than a grey placeholder or skeleton.
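A quick back-of-envelope check on that size claim: base64 expands every 3 bytes into 4 characters, so a roughly 120-byte 10x10 PNG yields a data URL under 200 characters. (The 120-byte figure is an assumption for illustration, not a measurement from the project.)

```typescript
// Back-of-envelope size of a base64 data URL for a blur placeholder.
// Input byte counts are illustrative assumptions, not measured values.
function dataUrlLength(rawBytes: number, mime = 'image/png'): number {
  const base64Chars = 4 * Math.ceil(rawBytes / 3);
  return `data:${mime};base64,`.length + base64Chars;
}

// A ~120-byte compressed 10x10 PNG:
console.log(dataUrlLength(120)); // 182 characters
```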
CDN-Based Image Transformation
Rather than generating and storing multiple sizes of every image at upload time, I configured CDN-based image transformation. The original high-resolution image is stored once, and size/format transformations happen at the CDN edge on first request.
function getCDNImageUrl(
  originalUrl: string,
  options: { width: number; quality?: number; format?: 'webp' | 'avif' | 'auto' }
): string {
  const { width, quality = 75, format = 'auto' } = options;
  const url = new URL(originalUrl);
  // CDN transformation parameters
  url.searchParams.set('w', width.toString());
  url.searchParams.set('q', quality.toString());
  url.searchParams.set('f', format);
  url.searchParams.set('fit', 'cover');
  return url.toString();
}

The CDN caches transformed versions at edge locations globally. The first request for a specific size/format combination incurs a transformation cost, but subsequent requests are served from cache. This approach eliminated the need for a build-time image processing pipeline that would have added minutes to every deployment.
I configured the Next.js image loader to use the CDN transformation endpoint:
// next.config.js
module.exports = {
  images: {
    loader: 'custom',
    loaderFile: './lib/cdn-image-loader.ts',
  },
};

// lib/cdn-image-loader.ts
// (getCDNImageUrl is the helper defined above)
export default function cdnImageLoader({
  src,
  width,
  quality,
}: {
  src: string;
  width: number;
  quality?: number;
}): string {
  return getCDNImageUrl(src, { width, quality: quality || 75 });
}

Google Maps Performance
The map view was a separate performance problem. The initial implementation loaded the Google Maps JavaScript API synchronously, initialized a map with hundreds of markers, and attached hover listeners that loaded property preview images. On slower devices, this blocked the main thread for over a second.
I addressed this in three steps:
Async loading with dynamic import. Instead of loading the Maps API in the document head, I loaded it dynamically when the map view was activated:
import { useEffect, useRef, useState } from 'react';
import { Loader } from '@googlemaps/js-api-loader';

function PropertyMap({ properties }: { properties: Property[] }) {
  const [mapLoaded, setMapLoaded] = useState(false);
  const mapRef = useRef&lt;HTMLDivElement&gt;(null);

  useEffect(() =&gt; {
    // Load the Google Maps API only when the map view mounts
    const loader = new Loader({
      apiKey: process.env.NEXT_PUBLIC_MAPS_KEY!,
      version: 'weekly',
      libraries: ['marker'],
    });
    loader.load().then(() =&gt; setMapLoaded(true));
  }, []);

  // Render the map only after the API loads
  if (!mapLoaded) return &lt;MapSkeleton /&gt;;
  return &lt;MapRenderer ref={mapRef} properties={properties} /&gt;;
}

Marker clustering. Instead of rendering hundreds of individual markers, I used marker clustering to group nearby properties. This reduced the number of DOM elements from hundreds to dozens:
import { MarkerClusterer } from '@googlemaps/markerclusterer';

function initializeMarkers(map: google.maps.Map, properties: Property[]) {
  const markers = properties.map((property) =&gt; {
    return new google.maps.marker.AdvancedMarkerElement({
      position: { lat: property.lat, lng: property.lng },
      map,
    });
  });
  new MarkerClusterer({ map, markers });
}

Deferred preview images on markers. Property preview images on marker hover were loaded on demand rather than preloaded for all markers:
marker.addListener('mouseover', async () =&gt; {
  if (!marker.previewLoaded) {
    const preview = await loadPropertyPreview(property.id);
    marker.previewLoaded = true;
    showInfoWindow(map, marker, preview);
  }
});

Key Decisions &amp; Trade-offs
CDN transformation vs. build-time processing. CDN transformation adds a cold-start latency on the first request for each image variant. Build-time processing eliminates this but adds significant deployment time and storage costs. With tens of thousands of images and multiple size variants each, the storage cost alone made build-time processing impractical. The CDN cold-start affects only the first viewer of each variant, and most popular listings warm the cache quickly.
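To put rough numbers on that trade-off (illustrative assumptions, not the project's measured figures), pre-generating every variant multiplies the file count quickly:

```typescript
// Illustrative arithmetic for build-time pre-generation. All inputs
// are assumptions for this sketch, not values from the project.
function pregeneratedFiles(images: number, widths: number, formats: number): number {
  return images * widths * formats;
}

// 50k property photos x 6 device widths x 2 formats (WebP + AVIF):
console.log(pregeneratedFiles(50_000, 6, 2)); // 600000 derived files to store
```

With CDN transformation, only the variants that are actually requested ever exist, and only at the edge caches that serve them.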
AVIF vs. WebP only. AVIF encoding is slower than WebP on the CDN edge, which increases the cold-start penalty. However, the file size savings justified it — AVIF files were consistently smaller, which matters most for mobile users on cellular connections. I configured AVIF with a longer cache TTL to amortize the encoding cost.
Eager loading threshold. I chose to eagerly load the first six images (two rows of three on desktop). This is a heuristic — the actual above-the-fold count varies by viewport. A more precise approach would use the Intersection Observer API to determine visibility, but the fixed threshold was simpler and covered the common case well enough.
Gallery progressive loading vs. infinite scroll. I considered infinite scroll for the gallery but chose explicit "load more" behavior. Infinite scroll can cause layout thrashing and makes it harder for users to reach footer content. The explicit button gives users control over their bandwidth usage, which is especially appreciated on metered connections.
Results & Outcomes
The combined optimizations transformed the site's performance profile. LCP improved substantially across all page types — search results, property detail, and the map view all came in well under the 2.5-second threshold that Google considers "good." CLS dropped to near zero thanks to blur placeholders reserving image dimensions.
The SEO impact followed the performance improvements. After Google re-crawled the site with the updated Core Web Vitals, search rankings improved noticeably for competitive property-related keywords. The client reported increased organic traffic in the weeks following deployment.
User engagement metrics also shifted positively. Bounce rates on the search results page decreased, and users viewed more properties per session. The property detail page saw longer session durations, suggesting that the faster gallery encouraged more photo exploration.
Mobile performance saw the most dramatic improvement. Users on 4G connections went from an experience that felt broken — images loading in random order, layout shifting as they appeared — to a smooth, progressive reveal that matched the desktop experience in quality if not speed.
What I'd Do Differently
Implement image optimization at upload time, not retroactively. I had to backfill blur placeholders for tens of thousands of existing images. If the upload pipeline had generated blur data and validated image dimensions from the start, the optimization work would have been significantly simpler.
Use a dedicated image service from day one. Services like Cloudinary or Imgix handle responsive images, format negotiation, and CDN delivery out of the box. Building a custom CDN transformation pipeline was educational but unnecessary — the business value is in the rental platform, not the image infrastructure.
Profile real user data earlier. I initially optimized based on Lighthouse scores in a lab environment. Real User Monitoring (RUM) data from tools like web-vitals revealed different bottlenecks — the lab environment had fast network and CPU, masking issues that real users on mid-range Android devices experienced.
Investigate HTTP/2 server push or 103 Early Hints. For critical above-the-fold images, the browser does not start downloading until it parses the HTML and discovers the image tags. Early Hints could tell the browser about critical images before the HTML is even fully generated, shaving additional milliseconds off LCP.
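For reference, an Early Hints exchange would look roughly like this (a schematic example per RFC 8297; the image path is a placeholder, not a URL from the project):

```http
HTTP/1.1 103 Early Hints
Link: </images/hero-1200.avif>; rel=preload; as=image

HTTP/1.1 200 OK
Content-Type: text/html
```

The 103 response goes out while the server is still rendering the page, so the browser can start fetching the hero image before the final HTML arrives.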
FAQ
What is LCP and why does it matter for image-heavy sites?
Largest Contentful Paint measures how long it takes for the largest visible element (usually a hero image) to render. Google uses LCP as a Core Web Vital ranking factor. Image-heavy sites often have poor LCP because large unoptimized images block rendering.
How does CDN-based image transformation work?
Instead of storing multiple image sizes, you store the original and append transformation parameters to the URL (e.g., ?w=800&q=75&f=webp). The CDN generates and caches the transformed version on first request, delivering optimized images without a build-time processing pipeline.
What are blur placeholders and why use them?
LQIP (Low Quality Image Placeholders) are tiny blurred versions of images shown while the full image loads. They eliminate layout shift by reserving the correct dimensions and provide a better perceived loading experience than empty spaces or skeleton loaders.