# Building bydom.io: Particle Portraits with Three.js and React Three Fiber
## Why a particle portrait
The hero section of a portfolio site is the first thing a visitor sees. Most developer portfolios open with a text block or a static illustration. I wanted something that communicates craft immediately, without requiring a scroll or a click. A particle portrait, where a photograph dissolves into thousands of interactive points, does that. It says: this person builds things that are technically interesting and visually considered.
The concept is straightforward. Take a photograph, sample pixel data to extract positions and colors, render each sample as a particle in a Three.js scene, and make the whole thing interactive. The implementation is where it gets interesting.
## The approach: photo to particles
The pipeline starts with a source photograph and the Canvas API. Loading the image onto an offscreen canvas gives access to raw pixel data through getImageData. From there, sampling every Nth pixel (based on desired particle count) produces an array of positions and colors.
```ts
// Extract particle positions and colors from a source image
interface ParticleData {
  positions: Float32Array;
  colors: Float32Array;
  count: number;
}

function extractParticleData(
  image: HTMLImageElement,
  targetCount: number
): ParticleData {
  const canvas = document.createElement("canvas");
  const ctx = canvas.getContext("2d");
  if (!ctx) throw new Error("2D canvas context unavailable");
  canvas.width = image.width;
  canvas.height = image.height;
  ctx.drawImage(image, 0, 0);
  const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
  const pixels = imageData.data;

  // Calculate sampling step to hit target particle count
  const totalPixels = canvas.width * canvas.height;
  const step = Math.max(1, Math.floor(Math.sqrt(totalPixels / targetCount)));

  const positions = new Float32Array(targetCount * 3);
  const colors = new Float32Array(targetCount * 3);
  let index = 0;

  outer: for (let y = 0; y < canvas.height; y += step) {
    for (let x = 0; x < canvas.width; x += step) {
      if (index >= targetCount) break outer; // exit both loops once full
      const i = (y * canvas.width + x) * 4;

      // Skip near-black pixels (background)
      const brightness = (pixels[i] + pixels[i + 1] + pixels[i + 2]) / 3;
      if (brightness < 20) continue;

      // Map pixel position to 3D space (centered at origin)
      positions[index * 3] = (x / canvas.width - 0.5) * 10;
      positions[index * 3 + 1] = -(y / canvas.height - 0.5) * 10;
      positions[index * 3 + 2] = 0;

      // Normalize RGB to 0-1
      colors[index * 3] = pixels[i] / 255;
      colors[index * 3 + 1] = pixels[i + 1] / 255;
      colors[index * 3 + 2] = pixels[i + 2] / 255;
      index++;
    }
  }

  // Trim unused slots so skipped pixels don't leave stray black particles at the origin
  return {
    positions: positions.subarray(0, index * 3),
    colors: colors.subarray(0, index * 3),
    count: index,
  };
}
```
The brightness filter is important. Without it, background pixels produce particles that add noise without contributing to the portrait. Filtering below a threshold keeps only the meaningful pixels and produces a cleaner silhouette.
## React Three Fiber in Astro
Astro’s island architecture is perfect for this use case. The homepage is static HTML and CSS, fast to load and easy to cache. The particle hero is a React component that hydrates with client:load, bringing Three.js into the browser only for the section that needs it.
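For context, wiring the island into the page looks roughly like this. The file and component names are illustrative, not the actual source:

```astro
---
// src/pages/index.astro (sketch; paths are hypothetical)
import ParticleHero from "../components/ParticleHero";
---
<html lang="en">
  <body>
    <!-- Static HTML renders immediately; only the hero hydrates -->
    <ParticleHero client:load />
    <main>
      <!-- rest of the static page -->
    </main>
  </body>
</html>
```

Everything outside the island ships as zero-JavaScript HTML; Three.js and React load only for the hero.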
The R3F setup uses a Points geometry with custom buffer attributes for positions and colors. A custom shader material handles point sizing, color application, and the mouse interaction effect.
```tsx
// ParticleHero React island (simplified)
import { Canvas, useFrame, useThree } from "@react-three/fiber";
import { useMemo, useRef } from "react";
import * as THREE from "three";

function ParticleField({ data }: { data: ParticleData }) {
  const pointsRef = useRef<THREE.Points>(null);
  // `pointer` is the normalized mouse position (replaces the deprecated `mouse`)
  const { pointer } = useThree();

  const geometry = useMemo(() => {
    const geo = new THREE.BufferGeometry();
    geo.setAttribute("position", new THREE.BufferAttribute(data.positions, 3));
    geo.setAttribute("color", new THREE.BufferAttribute(data.colors, 3));
    return geo;
  }, [data]);

  useFrame(() => {
    if (!pointsRef.current) return;
    const positions = pointsRef.current.geometry.attributes.position;

    // Apply mouse repulsion to nearby particles
    // (pointer is in NDC, so scale by 5 to roughly match the scene's extent)
    for (let i = 0; i < data.count; i++) {
      const dx = positions.getX(i) - pointer.x * 5;
      const dy = positions.getY(i) - pointer.y * 5;
      const dist = Math.sqrt(dx * dx + dy * dy);
      if (dist < 1.5) {
        const force = (1.5 - dist) * 0.02;
        positions.setX(i, positions.getX(i) + dx * force);
        positions.setY(i, positions.getY(i) + dy * force);
      }
    }
    positions.needsUpdate = true;
  });

  return (
    <points ref={pointsRef} geometry={geometry}>
      <pointsMaterial
        size={0.03}
        vertexColors
        transparent
        opacity={0.9}
        sizeAttenuation
      />
    </points>
  );
}

export default function ParticleHero({ data }: { data: ParticleData }) {
  return (
    <Canvas camera={{ position: [0, 0, 8], fov: 50 }}>
      <ParticleField data={data} />
    </Canvas>
  );
}
```
The key detail: the particle interaction runs on every frame using useFrame. For 25,000 particles, this means 25,000 distance calculations per frame. That brings us to the performance challenge.
## Mouse interaction: raycasting and repulsion
The mouse interaction uses a simple repulsion model rather than true raycasting. Raycasting against 25,000 individual points is expensive and unnecessary. Instead, the mouse position (normalized to scene coordinates) creates a spherical repulsion field. Particles within the radius get pushed away proportionally to their distance from the cursor.
The visual effect is satisfying: moving your cursor over the portrait disperses particles like disturbing the surface of water, and they drift back to their original positions when the cursor moves away. The return-to-origin behavior uses linear interpolation toward the initial positions stored in a separate buffer.
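The return step can be isolated as a pure function over the position buffers. This is a sketch of the idea rather than the site's actual code, and `relaxTowardOrigin` is a hypothetical name:

```typescript
// Lerp each coordinate back toward its stored origin by `factor` per frame.
// `current` and `origin` are flat xyz buffers of equal length.
function relaxTowardOrigin(
  current: Float32Array,
  origin: Float32Array,
  factor: number
): void {
  for (let i = 0; i < current.length; i++) {
    current[i] += (origin[i] - current[i]) * factor;
  }
}
```

Called inside `useFrame` after the repulsion pass, a factor around 0.05 gives a soft drift back to the portrait without overshoot.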
## Performance: GPU tiering
Not every device can handle 25,000 animated particles at 60fps. The solution is GPU tiering: detect the device’s rendering capability and adjust the particle count accordingly.
```ts
// Device tier detection for particle count
function getDeviceTier(): "high" | "mid" | "low" {
  const canvas = document.createElement("canvas");
  const gl = canvas.getContext("webgl2") || canvas.getContext("webgl");
  if (!gl) return "low";

  const debugInfo = gl.getExtension("WEBGL_debug_renderer_info");
  const renderer = debugInfo
    ? String(gl.getParameter(debugInfo.UNMASKED_RENDERER_WEBGL))
    : "";

  // Check for known high-performance GPUs
  if (/RTX|RX 7|M[1-4] (Pro|Max|Ultra)|Apple GPU/i.test(renderer)) {
    return "high";
  }
  // Check for integrated/low-power GPUs
  if (/Intel|Mali|Adreno [0-5]/i.test(renderer)) {
    return "low";
  }
  return "mid";
}

const PARTICLE_COUNTS = {
  high: 25000,
  mid: 12000,
  low: 5000,
} as const;
```
High-tier devices (dedicated GPUs, Apple Silicon) get the full 25,000 particles. Mid-tier (recent integrated graphics) get 12,000. Low-tier (older mobile, basic integrated) get 5,000. The visual impact scales gracefully. Even 5,000 particles produce a recognizable portrait with satisfying interactions, just at lower density.
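One way to keep the tier logic unit-testable is to factor the regex classification out of the WebGL probing, so `getDeviceTier` only handles context creation and delegates the string matching. `classifyRenderer` here is a hypothetical refactor, not code from the site:

```typescript
type Tier = "high" | "mid" | "low";

// Classify a raw renderer string (as reported via WEBGL_debug_renderer_info)
function classifyRenderer(renderer: string): Tier {
  if (/RTX|RX 7|M[1-4] (Pro|Max|Ultra)|Apple GPU/i.test(renderer)) return "high";
  if (/Intel|Mali|Adreno [0-5]/i.test(renderer)) return "low";
  return "mid";
}
```

With this split, `getDeviceTier` reduces to probing for a WebGL context and returning `classifyRenderer(renderer)`, and the regexes can be exercised against known renderer strings without a browser.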
## Scroll-triggered scatter
The second animation layer is scroll-driven. As the user scrolls past the hero section, the particles scatter outward in a controlled explosion, transitioning from the portrait to an abstract particle field. This uses ScrollTrigger from GSAP to map scroll position to a scatter intensity value.
The scatter animation lerps each particle from its portrait position toward a randomized target position on a sphere. The scroll progress (0 to 1) controls how far along that interpolation each particle travels. At scroll position 0, the portrait is intact. At scroll position 1, the particles form an abstract cloud.
The combination of mouse interaction and scroll animation creates two distinct modes of engagement. Hovering is playful exploration. Scrolling is a controlled transition that guides the visitor from the hero into the content below.
## What’s next
The particle portrait is the foundation for a series of visual experiments. Version 1.1 will add kinetic typography, where the bold hero text (“I build things that make you look twice.”) renders as particles that assemble on load. Version 1.2 will introduce mesh morphing, where the particle field transitions between different 3D shapes as the user scrolls through sections.
The goal is not complexity for its own sake. Each visual layer should reinforce the site’s message: this is someone who builds things with craft and intention. If a visual effect does not serve that message, it does not ship.