Han Zhang
Creative tech + Design research
gracehan2333@gmail.com






Experience
  • User researcher, Bilibili, 2022 - 2024
  • Senior analyst, Kantar, 2024
  • Head of user research, OutIn, 2025
  • Media Design Practices, ArtCenter, 2025 - 2027

See my Resume here



The gaze panopticon



Can the gaze be turned into a collective power?



Year: 2024   Duration: 3 weeks   Role: Concept, Design, Development
Technologies: Three.js, Socket.io, face-api.js


Join the gaze here  
https://the-gaze-panopticon.onrender.com/audience.html

Try the feeling of being gazed at
https://the-gaze-panopticon.onrender.com/index.html
#Interactive Installation
#WebGL
#Face Detection
#Real-time Systems







Overview

This interactive installation reimagines Foucault’s Panopticon through a radical reversal: the confined self transforms external surveillance into collective power. Audiences join via a shared web link (https://the-gaze-panopticon.onrender.com/audience.html), where real-time face detection tracks their attention. Each gaze is measured by duration and multiplied by the number of watchers, accumulating as tangible pressure on the confined self. As collective observation intensifies, the self channels this force to shatter the prison structure: what was designed to control becomes the catalyst for liberation.


Experience Journey
One person (the Experiencer) navigates a 3D space where they're confined within a digital Panopticon alongside a representation of their "Self." Multiple Audience members watch through their screens, and their facial attention is tracked in real-time using computer vision.

As collective observation intensifies, pressure accumulates: the Panopticon structure begins to shake and the Self deforms under stress. When pressure reaches critical mass, the prison structure shatters and the Self transforms into a luminous, free-floating form.


Design Goals
  • Create real-time networked interaction between 1 experiencer and multiple audience members
  • Use facial detection to quantify attention as measurable force
  • Visualize abstract concepts (pressure, surveillance) through 3D spatial experience
  • Build entirely web-based for accessibility—no app downloads required



Technical Architecture

Layer                      Technology         Purpose
3D Graphics                Three.js           Scene rendering, animation
Real-time Communication    Socket.io          Client-server state sync
Face Detection             face-api.js        Facial tracking
Backend                    Node.js + Render   Server & state management
3D Modeling                Blender            Asset creation
The system consists of three main components communicating via WebSockets (a minimal audience-side sketch follows the list):
  • Experiencer Client: First-person 3D view with PointerLockControls for WASD movement
  • Node.js Server: Manages game state, tracks active gazers, calculates pressure accumulation, broadcasts updates
  • Audience Clients: Third-person 3D view with OrbitControls + face detection interface
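
As a rough sketch of how an Audience client wires into this (the join-as, initial-state, and state-update event names come from the server code below; everything else here is illustrative, not the project's actual code):

const socket = io();  // Socket.io client served by the Node.js backend

// Register as an audience member so the server counts us as a watcher
socket.emit('join-as', 'audience');

// Receive the initial snapshot, then live updates
socket.on('initial-state', (state) => render(state));
socket.on('state-update', (state) => render(state));

function render(state) {
    // Drive the Three.js visuals from watchers / totalPressure / phase
    console.log(state.watchers, state.totalPressure, state.phase);
}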



Development

Phase 1 - Basic 3D modeling
I created the geometry of the basic object (the Self) and the scene (the Panopticon) in Blender, then used shape keys to define its deformation states. Three.js controls these morph targets in real time based on accumulated gaze pressure; the updateBlobMorph() function that does this is shown in Phase 2.
Interpolation: 0.0 (base shape) → 1.0 (fully deformed)



Phase 2 - Importing the 3D model in Three.js
With the Blender model exported as GLB, I needed to load it into Three.js and access the shape keys (called "morph targets" in Three.js). The challenge was to traverse the model's hierarchy, identify the correct meshes, and map pressure values to shape key influences in real time.

Recognizing the model hierarchy
GLB files can contain multiple meshes organized in a hierarchy. I needed to identify which mesh was the "Self" blob and which was the "Panopticon" structure. Using model.traverse(), I checked each child's name and stored references to the relevant objects.
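
For context, loading the exported GLB before this traversal would look roughly like the following (a minimal sketch using Three.js's GLTFLoader; the file path is a placeholder, not the project's actual asset name):

import { GLTFLoader } from 'three/addons/loaders/GLTFLoader.js';

const loader = new GLTFLoader();
loader.load('models/scene.glb', (gltf) => {
    const model = gltf.scene;
    scene.add(model);
    // The model.traverse() walk shown below runs inside this callback
});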

model.traverse((child) => {
   if (child.isMesh) {
       // Find the Self blob
       if (child.name.includes('Self')) {
           blob = child;
           blob.position.set(0, 1, 0);
           setupBlobMaterial(child);

           // Access morph targets (shape keys)
           if (child.morphTargetInfluences) {
               morphTargets = child.morphTargetInfluences;
               console.log('Morph targets found:', morphTargets.length);        
           }
       }

       // Find the Panopticon structure
       if (child.name.includes('Panopticon')) {
           panopticon = child;
           setupPanopticon(child);
       }
   }
});


Controlling Shape Keys with Pressure
To make the Self deform based on accumulated gaze pressure, I created an updateBlobMorph() function that maps pressure (0-100) to the shape key value (0.0-1.0).
At pressure = 0   → morphTargets[0] = 0.0 → blob stays in base form
At pressure = 50  → morphTargets[0] = 0.5 → blob is 50% deformed
At pressure = 100 → morphTargets[0] = 1.0 → blob is fully compressed
function updateBlobMorph() {
 if (!morphTargets || morphTargets.length === 0) return;
 
 const pressure = currentState.totalPressure;
 const normalizedPressure = Math.min(pressure / 100, 1.0);                     
 
 // Control the first shape key
 morphTargets[0] = normalizedPressure;
}
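
For the deformation to animate smoothly, updateBlobMorph() has to run every frame. A minimal sketch, assuming a standard Three.js render loop:

function animate() {
    requestAnimationFrame(animate);
    updateBlobMorph();               // re-read the latest server-synced pressure
    renderer.render(scene, camera);
}
animate();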


Phase 3 - Using face detection to collect the gaze
The most challenging aspect was implementing reliable face detection in the browser. It took three complete attempts to find a working solution.

1st try - MediaPipe Face Mesh
Why it failed: Google's MediaPipe model promised 468 facial landmarks with high accuracy, but it relied on WebAssembly (WASM) binaries that failed to initialize.
// Error encountered:
Uncaught TypeError: can't access property "buffer", HEAP8 is undefined         

// Root cause:
- WASM module failed to initialize
- SIMD compatibility issues across browsers
- Firefox had known bugs with MediaPipe's WASM
2nd try - TensorFlow.js Face Landmarks
I pivoted to TensorFlow.js, which uses pure JavaScript/WebGL instead of WASM. The model loaded successfully and the camera feed worked, but detection consistently returned 0 faces.
const detectorConfig = {
 runtime: 'tfjs',
 maxFaces: 1,
 refineLandmarks: false,
 detectionConfidence: 0.1 // Lowered to minimum.                               
};

const faces = await detector.estimateFaces(video);
console.log(faces.length); // Output: 0 (always)
3rd try - face-api.js
I finally found success with the vladmandic fork of face-api.js: a more stable, better-maintained version with a simpler API.
await faceapi.nets.tinyFaceDetector.loadFromUri(MODEL_URL);
await faceapi.nets.faceLandmark68Net.loadFromUri(MODEL_URL);

const detection = await faceapi
    .detectSingleFace(video, new faceapi.TinyFaceDetectorOptions())              
    .withFaceLandmarks();

if (detection) {
    const landmarks = detection.landmarks.positions;
    const nose = landmarks[30];      // nose tip
    const leftEye = landmarks[36];   // outer corner of the image-left eye
    const rightEye = landmarks[45];  // outer corner of the image-right eye

    // Gaze orientation: how far the nose sits off the eye midpoint
    const eyeCenterX = (leftEye.x + rightEye.x) / 2;
    const offsetX = nose.x - eyeCenterX;

    // Normalize by inter-eye distance so the threshold is distance-invariant
    const eyeDistance = rightEye.x - leftEye.x;
    const normalizedX = (offsetX / eyeDistance) * 100;
    const isFacing = Math.abs(normalizedX) < 20;
}
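
To turn these detections into the gaze events the server consumes (Phase 4), the audience client can poll detection on an interval. A hedged sketch: the gaze-start/gaze-hold/gaze-end names come from the server code, while checkFacing() is a hypothetical helper standing in for the orientation math above.

let gazing = false;

// Poll every 100ms, matching the gaze-hold cadence described in Phase 4
setInterval(async () => {
    const detection = await faceapi
        .detectSingleFace(video, new faceapi.TinyFaceDetectorOptions())
        .withFaceLandmarks();

    const isFacing = detection ? checkFacing(detection) : false;

    if (isFacing && !gazing) {
        gazing = true;
        socket.emit('gaze-start');
    } else if (isFacing && gazing) {
        socket.emit('gaze-hold');
    } else if (!isFacing && gazing) {
        gazing = false;
        socket.emit('gaze-end');
    }
}, 100);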



Phase 4 - Building the Real-Time Communication System
To synchronize state across multiple users, I built a Node.js server using Socket.io. The server maintains a centralized gameState and broadcasts updates to all connected clients.

Server state management

// server/server.js
const gameState = {
    watchers: 0,              // Total audience members
    activeGazers: new Set(),  // Currently gazing users
    totalPressure: 0,         // Accumulated pressure (0-100+)
    phase: 'waiting'          // Current phase
};

io.on('connection', (socket) => {
    socket.on('join-as', (role) => {
        socket.role = role;

        if (role === 'audience') {
            gameState.watchers++;
        }

        socket.emit('initial-state', gameState);
        io.emit('state-update', gameState);
    });
 
    socket.on('disconnect', () => {
        gameState.activeGazers.delete(socket.id);
        if (socket.role === 'audience') {
            gameState.watchers = Math.max(0, gameState.watchers - 1);          
        }
        io.emit('state-update', gameState);
    });
});
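
The handlers above assume a standard Express + Socket.io bootstrap around them; a minimal sketch (the port and static directory are assumptions):

const express = require('express');
const http = require('http');
const { Server } = require('socket.io');

const app = express();
app.use(express.static('public'));  // serves index.html and audience.html

const server = http.createServer(app);
const io = new Server(server);

// The io.on('connection', ...) handlers shown above attach here

server.listen(process.env.PORT || 3000);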


Gaze Event Handling
Face detection on the client emits three events: gaze-start, gaze-hold (every 100ms), and gaze-end. The server accumulates pressure accordingly.
socket.on('gaze-start', () => {
    gameState.activeGazers.add(socket.id);
    gameState.totalPressure += 0.5;
    updatePhase();
    broadcastState();
});

socket.on('gaze-hold', () => {
    if (gameState.activeGazers.has(socket.id)) {
        gameState.totalPressure += 0.15;  // 1.5 pressure/sec                  
        updatePhase();
        broadcastState();
    }
});

socket.on('gaze-end', () => {
    gameState.activeGazers.delete(socket.id);
});

Timing: one person gazing continuously adds ~1.5 pressure/sec (0.15 per 100ms hold), so a single watcher drives pressure to 100 in roughly 65 seconds; more watchers get there proportionally faster.

Phase Transitions

function updatePhase() {
    const p = gameState.totalPressure;

    if (p < 30) gameState.phase = 'waiting';
    else if (p < 70) gameState.phase = 'critical';
    else if (p < 100) gameState.phase = 'rupture';                              
    // 'transmutation' triggered manually
}

function broadcastState() {
    clearTimeout(updateTimeout);
    updateTimeout = setTimeout(() => {
        io.emit('state-update', gameState);
    }, 50);  // Debounced to max 20 updates/sec
}


Manual Controls
The experiencer can manually trigger transmutation and reset the experience.
// Trigger transmutation (only available in rupture phase)                      
socket.on('trigger-transmutation', () => {
    gameState.phase = 'transmutation';
    broadcastState();
});

// Reset experience
socket.on('reset-experience', () => {
    gameState.totalPressure = 0;
    gameState.phase = 'waiting';
    gameState.activeGazers.clear();
    broadcastState();
});
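
On the experiencer client these would be bound to input; a hypothetical sketch (the key choices are illustrative, not the project's actual bindings):

window.addEventListener('keydown', (e) => {
    // Transmutation is only meaningful once the rupture phase is reached
    if (e.key === 't' && currentState.phase === 'rupture') {
        socket.emit('trigger-transmutation');
    }
    if (e.key === 'r') {
        socket.emit('reset-experience');
    }
});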






©2026 by Han <3