Han Zhang
Creative tech + Design research
gracehan2333@gmail.com
I'm a designer, researcher, and creative technologist based in Los Angeles, California. Born in Guangdong and shaped by three years of UX research and product strategy in Shanghai, I'm now pursuing an MFA in Media Design Practices at ArtCenter in Pasadena.
I build systems that explore how culture and technology shape cognition: how pop culture rewires expectations, how surveillance changes behavior, how interfaces train us to think. My work asks: how do the tools we design reshape the way we perceive and interact?
- User researcher, Bilibili, 2022 - 2024
- Senior analyst, Kantar, 2024
- Head of user research, OutIn, 2025
- Media Design Practices (MFA), ArtCenter, 2025 - 2027
See my CV here
Works / Experiments
Semantic Collapse
Where is the visual boundary for text between a satisfying rhythm and stressful noise?
Year: Dec 2025
Role: Research, Design, Development
Technologies: Socket.io, Plotly.js, Node.js
Try the demo: https://alternative-machine.onrender.com/
#Data Visualization
#Typography
#Real-time Systems
#Critical Design
Overview
Semantic Collapse is an interactive web experiment that maps the point where typographic order collapses into visual discomfort. It explores semantic satiation, a psychological phenomenon in which repetition transforms meaning into noise.
Anyone can input a few words, then adjust the typographic parameters to find the level they personally find most uncomfortable. The site shows a real-time visualization of the accumulated data, recording a collective map of the fear of semantic satiation.
Inspiration - The fear of repeated words
Semantic satiation & the uncanny: when a word is repeated too many times, even the most familiar text undergoes a transformation. It not only dissolves into meaninglessness but also evokes a sense of the uncanny, becoming strange, alien, and even visually disturbing.
The famous scene in The Shining, showing semantic satiation
Design Goals
- Create a real-time typographic visualization system
- Map three linguistic dimensions (repetition, density, distortion) to 3D space
- Enable collective data contribution through a web interface
- Build entirely in the browser for accessibility
Concept Thinking
Q1: Should I make a predicting machine or just let it collect data?
A1: I originally wanted the machine to predict people's reactions. But after talking with Miller (one of my instructors), he recommended collecting data and visualizing it instead, because different people may feel very differently; it's a highly subjective experience.
Q2: Which parameters should I choose?
A2: Considering both the user experience and the eventual data visualization, I wanted to keep the number of parameters under five. After careful consideration, these are the parameters I chose:
- Repetition: the volume of text
  - Represents "information overload"
  - The quantity that transforms meaning into noise
  - Range determined by screen-capacity testing
- Density: the compression level of text
  - Controls letter-spacing and line-height
  - Simulates claustrophobia and visual crowding
  - Percentage-based for intuitive understanding
- Distortion: the chaos level of text
  - Character-level transformation (skew, scale, displacement)
  - Represents loss of readability and visual stability
  - Mimics psychological disorientation
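To make the three parameters concrete, here is a minimal sketch of how they could drive DOM typography in the browser. The function name, the exact ranges, and the word-level transforms (a simplification of the project's character-level distortion) are illustrative assumptions, not the project's actual code.

```js
// Hypothetical sketch: map repetition, density, and distortion onto DOM text.
// Assumes a plain browser environment; ranges and names are illustrative only.
function applyParameters(container, word, { repetition, density, distortion }) {
  container.textContent = '';
  container.style.lineHeight = `${1.6 - (density / 100) * 0.8}`; // denser = tighter lines

  for (let i = 0; i < repetition; i++) {
    const span = document.createElement('span');
    span.textContent = word + ' ';
    span.style.display = 'inline-block';

    // Density (0-100%): squeeze letter-spacing toward zero and below.
    span.style.letterSpacing = `${0.1 - (density / 100) * 0.15}em`;

    // Distortion (0-100%): random skew, scale, and vertical displacement.
    const d = distortion / 100;
    const skew = (Math.random() - 0.5) * 30 * d;        // degrees
    const scale = 1 + (Math.random() - 0.5) * 0.6 * d;  // relative size
    const shift = (Math.random() - 0.5) * 12 * d;       // px
    span.style.transform = `skewX(${skew}deg) scale(${scale}) translateY(${shift}px)`;

    container.appendChild(span);
  }
}

// Example (hypothetical element id):
// applyParameters(document.getElementById('stage'), 'forever',
//   { repetition: 200, density: 70, distortion: 40 });
```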
Technical solutions
Development
Phase 1 - How to visualize the data
My visualization went through three iterations. Initially, I considered a grid wall displaying all samples in a flat grid, but it proved too flat to reveal any patterns. Next, I experimented with a two-dimensional scatter plot, using density and distortion as the X/Y axes, only to find it could not represent the third parameter (number of repetitions). Finally, I settled on a 3D topographic map, treating the number of repetitions as "elevation": the plains represent parameters most people find comfortable, while the peaks mark critical thresholds. Through this metaphor, I want to convey that the visualization isn't merely data points, but a collective landscape of human tolerance limits.
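As a rough illustration of the terrain metaphor, the sketch below bins samples onto a density x distortion grid and renders average repetition as elevation with a Plotly.js surface trace. The bin count, sample field names, and the 'terrain' container id are assumptions rather than the project's actual implementation.

```js
// Illustrative only: average repetition per grid cell becomes "elevation".
// Assumes plotly.js is loaded globally and a <div id="terrain"> exists.
function renderTerrain(samples, bins = 10) {
  const z = Array.from({ length: bins }, () => Array(bins).fill(0));
  const counts = Array.from({ length: bins }, () => Array(bins).fill(0));

  // Rows (y) index density, columns (x) index distortion.
  for (const s of samples) {
    const row = Math.min(bins - 1, Math.floor((s.density / 100) * bins));
    const col = Math.min(bins - 1, Math.floor((s.distortion / 100) * bins));
    z[row][col] += s.repetition;
    counts[row][col] += 1;
  }
  for (let r = 0; r < bins; r++) {
    for (let c = 0; c < bins; c++) {
      if (counts[r][c] > 0) z[r][c] /= counts[r][c]; // average repetition per cell
    }
  }

  Plotly.newPlot('terrain', [{ type: 'surface', z }], {
    scene: {
      xaxis: { title: { text: 'Distortion' } },
      yaxis: { title: { text: 'Density' } },
      zaxis: { title: { text: 'Repetition (elevation)' } },
    },
  });
}
```

Cells with no samples stay at zero and read as the "plains"; peaks emerge where people recorded high repetition counts.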
Phase 2 - Real-time synchronization
In this project, real-time synchronization is paramount. When someone presses 'record', everyone's screen must update simultaneously, immediately showing how that sample affects the collective data. I solved this with Socket.io's WebSocket connections.
The final workflow: when a user adjusts the parameters and clicks record, the server immediately performs sentiment analysis on the input text and broadcasts the result to all online clients, so everyone's visualization synchronizes instantly. For the 3D terrain, I chose Plotly.js for rendering because of its robust WebGL support, and I deployed the server on Render for automatic scaling.
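A minimal sketch of what that record-and-broadcast flow could look like on the server with Socket.io and the sentiment package. The event names ('init', 'record-sample', 'sample-added') and the sample shape are hypothetical, not the project's actual protocol.

```js
// Hypothetical server-side sketch of the record-and-broadcast flow.
const http = require('http');
const { Server } = require('socket.io');
const Sentiment = require('sentiment');

const httpServer = http.createServer();
const io = new Server(httpServer);
const sentiment = new Sentiment();
const samples = []; // in-memory store; a database would be needed for persistence

io.on('connection', (socket) => {
  // New visitors receive the existing dataset so their terrain starts populated.
  socket.emit('init', samples);

  socket.on('record-sample', ({ text, repetition, density, distortion }) => {
    const sample = {
      text,
      repetition,
      density,
      distortion,
      sentiment: sentiment.analyze(text).score, // AFINN-based score
      recordedAt: Date.now(),
    };
    samples.push(sample);
    // io.emit reaches every connected client, including the sender,
    // so all open visualizations update at once.
    io.emit('sample-added', sample);
  });
});

httpServer.listen(process.env.PORT || 3000);
```

On the client, a matching socket.on('sample-added', ...) handler would append the sample and re-render the terrain, which is what keeps every open browser in sync.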
Phase 3 - How to add sentiment analysis
At this stage, I wanted to give the machine some user-research qualities. In my previous user research work, we collected baseline information, such as gender, age, and cultural background, to correlate with people's behaviors and perspectives and to draw conclusions about which groups exhibit which characteristics. In this project, I wanted the machine to interpret users without asking them directly. Why not treat the text they voluntarily input as part of that baseline information? This seems sound, since self-selected input inherently reflects personal agency.
So I decided to run sentiment analysis on the input text to examine whether emotional state correlates with tolerance for visual disorganization. For instance, are people who input angry vocabulary more susceptible to the breakdown?
To implement this, I used the sentiment library, which scores text from -5 (extremely negative) to +5 (extremely positive). The analysis result can also be viewed within each individual sample on the webpage.
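For reference, this is roughly what a single analysis with the sentiment npm package looks like; the example input is illustrative, and the exact values and word lists depend on the package's default AFINN dictionary.

```js
// Quick look at the result object the sentiment package returns for one input.
const Sentiment = require('sentiment');
const sentiment = new Sentiment();

const result = sentiment.analyze('I hate this horrible endless noise');
console.log(result.score);       // summed word valences (negative for this input)
console.log(result.comparative); // score normalized by token count
console.log(result.negative);    // negative words found, e.g. ['hate', 'horrible']
console.log(result.positive);    // positive words found (likely none here)
```

Storing this result alongside each sample is what allows the webpage to show the analysis in the sample detail panel.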
Phase 4 - Coding, testing, and gathering feedback
DEMO LINK
https://alternative-machine.onrender.com/
I asked some of my classmates to test the demo. Although the sample size is still small, I have some interesting observations.
- Most people recorded their point at a high distortion level, myself included. However, one of my classmates, Henry, thought that when every character is perfectly neat (distortion = 0%), that is when it is most terrifying.
- Many people said the interface looks terrible on mobile (fair, since I didn't have time for responsive design at all), so they wanted to wait and try it on their computer later. But for me, any delay in sample collection kills the response rate. This made me realize how critical mobile adaptation is for web-based interactive prototypes, even when the project itself isn't primarily targeting mobile users.
Different typographic effects
Next steps
- Convert the current relative axes to absolute coordinate values
- Adjust the visualization layout so the 3D canvas doesn't block the sample detail panel
- Continue data collection for research credibility (target: 50-100 samples)
- Implement responsive design for mobile
- Consider adding a serif/sans-serif toggle to the typography controls
References
Socket.io Documentation
Plotly.js Documentation
Semantic Satiation (Wikipedia)