Small Things at Scale: Layers Panel Performance
Context
UXPin is a design tool for interactive prototypes - the kind of product where rendering performance is the user experience. The culture was good, the people were sharp, and I learned a lot from them.
The Layers panel shows the full tree of every element in a design, supports drag-and-drop reordering, and updates in real time during collaboration. It’s one of the most-used surfaces in the product. I’d been doing regular mid-level work across the codebase, and the performance problem on this panel was where my CSS and DOM background became relevant.
Challenge
The product owner brought it to the team directly: users were unhappy with performance on large designs - designs with thousands of layers, each one a node in a tree that had to respond to drag-and-drop, scrolling, and real-time updates from other collaborators. The canvas itself was already heavy, and the Layers panel added to it.
The hardware landscape made it worse. Not everyone was running the latest machine, and a web-based design tool doesn’t get the same rendering budget as a native app. Users expected native-app fluidity. The gap between what they expected and what we delivered on large designs was the problem.
Approach
The work split naturally between me and another developer. He took the JavaScript improvements - state management, update batching, the logic side. I took CSS and the DOM. It wasn't assigned that way; it just made sense given what each of us knew. I shared what I'd call ground rules about DOM structure with him - not directives, just shared context so our changes wouldn't work against each other.
The starting point was always the browser, not the code. What a framework generates and what actually ends up in the HTML are two different things. I worked from Chrome DevTools and the rendered DOM - from what the browser was actually dealing with, not what the React components said they were producing.
The methodology was simple and repetitive: make a change, benchmark in DevTools, analyze what moved, repeat. Research meant MDN and W3C docs - understanding what the browser actually does with each CSS property, not guessing.
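The numbers being compared between runs were frame times. The profiling itself lived in DevTools, but the reduction from raw timestamps to comparable figures is easy to sketch. Everything below is illustrative - frameStats and the sample data are mine, not UXPin code; in the browser the timestamps would come from requestAnimationFrame callbacks:

```javascript
// Hypothetical helper: summarize frame times from a drag session.
// Input is an array of frame timestamps in milliseconds; output is the
// kind of summary you compare before and after each optimization.
function frameStats(timestamps) {
  const deltas = [];
  for (let i = 1; i < timestamps.length; i++) {
    deltas.push(timestamps[i] - timestamps[i - 1]);
  }
  const sorted = [...deltas].sort((a, b) => a - b);
  const mean = deltas.reduce((sum, d) => sum + d, 0) / deltas.length;
  // p95 and worst-case matter more than the mean: one long frame
  // during a drag is what the user feels as a stutter.
  const p95 = sorted[Math.min(sorted.length - 1, Math.floor(sorted.length * 0.95))];
  return { mean, p95, worst: sorted[sorted.length - 1] };
}

// Example: steady 16 ms frames with one 50 ms stall in the middle.
const stats = frameStats([0, 16, 32, 48, 98, 114]);
console.log(stats.worst); // 50
```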
The optimizations weren’t dramatic individually. Minimizing DOM elements. Reducing unnecessary tiles. Choosing CSS properties that were cheaper to compute. The one that stuck with me was pseudo-elements. A pseudo-element on a single list item costs nothing. On every one of thousands of list items being repositioned during a drag operation, with real-time updates from collaborators - it compounds into a real problem. That pattern repeated across the work: something invisible at the individual level becoming obvious at scale.
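The pseudo-element pattern is simple to sketch. The selectors and styles below are hypothetical - none of this is UXPin's actual stylesheet - but the multiplication is the point:

```css
/* Costly at scale: one extra pseudo-element box per layer row.
   With thousands of rows, each drag reposition gives the browser
   twice as many boxes to lay out and paint. */
.layer-row::before {
  content: "";
  position: absolute;
  inset: 0 0 auto 0;
  border-bottom: 1px solid #e0e0e0;
}

/* Cheaper: the same visual divider on the row itself,
   with no second box to generate per item. */
.layer-row {
  border-bottom: 1px solid #e0e0e0;
}
```

On a static page the two are indistinguishable; in a tree of thousands of rows being repositioned mid-drag, the first version doubles the box count the browser has to manage.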
The other developer shipped his JS improvements first. Then my CSS and DOM work cut another roughly 30% from the already-improved solution. The whole thing took about one to two weeks of focused work. There was no single breakthrough moment - just steady, incremental progress from profiling and research.
Outcome
The combined improvement was 30-40% in frame time. More important than the number: users had been actively complaining about this, and the fix addressed a pain point they'd been vocal about. When performance problems show up in user feedback without anyone prompting for it, they've crossed a threshold that matters.
Reflection
The useful thing from this work isn’t a specific technique. It’s that scale changes what matters. CSS that’s perfectly fine on a marketing page becomes a bottleneck in a design tool rendering thousands of interactive elements. You can’t reason about performance at scale from how things behave at small scale.
The other part is about collaboration. Neither the JS improvements nor the CSS/DOM work would have been as effective alone. We divided the work because each of us could see things the other would miss, and sharing context about constraints kept our changes from conflicting. No heroics, just two people covering more ground together than either would alone.