I’ve got some blind spots in CSS-related performance things. One example is the will-change property. It’s a good name. You’re telling the browser some particular property (or the scroll-position or contents) will, uh, change:
.el {
  will-change: opacity;
}

.el.additional-hard-to-know-state {
  opacity: 0;
}
But is that important to do? I don’t know. The point, as I understand it, is that it will kick .el into processing/rendering/painting on the GPU rather than the CPU, which is a speed boost. Sort of like the classic transform: translate3d(0, 0, 0); hack. In the exact case above, it doesn’t seem to my brain like it would matter. I have in my head that opacity is one of the “cheapest” things to animate, so there is no particular benefit to will-change. Or maybe it matters noticeably on some browsers or devices, but not others? This is front-end development, after all.
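(For reference, that old hack was basically a no-op transform whose only job was to nudge the element onto its own layer:)

.el {
  /* the classic hack: a 3D transform that changes nothing visually,
     but historically pushed the element onto its own compositor layer */
  transform: translate3d(0, 0, 0);
}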
There was a spurt of articles about will-change around 2014/2015 that warned about weird behavior, like unexpected changes in stacking contexts, and about being careful not to use it “too much.” There was also advice spreading around that you should never use this property directly in CSS stylesheets; you should only apply it in JavaScript before the state change, then remove it after you no longer need it.
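The pattern, as I understood that advice, went roughly like this (a loose sketch reusing the .el from above; the particular events are just one plausible trigger, not gospel):

const el = document.querySelector('.el');

// when we have a strong hint that the change is coming (the user hovers a
// trigger, a menu is about to open, etc.), set the hint…
el.addEventListener('mouseenter', () => {
  el.style.willChange = 'opacity';
});

// …and once the change has actually happened, remove it so the browser
// can drop whatever extra work the hint caused
el.addEventListener('transitionend', () => {
  el.style.willChange = 'auto';
});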
I have no idea if any of those things are still true. Sorry! I’d love to read a 2022 deep dive on will-change. We’re capable of that kind of testing, so I’ll put it in the idea pile. But my point is that there are things in CSS that are designed explicitly for performance that are confusing to me, and I wish I had a fuller understanding of them because they seem like Very Big Deals.
Take “How I made Google’s data grid scroll 10x faster with one line of CSS” by Johan Isaksson. A 10✕ scrolling performance improvement is a massive deal! Know how they fixed it?
[…] as I was browsing the “Top linking sites” page I noticed major scroll lag. This happens when choosing to display a larger dataset (500 rows) instead of the default 10 results.
[…]
So, what did I do? I simply added a single line of CSS to the <table> on the Elements panel, specifying that it will not affect the layout or style of other elements on the page:
table {
  contain: strict;
}
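As I understand it (and I could be off on the details), strict is shorthand for the strongest set of containments, including size containment, which means the browser sizes the element as if it were empty; so the element generally needs explicit dimensions alongside it or it can collapse. Roughly:

table {
  contain: strict;  /* roughly: size + layout + paint containment */
  height: 400px;    /* hypothetical: with size containment, the contents no longer size the box */
}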
The contain property is another that I sort of get, but I’d still call it a blind spot because my brain doesn’t just automatically think of when I could (or should?) use it. But that’s a bummer, because clearly I’m not building interfaces as performant as I could be if I did understand contain better.
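When I do think to reach for it, the milder values seem like the safer starting point. A sketch of what I mean (made-up class name, and not something I’ve measured myself):

.card {
  /* the card's internals can't affect layout or painting outside its own box,
     but the card still sizes itself from its contents (no size containment) */
  contain: layout paint;
}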
There’s another! The content-visibility property. The closest I came to understanding it was after watching Jake and Surma’s video on it, where they used it (along with contain-intrinsic-size and some odd magic numbers) to dramatically speed up a long page. What hasn’t stuck with me is when I should use it on my pages.
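From what I remember of the video, the pattern was roughly this (the class name is made up and the length is one of those magic-number guesses at the section’s eventual size, not a value I’ve verified):

.below-the-fold-section {
  /* skip layout and paint work for this section until it nears the viewport */
  content-visibility: auto;
  /* reserve an estimated size so the scrollbar doesn't jump as sections render in */
  contain-intrinsic-size: 1000px;
}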
Are all three of these features “there if you need them” features? Is it OK to ignore them until you notice poor performance on something (like a massive page) and then reach for them to attempt to solve it? Almost “don’t use these until you need them,” otherwise you’re in premature optimization territory. The trouble with that is the classic situation where you won’t actually notice the poor performance unless you are very actively testing on the lowest-specced devices out there.
Or are these features “this is what modern CSS is and you should be thinking of them like you think of padding” territory? I kind of suspect it’s more like that. If you’re building an element you know won’t change in certain ways, it’s probably worth “containing” it. If you’re building an element you know will change in certain ways, it’s probably worth providing that info to browsers. If you’re building a part of a page you know is always below the fold, it’s probably worth avoiding the paint on it. But personally, I just don’t have enough of this fully grokked to offer any solid advice.
Maybe the DevTools performance or Lighthouse reports could suggest these properties? The browser has a better understanding of the DOM and render trees than the developer, so it can better identify which element is causing a performance issue and what kind of issue, and then suggest the appropriate CSS feature to mitigate the issue.
I think you are on the wrong track if you think about will-change in terms of CPU/GPU. A better way to understand it is to realize that a rendered web page can be divided up into several layers. Each layer is painted independently into a static image (which needs to be repainted as a whole when it changes), and then the images are composited, one atop the other. What makes the performance difference are three questions:
How many layers are there? The more layers you have, the more composition operations are needed. But if there aren’t enough, changes may happen inside one layer, and it needs to be re-painted more often.
What dimensions does each layer have? The smaller the layer, the fewer pixels have to be computed during composition.
Which composition algorithm has to be used? If the composition operation is over and the upper layer is opaque, the old pixels are simply overwritten with the new ones, with no computation needed. But if there is some transparency involved, or even a blending algorithm, each pixel is the result of a computation cycle.

Browsers figure out the optimal division of a page into layers for themselves. The properties you named help them figure this out. If setting them helps in answering the above questions, give them a try.
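Roughly, that over operation is just this bit of per-pixel math (premultiplied alpha, sketched in JavaScript), which is why a fully opaque top layer is the cheap case:

// Porter-Duff "source over", premultiplied alpha: out = top + bottom * (1 - top.a)
function over(top, bottom) {
  const mix = (t, b) => t + b * (1 - top.a);
  return {
    r: mix(top.r, bottom.r),
    g: mix(top.g, bottom.g),
    b: mix(top.b, bottom.b),
    a: top.a + bottom.a * (1 - top.a),
  };
}
// if top.a is 1, every channel collapses to the top value:
// the lower layer's pixels are simply overwritten, no real math needed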
I learned about this from a talk by Martin Splitt of Google, given in 2017 in German and English.
Chris, I’m a long-time reader of your blog, and I wanted to say that I really, really like your humble, honest way of writing about things you haven’t fully understood yet. Thank you!!
Yes. It’s the golden rule of optimization: profile, profile, profile. Then use these props to solve the issues you see in the profile. For example, if you see a lot of Recalculate Style time in your profile when there are a lot of elements on screen, use contain: strict.
You don’t necessarily have to test on a slow device, since the profile can reveal if the relative timing proportions look wrong, even if the absolute processing time is low.
I loved ccprog’s comment. The idea of different layers or segments having different degrees of change, and of providing hints to the underlying mechanisms (for rendering, painting, overwriting, etc.) as an assist to optimal performance, is interesting. As an old timer, it throws me back to when we explicitly divided our software into logical segments and defined an “overlay map” so that the OS would know which parts needed to be in memory at the same time for best performance and which could be swapped out when done with. All of that became obsolete when virtual memory/paging systems came along, keeping track of which pages were frequently hit and needed to be kept in real memory. Perhaps will-change and other pragmas that we use today will similarly fall by the wayside with future advances.
Relayouts and repaints may happen even with will-change. It seems the browser cannot guarantee anything, as many other parameters can prevent optimizations. I consider will-change more a suggestion for browsers than an actual instruction.

I think of will-change and contain in two separate categories:

* will-change may cause the browser to do extra work that it might not have done on its own
* contain tells the browser it doesn’t need to do some of the work (paint or layout) it would normally do

So, to my mind, contain is safer… but yes, premature optimization should be avoided.

Composition is generally faster than repainting. Painting is destructive, whereas compositing is not. Before Photoshop had editable type, you would enter the text you wanted and it was rasterized into the current layer. If you wanted to move it to a different position, scale it up, or change its opacity, you’d have to paint over the rasterized text and use the text tool all over again. Or, you could use the text tool on a transparent layer and be able to reposition it without having to erase it from the background.
These days, browsers are pretty good about creating compositor layers automatically when they need to (such as for opacity and transform). But if that happens suddenly, say in response to a user event, there’s a good deal of work that needs to be done: repaint the background without the element in question, allocate a new compositor layer and paint the element onto that, move both painted layers to the GPU, then composite them. Doing all of this at the start of a transition can cause a bit of jank for the first few frames while these pixels are being shuffled around. That’s where will-change comes in. It lets you suggest that the browser should do these steps now, in anticipation of having to later animate, transition, or otherwise change a property that the compositor can handle easily (mostly opacity and transform). Thus, will-change is most advantageous when you can suggest that the browser prepare before the moment you need to change the compositing properties. Otherwise, it will handle it just-in-time anyway.

As ccprog points out, however, you don’t want to add will-change unnecessarily. Creating too many compositor layers takes up considerable video memory, and there is overhead to pushing paint changes on those layers to the graphics card. It’s not a panacea or a guaranteed performance boost. It has the potential to negatively affect performance.

In addition to profiling, using the Paint Flashing and Layer Borders rendering options in dev tools can be instructive when trying to figure out what the browser is doing or how will-change and contain can affect performance.
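One pure-CSS way to act on that “prepare before you need it” idea (hypothetical selectors, and not something I’ve profiled) is to set the hint when the user signals intent, before the transition actually fires:

/* hypothetical dropdown: promote it while the user hovers or focuses the menu,
   so the compositor layer exists before the open/close transition starts */
.menu:hover .menu__dropdown,
.menu:focus-within .menu__dropdown {
  will-change: transform, opacity;
}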