Passkeys: What the Heck and Why?
Geeky OS security enhancements don’t exactly make big headlines in the front-end community, but it stands to reason that passkeys are going to be a “thing”. And considering how passwords and password apps affect the user experience of things like authentication and form processing, we might want to at least wrap our minds around them, so we know what’s coming.
That’s the point of this article. I’ve been studying and experimenting with passkeys — and the WebAuthn API they are built on top of — for some time now. Let me share what I’ve learned.
Here’s the obligatory section of the terminology you’re going to want to know as we dig in. Like most tech, passkeys are fraught with esoteric verbiage and acronyms that are often roadblocks to understanding. I’ll try to demystify several of them for you here.
Before we can talk specifically about passkeys, we need to talk about another protocol called WebAuthn (also known as FIDO2). Passkeys are a specification that is built on top of WebAuthn. WebAuthn allows for public key cryptography to replace passwords. We use some sort of security device, such as a hardware key or Trusted Platform Module (TPM), to create private and public keys.
The public key is for anyone to use. The private key, however, cannot be removed from the device that generated it. This was one of the issues with WebAuthn; if you lose the device, you lose access.
Passkeys solve this by providing a cloud sync of your credentials. In other words, what you generate on your computer can now also be used on your phone (though confusingly, there are single-device credentials too).
At the time of writing, only iOS, macOS, and Android provide full support for cloud-synced passkeys, and even then, they are limited by the browser being used. Google and Apple provide an interface for syncing via their Google Password Manager and Apple iCloud Keychain services, respectively.
In public key cryptography, you can perform what is known as signing. Signing takes a piece of data and then runs it through a signing algorithm with the private key, where it can then be verified with the public key.
Anyone can generate a public key pair, and it’s not attributable to any person since any person could have generated it in the first place. What makes it useful is that only data signed with the private key can be verified with the public key. That’s the portion that replaces a password — a server stores the public key, and we sign in by verifying that we have the other half (i.e., the private key) by signing a random challenge.
As an added benefit, since we’re storing the user’s public keys within a database, there is no longer a concern with password breaches affecting millions of users. This reduces phishing, breaches, and a slew of other security issues that our password-dependent world currently faces. If a database is breached, all that’s stored is the users’ public keys, making it virtually useless to an attacker.
No more forgotten emails and their associated passwords, either! The browser will remember which credentials you used for which website — all you need to do is make a couple of clicks, and you’re logged in. You can provide a secondary means of verification to use the passkey, such as biometrics or a PIN, but those are still much faster than the passwords of yesteryear.
Public key cryptography involves having a private and a public key (known as a key pair). The keys are generated together and have separate uses. For example, the private key is intended to be kept secret, and the public key is intended for whomever you want to exchange messages with.
When it comes to encrypting and decrypting a message, the recipient’s public key is used to encrypt a message so that only the recipient’s private key can decrypt the message. In security parlance, this is known as “providing confidentiality”. However, this doesn’t provide proof that the sender is who they say they are, as anyone can potentially use a public key to send someone an encrypted message.
There are cases where we need to verify that a message did indeed come from its sender. In these cases, we use signing and signature verification to ensure that the sender is who they say they are (also known as authenticity). In public key (also called asymmetric) cryptography, this is generally done by signing the hash of a message, so that only the public key can correctly verify it. The hash and the sender’s private key produce a signature after running it through an algorithm, and then anyone can verify the message came from the sender with the sender’s public key.
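To make that concrete, here’s a minimal sketch of signing and verification with the Web Crypto API, using ECDSA on the P-256 curve (the same algorithm family as WebAuthn’s ES256). All of the names here are illustrative, not part of any passkey flow:

const { publicKey, privateKey } = await crypto.subtle.generateKey(
  { name: 'ECDSA', namedCurve: 'P-256' },
  false, // the private key is not extractable
  ['sign', 'verify']
);

// A random "challenge" standing in for the message hash
const challenge = crypto.getRandomValues(new Uint8Array(32));

// Only the private key can produce the signature...
const signature = await crypto.subtle.sign(
  { name: 'ECDSA', hash: 'SHA-256' },
  privateKey,
  challenge
);

// ...but anyone holding the public key can verify it
const verified = await crypto.subtle.verify(
  { name: 'ECDSA', hash: 'SHA-256' },
  publicKey,
  signature,
  challenge
);
console.log(verified); // true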
To access passkeys, we first need to generate and store them somewhere. Some of this functionality can be provided with an authenticator. An authenticator is any hardware or software-backed device that provides the ability for cryptographic key generation. Think of those one-time passwords you get from Google Authenticator, 1Password, or LastPass, among others.
For example, a software authenticator can use the Trusted Platform Module (TPM) or secure enclave of a device to create credentials. The credentials can then be stored remotely and synced across devices, e.g. passkeys. A hardware authenticator would be something like a YubiKey, which can generate and store keys on the device itself.
To access the authenticator, the browser needs to have access to hardware, and for that, we need an interface. The interface we use here is the Client to Authenticator Protocol (CTAP). It allows access to different authenticators over different mechanisms. For example, we can access an authenticator over NFC, USB, and Bluetooth by utilizing CTAP.
One of the more interesting ways to use passkeys is by connecting your phone over Bluetooth to another device that might not support passkeys. When the devices are paired over Bluetooth, I can log into the browser on my computer using my phone as an intermediary!
Passkeys and WebAuthn keys differ in several ways. First, passkeys are considered multi-device credentials and can be synced across devices. By contrast, WebAuthn keys are single-device credentials — a fancy way of saying you’re bound to one device for verification.
Second, to authenticate to a server, WebAuthn keys need to provide the user handle for login, after which an allowCredentials list is returned to the client from the server, which informs what credentials can be used to log in. Passkeys skip this step and use the server’s domain name to show which keys are already bound to that site. You’re able to select the passkey that is associated with that server, as it’s already known by your system.
Otherwise, the keys are cryptographically the same; they only differ in how they’re stored and what information they use to start the login process.
The process for generating a WebAuthn key or a passkey is very similar: get a challenge from the server and then use the navigator.credentials.create web API to generate a public key pair. Then, send the challenge and the public key back to the server to be stored.
Upon receiving the public key and challenge, the server validates the challenge and the session from which it was created. If that checks out, the public key is stored, as well as any other relevant information like the user identifier or attestation data, in the database.
The user has one more step — retrieve another challenge from the server and use the navigator.credentials.get API to sign the challenge. We send back the signed challenge to the server, and the server verifies the challenge, then logs us in if the signature passes.
There is, of course, quite a bit more to each step. But that is generally how we’d log into a website using WebAuthn or passkeys.
Passkeys are used in two distinct phases: the attestation and assertion phases.
The attestation phase can also be thought of as the registration phase. You’d normally sign up with an email and password for a new website; in this case, however, we’d be using our passkey instead.
The assertion phase is similar to how you’d log in to a website after signing up.
The navigator.credentials.create API is the focus of our attestation phase. We’re registered as a new user in the system and need to generate a new public key pair. However, we need to specify what kind of key pair we want to generate. That means we need to provide options to navigator.credentials.create.
// The `challenge` is random and has to come from the server
const publicKey: PublicKeyCredentialCreationOptions = {
  challenge: safeEncode(challenge),
  rp: {
    id: window.location.host,
    name: document.title,
  },
  user: {
    id: new TextEncoder().encode(crypto.randomUUID()), // Why not make it random?
    name: 'Your username',
    displayName: 'Display name in browser',
  },
  pubKeyCredParams: [
    {
      type: 'public-key',
      alg: -7, // ES256
    },
    {
      type: 'public-key',
      alg: -256, // RS256
    },
  ],
  authenticatorSelection: {
    userVerification: 'preferred', // Do you want to use biometrics or a PIN?
    residentKey: 'required', // Create a resident key, e.g. passkey
  },
  attestation: 'indirect', // indirect, direct, or none
  timeout: 60_000,
};

const pubKeyCredential: PublicKeyCredential = await navigator.credentials.create({
  publicKey
});

const {
  id // the key id, a.k.a. kid
} = pubKeyCredential;
const pubKey = pubKeyCredential.response.getPublicKey();
const { clientDataJSON, attestationObject } = pubKeyCredential.response;
const { type, challenge, origin } = JSON.parse(new TextDecoder().decode(clientDataJSON));

// Send data off to the server for registration
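One note on that snippet: safeEncode() isn’t defined anywhere here; it’s a helper that turns the server’s base64url-encoded challenge string into the BufferSource the API expects. A minimal sketch of such a helper, under that assumption, might be:

// Hypothetical helper: decode a base64url string into a Uint8Array,
// since `challenge` must be a BufferSource rather than a string
function safeEncode(challenge) {
  const base64 = challenge.replace(/-/g, '+').replace(/_/g, '/');
  return Uint8Array.from(atob(base64), (c) => c.charCodeAt(0));
}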
We’ll get a PublicKeyCredential, which contains an AuthenticatorAttestationResponse that comes back after creation. The credential has the generated key pair’s ID.
The response provides a couple of bits of useful information. First, we have our public key in this response, and we need to send that to the server to be stored. Second, we also get back the clientDataJSON property, which we can decode, and from there, get back the type, challenge, and origin of the passkey.
For attestation, we want to validate the type, challenge, and origin on the server, as well as store the public key with its identifier, e.g. kid. We can also optionally store the attestationObject if we wish. Another useful property to store is the COSE algorithm, which is defined above in our PublicKeyCredentialCreationOptions with alg: -7 or alg: -256, in order to easily verify any signed challenges in the assertion phase.
The navigator.credentials.get API will be the focus of the assertion phase. Conceptually, this would be where the user logs in to the web application after signing up.
// The `challenge` is random and has to come from the server
const publicKey: PublicKeyCredentialRequestOptions = {
  challenge: new TextEncoder().encode(challenge),
  rpId: window.location.host,
  timeout: 60_000,
};

const publicKeyCredential: PublicKeyCredential = await navigator.credentials.get({
  publicKey,
  mediation: 'optional',
});

const {
  id // the key id, a.k.a. kid
} = publicKeyCredential;
const { clientDataJSON, authenticatorData, signature, userHandle } = publicKeyCredential.response;
const { type, challenge, origin } = JSON.parse(new TextDecoder().decode(clientDataJSON));

// Send data off to the server for verification
We’ll again get a PublicKeyCredential, this time with an AuthenticatorAssertionResponse. The credential again includes the key identifier.
We also get the type, challenge, and origin from the clientDataJSON again. The signature is now included in the response, as well as the authenticatorData. We’ll need those and the clientDataJSON to verify that this was signed with the private key.
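As a rough sketch of what that server-side verification could look like in Node.js, assuming the public key was stored as the SPKI DER bytes from getPublicKey() at registration, and that authenticatorData, clientDataJSON, and signature all arrive as Buffers (the variable names are illustrative):

import { createHash, createPublicKey, verify } from 'node:crypto';

// WebAuthn signatures cover authenticatorData concatenated with
// the SHA-256 hash of clientDataJSON
const clientDataHash = createHash('sha256').update(clientDataJSON).digest();
const signedData = Buffer.concat([authenticatorData, clientDataHash]);

const publicKey = createPublicKey({
  key: storedPublicKeyDer, // bytes from getPublicKey(), saved at registration
  format: 'der',
  type: 'spki',
});

// Passing 'sha256' works for both ES256 (ECDSA) and RS256 (RSA) keys
const isValid = verify('sha256', signedData, publicKey, signature);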
The authenticatorData includes some properties that are worth tracking. First is the SHA-256 hash of the origin you’re using, located within the first 32 bytes, which is useful for verifying that the request comes from the same origin server. Second is the signCount, which is from byte 33 to 37. This is generated from the authenticator and should be compared to its previous value to ensure that nothing fishy is going on with the key. The value should always be 0 when it’s a multi-device passkey and should be randomly larger than the previous signCount when it’s a single-device passkey.
Once you’ve asserted your login, you should be logged in — congratulations! Passkeys are a pretty great protocol, but they do come with some caveats.
There’s a lot of upside to passkeys; however, there are some issues at the time of this writing. For one thing, passkey support is still somewhat early, with only single-device credentials allowed on Windows and very little support for Linux systems. Passkeys.dev provides a nice table that’s sort of like the caniuse of this protocol.
Also, Google’s and Apple’s passkeys platforms do not communicate with each other. If you want to get your credentials from your Android phone over to your iPhone… well, you’re out of luck for now. That’s not to say there is no interoperability! You can log in to your computer by using your phone as an authenticator. But it would be much cleaner just to have it built into the operating system and synced without it being locked at the vendor level.
What does the passkeys protocol of the future look like? It looks pretty good! Once it gains support from more operating systems, there should be an uptake in usage, and you’ll start seeing it used more and more in the wild. Some password managers are even going to support them first-hand.
Passkeys are by no means only supported on the web. Android and iOS will both support native passkeys as first-class citizens. We’re still in the early days of all this, but expect to see it mentioned more and more.
After all, we eliminate the need for passwords, and by doing so, we make the world a safer place!
Here are some more resources if you want to learn more about Passkeys. There’s also a repository and demo I put together for this article.
Some Cross-Browser DevTools Features You Might Not Know
But there are quite a few DevTools features that are interoperable, even some lesser-known ones that I’m about to share with you.
For the sake of brevity, I use “Chromium” to refer to all Chromium-based browsers, like Chrome, Edge, and Opera, in the article. Many of the DevTools in them offer the exact same features and capabilities as one another, so this is just my shorthand for referring to all of them at once.
Sometimes the DOM tree is full of nodes nested in nodes that are nested in other nodes, and so on. That makes it pretty tough to find the exact one you’re looking for, but you can quickly search the DOM tree using Cmd + F (macOS) or Ctrl + F (Windows).
Additionally, you can search using a valid CSS selector, like .red, or using an XPath, like //div/h1.
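Those same kinds of lookups also work from the Console, by the way, via the built-in utility functions (these are console-only helpers, not page JavaScript):

$$('.red');     // every element matching a CSS selector
$x('//div/h1'); // every node matching an XPath expression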
In Chromium browsers, the focus automatically jumps to the node that matches the search criteria as you type, which could be annoying if you are working with longer search queries or a large DOM tree. Fortunately, you can disable this behavior by heading to Settings (F1) → Preferences → Global → Search as you type → Disable.
After you have located the node in the DOM tree, you can scroll the page to bring the node within the viewport by right-clicking on the node and selecting “Scroll into view”.
DevTools provides many different ways to access a DOM node directly from the console.
For example, you can use $0 to access the currently selected node in the DOM tree. Chromium browsers take this one step further by allowing you to access previously selected nodes in reverse chronological order using $1, $2, $3, etc.
Another thing that Chromium browsers allow you to do is copy the node path as a JavaScript expression in the form of document.querySelector by right-clicking on the node and selecting Copy → Copy JS path, which can then be used to access the node in the console.
Here’s another way to access a DOM node directly from the console: as a temporary variable. This option is available by right-clicking on the node and selecting an option. That option is labeled differently in each browser’s DevTools:
DevTools can help visualize elements that match certain properties by displaying a badge next to the node. Badges are clickable, and different browsers offer a variety of different badges.
In Safari, there is a badge button in the Elements panel toolbar which can be used to toggle the visibility of specific badges. For example, if a node has a display: grid or display: inline-grid CSS declaration applied to it, a grid badge is displayed next to it. Clicking on the badge will highlight grid areas, track sizes, line numbers, and more, on the page.
The badges that are currently supported in Firefox’s DevTools are listed in the Firefox source docs. For example, a scroll badge indicates a scrollable element. Clicking on the badge highlights the element causing the overflow with an overflow badge next to it.
In Chromium browsers, you can right-click on any node and select “Badge settings…” to open a container that lists all of the available badges. For example, elements with scroll-snap-type will have a scroll-snap badge next to them, which, on click, will toggle the scroll-snap overlay on that element.
We’ve been able to take screenshots from some DevTools for a while now, but it’s now available in all of them and includes new ways to take full-page shots.
The process starts by right-clicking on the DOM node you want to capture. Then select the option to capture the node, which is labeled differently depending on which DevTools you’re using.
Repeat the same steps on the html node to take a full-page screenshot. When you do, though, it’s worth noting that Safari retains the transparency of the element’s background color — Chromium and Firefox will capture it as a white background.
There’s another option! You can take a “responsive” screenshot of the page, which allows you to capture the page at a specific viewport width. As you might expect, each browser has different ways to get there.
Cmd + Shift + M (macOS) or Ctrl + Shift + M (Windows). Or click the “Devices” icon next to the “Inspect” icon.

Chrome lets you visualize and inspect top-layer elements, like a dialog, alert, or modal. When an element is added to the #top-layer, it gets a top-layer badge next to it, which, on click, jumps you to the top-layer container located just after the </html> tag.
The order of the elements in the top-layer container follows the stacking order, which means the last one is on the top. Click the reveal badge to jump back to the node.
Firefox links the element referencing the ID attribute to its target element in the same DOM and highlights it with an underline. Use Cmd + Click (macOS) or Ctrl + Click (Windows) to jump to the target element with the identifier.
Quite a few things, right? It’s awesome that there are some incredibly useful DevTools features that are supported in Chromium, Firefox, and Safari alike. Are there any other lesser-known features supported by all three that you like?
There are a few resources I keep close by to stay on top of what’s new. I thought I’d share them here:
Making Calendars With Accessibility and Internationalization in Mind
There are many considerations when building a calendar component — far more than what is covered in the articles I linked up. If you think about it, calendars are fraught with nuance, from handling timezones and date formats to localization and even making sure dates flow from one month to the next… and that’s before we even get into accessibility and additional layout considerations depending on where the calendar is displayed and whatnot.
Many developers fear the Date() object and stick with older libraries like moment.js. But while there are many “gotchas” when it comes to dates and formatting, JavaScript has a lot of cool APIs and stuff to help out!
I don’t want to re-create the wheel here, but I will show you how we can get a dang good calendar with vanilla JavaScript. We’ll look into accessibility, using semantic markup and screenreader-friendly <time> tags — as well as internationalization and formatting, using the Intl.Locale, Intl.DateTimeFormat, and Intl.NumberFormat APIs.
In other words, we’re making a calendar… only without the extra dependencies you might typically see used in a tutorial like this, and with some of the nuances you might not typically see. And, in the process, I hope you’ll gain a new appreciation for newer things that JavaScript can do while getting an idea of the sorts of things that cross my mind when I’m putting something like this together.
What should we call our calendar component? In my native language, it would be called “kalender element”, so let’s use that and shorten that to “Kal-El” — also known as Superman’s name on the planet Krypton.
Let’s create a function to get things going:
function kalEl(settings = {}) { ... }
This method will render a single month. Later we’ll call this method from [...Array(12).keys()] to render an entire year.
One of the common things a typical online calendar does is highlight the current date. So let’s create a reference for that:
const today = new Date();
Next, we’ll create a “configuration object” that we’ll merge with the optional settings object of the primary method:
const config = Object.assign(
  {
    locale: (document.documentElement.getAttribute('lang') || 'en-US'),
    today: {
      day: today.getDate(),
      month: today.getMonth(),
      year: today.getFullYear()
    }
  }, settings
);
We check if the root element (<html>) contains a lang attribute with locale info; otherwise, we’ll fall back to using en-US. This is the first step toward internationalizing the calendar.
We also need to determine which month to initially display when the calendar is rendered. That’s why we extended the config object with the primary date. This way, if no date is provided in the settings object, we’ll use the today reference instead:
const date = config.date ? new Date(config.date) : today;
We need a little more info to properly format the calendar based on locale. For example, we might not know whether the first day of the week is Sunday or Monday, depending on the locale. If we have the info, great! But if not, we’ll update it using the Intl.Locale API. The API has a weekInfo object that returns a firstDay property that gives us exactly what we’re looking for without any hassle. We can also get which days of the week are assigned to the weekend:
if (!config.info) config.info = new Intl.Locale(config.locale).weekInfo || {
  firstDay: 7,
  weekend: [6, 7]
};
Again, we create fallbacks. The “first day” of the week for en-US is Sunday, so it defaults to a value of 7. This is a little confusing, as the getDay method in JavaScript returns the days as [0-6], where 0 is Sunday… don’t ask me why. The weekends are Saturday and Sunday, hence [6, 7].
Before we had the Intl.Locale API and its weekInfo method, it was pretty hard to create an international calendar without many objects and arrays holding information about each locale or region. Nowadays, it’s easy-peasy. If we pass in en-GB, the method returns:
// en-GB
{
  firstDay: 1,
  weekend: [6, 7],
  minimalDays: 4
}
In a country like Brunei (ms-BN), the weekend is Friday and Sunday:
// ms-BN
{
  firstDay: 7,
  weekend: [5, 7],
  minimalDays: 1
}
You might wonder what that minimalDays property is. That’s the fewest days required in the first week of a month to be counted as a full week. In some regions, it might be just one day. For others, it might be a full seven days.
Next, we’ll create a render method within our kalEl method:
const render = (date, locale) => { ... }
We still need some more data to work with before we render anything:
const month = date.getMonth();
const year = date.getFullYear();
const numOfDays = new Date(year, month + 1, 0).getDate();
const renderToday = (year === config.today.year) && (month === config.today.month);
The last one is a Boolean that checks whether today exists in the month we’re about to render.
We’re going to get deeper in rendering in just a moment. But first, I want to make sure that the details we set up have semantic HTML tags associated with them. Setting that up right out of the box gives us accessibility benefits from the start.
First, we have the non-semantic wrapper: <kal-el>. That’s fine because there isn’t a semantic <calendar> tag or anything like that. If we weren’t making a custom element, <article> might be the most appropriate element since the calendar could stand on its own page.
The <time> element is going to be a big one for us because it helps translate dates into a format that screenreaders and search engines can parse more accurately and consistently. For example, here’s how we can convey “January 2023” in our markup:
<time datetime="2023-01">January <i>2023</i></time>
The row above the calendar’s dates containing the names of the days of the week can be tricky. It’s ideal if we can write out the full names for each day — e.g. Sunday, Monday, Tuesday, etc. — but that can take up a lot of space. So, let’s abbreviate the names for now inside of an <ol> where each day is a <li>:
<ol>
  <li><abbr title="Sunday">Sun</abbr></li>
  <li><abbr title="Monday">Mon</abbr></li>
  <!-- etc. -->
</ol>
We could get tricky with CSS to get the best of both worlds. For example, if we modified the markup a bit like this:
<ol>
  <li>
    <abbr title="S">Sunday</abbr>
  </li>
</ol>
…we get the full names by default. We can then “hide” the full name when space runs out and display the title attribute instead:
@media all and (max-width: 800px) {
  li abbr::after {
    content: attr(title);
  }
}
But, we’re not going that way because the Intl.DateTimeFormat API can help here as well. We’ll get to that in the next section when we cover rendering.
Each date in the calendar grid gets a number. Each number is a list item (<li>) in an ordered list (<ol>), and the inline <time> tag wraps the actual number.
<li>
  <time datetime="2023-01-01">1</time>
</li>
And while I’m not planning to do any styling just yet, I know I will want some way to style the date numbers. That’s possible as-is, but I also want to be able to style weekday numbers differently than weekend numbers if I need to. So, I’m going to include data-* attributes specifically for that: data-weekend and data-today.
There are 52 weeks in a year, sometimes 53. While it’s not super common, it can be nice to display the number for a given week in the calendar for additional context. I like having it now, even if I don’t wind up using it. But we’ll totally use it in this tutorial.
We’ll use a data-weeknumber attribute as a styling hook and include it in the markup for each date that is the week’s first date.
<li data-day="7" data-weeknumber="1" data-weekend="">
  <time datetime="2023-01-08">8</time>
</li>
Let’s get the calendar on a page! We already know that <kal-el> is the name of our custom element. The first thing we need to configure is the firstDay property on it, so the calendar knows whether Sunday or some other day is the first day of the week.
<kal-el data-firstday="${ config.info.firstDay }">
We’ll be using template literals to render the markup. To format the dates for an international audience, we’ll use the Intl.DateTimeFormat API, again using the locale we specified earlier.
When we call the month, we can set whether we want to use the long name (e.g. February) or the short name (e.g. Feb.). Let’s use the long name since it’s the title above the calendar:
<time datetime="${year}-${(pad(month))}">
  ${new Intl.DateTimeFormat(
    locale,
    { month: 'long' }).format(date)} <i>${year}</i>
</time>
For weekdays displayed above the grid of dates, we need both the long (e.g. “Sunday”) and short (abbreviated, i.e. “Sun”) names. This way, we can use the “short” name when the calendar is short on space:
Intl.DateTimeFormat([locale], { weekday: 'long' })
Intl.DateTimeFormat([locale], { weekday: 'short' })
Let’s make a small helper method that makes it a little easier to call each one:
const weekdays = (firstDay, locale) => {
  const date = new Date(0);
  const arr = [...Array(7).keys()].map(i => {
    date.setDate(5 + i)
    return {
      long: new Intl.DateTimeFormat([locale], { weekday: 'long' }).format(date),
      short: new Intl.DateTimeFormat([locale], { weekday: 'short' }).format(date)
    }
  })
  for (let i = 0; i < 8 - firstDay; i++) arr.splice(0, 0, arr.pop());
  return arr;
}
Here’s how we invoke that in the template:
<ol>
  ${weekdays(config.info.firstDay, locale).map(name => `
    <li>
      <abbr title="${name.long}">${name.short}</abbr>
    </li>`).join('')
  }
</ol>
And finally, the days, wrapped in an <ol> element:
${[...Array(numOfDays).keys()].map(i => {
  const cur = new Date(year, month, i + 1);
  let day = cur.getDay(); if (day === 0) day = 7;
  const today = renderToday && (config.today.day === i + 1) ? ' data-today' : '';
  return `
    <li data-day="${day}"${today}${i === 0 || day === config.info.firstDay ? ` data-weeknumber="${new Intl.NumberFormat(locale).format(getWeek(cur))}"` : ''}${config.info.weekend.includes(day) ? ` data-weekend` : ''}>
      <time datetime="${year}-${(pad(month))}-${pad(i)}" tabindex="0">
        ${new Intl.NumberFormat(locale).format(i + 1)}
      </time>
    </li>`
}).join('')}
Let’s break that down:

- We create a day variable for the current day in the iteration, using the Intl.Locale API and getDay().
- If the day is equal to today, we add a data-* attribute.
- Finally, we return the <li> element as a string with merged data.
- tabindex="0" makes the element focusable, when using keyboard navigation, after any positive tabindex values (Note: you should never add positive tabindex values).

To “pad” the numbers in the datetime attribute, we use a little helper method:
const pad = (val) => (val + 1).toString().padStart(2, '0');
Again, the “week number” is where a week falls in a 52-week calendar. We use a little helper method for that as well:
function getWeek(cur) {
  const date = new Date(cur.getTime());
  date.setHours(0, 0, 0, 0);
  date.setDate(date.getDate() + 3 - (date.getDay() + 6) % 7);
  const week = new Date(date.getFullYear(), 0, 4);
  return 1 + Math.round(((date.getTime() - week.getTime()) / 86400000 - 3 + (week.getDay() + 6) % 7) / 7);
}
I didn’t write this getWeek method. It’s a cleaned-up version of this script.
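As a quick check, this follows ISO-8601 week numbering, which puts January 8, 2023 in week 1, matching the data-weeknumber="1" in the markup example above:

getWeek(new Date(2023, 0, 8)); // 1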
And that’s it! Thanks to the Intl.Locale, Intl.DateTimeFormat, and Intl.NumberFormat APIs, we can now simply change the lang attribute of the <html> element to change the context of the calendar based on the current region:
- de-DE
- fa-IR
- zh-Hans-CN-u-nu-hanidec
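Trying one of those out is as simple as flipping the attribute. A hypothetical one-liner, assuming you re-run the calendar’s render logic afterward (the lang value is read when the config object is built):

// Swap in any of the locales listed above, then re-render the calendar
document.documentElement.setAttribute('lang', 'fa-IR');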
You might recall how all the days are just one <ol> with list items. To style these into a readable calendar, we dive into the wonderful world of CSS Grid. In fact, we can repurpose the same grid from a starter calendar template right here on CSS-Tricks, but updated a smidge with the :is() relational pseudo to optimize the code.

Notice that I’m defining configurable CSS variables along the way (and prefixing them with --kalel- to avoid conflicts).
kal-el :is(ol, ul) {
  display: grid;
  font-size: var(--kalel-fz, small);
  grid-row-gap: var(--kalel-row-gap, .33em);
  grid-template-columns: var(--kalel-gtc, repeat(7, 1fr));
  list-style: none;
  margin: unset;
  padding: unset;
  position: relative;
}
Let’s draw borders around the date numbers to help separate them visually:
kal-el :is(ol, ul) li {
  border-color: var(--kalel-li-bdc, hsl(0, 0%, 80%));
  border-style: var(--kalel-li-bds, solid);
  border-width: var(--kalel-li-bdw, 0 0 1px 0);
  grid-column: var(--kalel-li-gc, initial);
  text-align: var(--kalel-li-tal, end);
}
The seven-column grid works fine when the first day of the month is also the first day of the week for the selected locale. But that’s the exception rather than the rule. Most times, we’ll need to shift the first day of the month to a different weekday.
Remember all the extra data-* attributes we defined when writing our markup? We can hook into those to update which grid column (--kalel-li-gc) the first date number of the month is placed on:
[data-firstday="1"] [data-day="3"]:first-child {
  --kalel-li-gc: 1 / 4;
}
In this case, we’re spanning from the first grid column to the fourth grid column — which will automatically “push” the next item (Day 2) to the fifth grid column, and so forth.
Let’s add a little style to the “current” date, so it stands out. These are just my styles. You can totally do what you’d like here.
[data-today] {
  --kalel-day-bdrs: 50%;
  --kalel-day-bg: hsl(0, 86%, 40%);
  --kalel-day-hover-bgc: hsl(0, 86%, 70%);
  --kalel-day-c: #fff;
}
I like the idea of styling the date numbers for weekends differently than weekdays. I’m going to use a reddish color to style those. Note that we can reach for the :not() pseudo-class to select them while leaving the current date alone:
[data-weekend]:not([data-today]) {
  --kalel-day-c: var(--kalel-weekend-c, hsl(0, 86%, 46%));
}
Oh, and let’s not forget the week numbers that go before the first date number of each week. We used a data-weeknumber attribute in the markup for that, but the numbers won’t actually display unless we reveal them with CSS, which we can do on the ::before pseudo-element:
[data-weeknumber]::before {
  display: var(--kalel-weeknumber-d, inline-block);
  content: attr(data-weeknumber);
  position: absolute;
  inset-inline-start: 0;
  /* additional styles */
}
We’re technically done at this point! We can render a calendar grid that shows the dates for the current month, complete with considerations for localizing the data by locale, and ensuring that the calendar uses proper semantics. And all we used was vanilla JavaScript and CSS!
But let’s take this one more step…
Maybe you need to display a full year of dates! So, rather than render the current month, you might want to display all of the month grids for the current year.
Well, the nice thing about the approach we’re using is that we can call the render method as many times as we want and merely change the integer that identifies the month on each instance. Based on the current year, it’s as simple as calling the render method 12 times and changing the integer for the month, i:
[...Array(12).keys()].map(i =>
  render(
    new Date(date.getFullYear(), i, date.getDate()),
    config.locale,
    date.getMonth()
  )
).join('')
It’s probably a good idea to create a new parent wrapper for the rendered year. Each calendar grid is a <kal-el> element. Let’s call the new parent wrapper <jor-el>, where Jor-El is the name of Kal-El’s father.
<jor-el id="app" data-year="true">
  <kal-el data-firstday="7">
    <!-- etc. -->
  </kal-el>
  <!-- other months -->
</jor-el>
We can use <jor-el> to create a grid for our grids. So meta!
jor-el {
  background: var(--jorel-bg, none);
  display: var(--jorel-d, grid);
  gap: var(--jorel-gap, 2.5rem);
  grid-template-columns: var(--jorel-gtc, repeat(auto-fill, minmax(320px, 1fr)));
  padding: var(--jorel-p, 0);
}
I read an excellent book called Making and Breaking the Grid the other day and stumbled on this beautiful “New Year’s poster”:
I figured we could do something similar without changing anything in the HTML or JavaScript. I’ve taken the liberty of including full names for months, and numbers instead of day names, to make it more readable. Enjoy!
5 Mistakes I Made When Starting My First React Project
That’s how it was for me the first time I created a React project — and React is one of those frameworks with remarkable documentation, especially now with the beta docs. But I still struggled my way through. It’s been quite a while since that project, but the lessons I gained from it are still fresh in my mind. And even though there are a lot of React “how-to” tutorials out there, I thought I’d share what I wish I knew when I first used it.
So, that’s what this article is — a list of the early mistakes I made. I hope they help make learning React a lot smoother for you.
TL;DR Use Vite or Parcel.
Create React App (CRA) is a tool that helps you set up a new React project. It creates a development environment with the best configuration options for most React projects. This means you don’t have to spend time configuring anything yourself.
As a beginner, this seemed like a great way to start my work! No configuration! Just start coding!
CRA uses two popular packages to achieve this, webpack and Babel. webpack is a web bundler that optimizes all of the assets in your project, such as JavaScript, CSS, and images. Babel is a tool that allows you to use newer JavaScript features, even if some browsers don’t support them.
Both are good, but there are newer tools that can do the job better, specifically Vite and Speedy Web Compiler (SWC).
These new and improved alternatives are faster and easier to configure than webpack and Babel, which makes it easier to adjust the configuration — something that’s difficult to do in create-react-app without ejecting.
To use them when setting up a new React project, you have to make sure you have Node version 12 or higher installed, then run the following command:
npm create vite
You’ll be asked to pick a name for your project. Once you do that, select React from the list of frameworks. After that, you can select either JavaScript + SWC or TypeScript + SWC. Then you’ll have to change directory (cd) into your project and run the following command:
npm i && npm run dev
This should run a development server for your site with the URL localhost:5173. And it’s as simple as that.
Using defaultProps for default values

TL;DR Use default function parameters instead.
Data can be passed to React components through something called props. These are added to a component just like attributes in an HTML element and can be used in a component’s definition by taking the relevant values from the props object passed in as an argument.
// App.jsx
export default function App() {
  return <Card title="Hello" description="world" />
}

// Card.jsx
function Card(props) {
  return (
    <div>
      <h1>{props.title}</h1>
      <p>{props.description}</p>
    </div>
  );
}

export default Card;
If a default value is ever required for a prop, the defaultProps property can be used:
// Card.jsx
function Card(props) {
  // ...
}

Card.defaultProps = {
  title: 'Default title',
  description: 'Desc',
};

export default Card;
With modern JavaScript, it is possible to destructure the props object and assign a default value to it all in the function argument.
// Card.jsx
function Card({ title = "Default title", description = "Desc" }) {
  return (
    <div>
      <h1>{title}</h1>
      <p>{description}</p>
    </div>
  )
}

export default Card;
This is more favorable, as the code can be read by modern browsers without the need for extra transformation.
Unfortunately, defaultProps do require some transformation to be read by the browser since JSX (JavaScript XML) isn’t supported out of the box. This could potentially affect the performance of an application that is using a lot of defaultProps.
Using propTypes

TL;DR Use TypeScript.
In React, the propTypes property can be used to check if a component is being passed the correct data type for its props. They allow you to specify the type of data that should be used for each prop, such as a string, number, object, etc. They also allow you to specify if a prop is required or not.
This way, if a component is passed the wrong data type or if a required prop is not being provided, then React will throw an error.
// Card.jsx
import { PropTypes } from "prop-types";

function Card(props) {
  // ...
}

Card.propTypes = {
  title: PropTypes.string.isRequired,
  description: PropTypes.string,
};

export default Card;
TypeScript provides a level of type safety in data that’s being passed to components. So, sure, propTypes were a good idea back when I was starting. However, now that TypeScript has become the go-to solution for type safety, I would highly recommend using it over anything else.
// Card.tsx
interface CardProps {
  title: string,
  description?: string,
}

export default function Card(props: CardProps) {
  // ...
}
TypeScript is a programming language that builds on top of JavaScript by adding static type-checking. TypeScript provides a more powerful type system, that can catch more potential bugs and improves the development experience.
TL;DR: Write components as functions
Class components in React are created using JavaScript classes. They have a more object-oriented structure, as well as a few additional features, like the ability to use the this keyword and lifecycle methods.
// Card.jsx
class Card extends React.Component {
  render() {
    return (
      <div>
        <h1>{this.props.title}</h1>
        <p>{this.props.description}</p>
      </div>
    )
  }
}

export default Card;
I preferred writing components with classes over functions back then, but JavaScript classes are more difficult for beginners to understand, and this can get very confusing. Instead, I’d recommend writing components as functions:
// Card.jsx
function Card(props) {
  return (
    <div>
      <h1>{props.title}</h1>
      <p>{props.description}</p>
    </div>
  )
}

export default Card;
Function components are simply JavaScript functions that return JSX. They are much easier to read and do not have additional features like the this keyword and lifecycle methods, which makes them more performant than class components.
Function components also have the advantage of using hooks. React Hooks allow you to use state and other React features without writing a class component, making your code more readable, maintainable and reusable.
TL;DR: There’s no need to do it, unless you need hooks.
Since React 17 was released in 2020, it’s now unnecessary to import React at the top of your file whenever you create a component.
import React from 'react'; // Not needed!
export default function Card() {}
But we had to do that before React 17 because the JSX transformer (the thing that converts JSX into regular JavaScript) used a method called React.createElement that would only work when importing React. Since then, a new transformer has been released which can transform JSX without the createElement method.
You will still need to import React to use hooks, fragments, and any other functions or components you might need from the library:
import { useState } from 'react';

export default function Card() {
  const [count, setCount] = useState(0);
  // ...
}
Maybe “mistake” is too harsh a word since some of the better practices came about later. Still, I see plenty of instances where the “old” way of doing something is still being actively used in projects and other tutorials.
To be honest, I probably made way more than five mistakes when getting started. Anytime you reach for a new tool it is going to be more like a learning journey to use it effectively, rather than flipping a switch. But these are the things I still carry with me years later!
If you’ve been using React for a while, what are some of the things you wish you knew before you started? It would be great to get a collection going to help others avoid the same struggles.
Creating a Clock with the New CSS sin() and cos() Trigonometry Functions
CSS trigonometry functions have landed, at least in the latest versions of Firefox and Safari, and two of them are the focus here: sin() and cos().
There are other trigonometry functions in the pipeline — including tan() — so why focus just on sin() and cos()? They happen to be perfect for the idea I have in mind, which is to place text along the edge of a circle. That’s been covered here on CSS-Tricks when Chris shared an approach that uses a Sass mixin. That was six years ago, so let’s give it the bleeding edge treatment.
Here’s what I have in mind. Again, it’s only supported in Firefox and Safari at the moment:
So, it’s not exactly like words forming a circular shape, but we are placing text characters along the circle to form a clock face. Here’s some markup we can use to kick things off:
<div class="clock">
  <div class="clock-face">
    <time datetime="12:00">12</time>
    <time datetime="1:00">1</time>
    <time datetime="2:00">2</time>
    <time datetime="3:00">3</time>
    <time datetime="4:00">4</time>
    <time datetime="5:00">5</time>
    <time datetime="6:00">6</time>
    <time datetime="7:00">7</time>
    <time datetime="8:00">8</time>
    <time datetime="9:00">9</time>
    <time datetime="10:00">10</time>
    <time datetime="11:00">11</time>
  </div>
</div>
Next, here are some super basic styles for the .clock-face container. I decided to use the <time> tag with a datetime attribute.
.clock {
  --_ow: clamp(5rem, 60vw, 40rem);
  --_w: 88cqi;
  aspect-ratio: 1;
  background-color: tomato;
  border-radius: 50%;
  container-type: inline-size;
  display: grid;
  height: var(--_ow);
  place-content: center;
  position: relative;
  width: var(--_ow);
}
I decorated things a bit in there, but only to get the basic shape and background color to help us see what we’re doing. Notice how we save the width value in a CSS variable. We’ll use that later. Not much to look at so far:
It looks like some sort of modern art experiment, right? Let’s introduce a new variable, --_r, to store the circle’s radius, which is equal to half of the circle’s width. This way, if the width (--_w) changes, the radius value (--_r) will also update — thanks to another CSS math function, calc():
.clock {
  --_w: 300px;
  --_r: calc(var(--_w) / 2);
  /* rest of styles */
}
Now, a bit of math. A circle is 360 degrees. We have 12 labels on our clock, so we want to place the numbers every 30 degrees (360 / 12). In math-land, a circle begins at 3 o’clock, so noon is actually minus 90 degrees from that, which is 270 degrees (360 - 90).
Let’s add another variable, --_d, that we can use to set a degree value for each number on the clock face. We’re going to increment the values by 30 degrees to complete our circle:
.clock time:nth-child(1) { --_d: 270deg; }
.clock time:nth-child(2) { --_d: 300deg; }
.clock time:nth-child(3) { --_d: 330deg; }
.clock time:nth-child(4) { --_d: 0deg; }
.clock time:nth-child(5) { --_d: 30deg; }
.clock time:nth-child(6) { --_d: 60deg; }
.clock time:nth-child(7) { --_d: 90deg; }
.clock time:nth-child(8) { --_d: 120deg; }
.clock time:nth-child(9) { --_d: 150deg; }
.clock time:nth-child(10) { --_d: 180deg; }
.clock time:nth-child(11) { --_d: 210deg; }
.clock time:nth-child(12) { --_d: 240deg; }
OK, now’s the time to get our hands dirty with the sin() and cos() functions! What we want to do is use them to get the X and Y coordinates for each number so we can place them properly around the clock face.
The formula for the X coordinate is radius + (radius * cos(degree)). Let’s plug that into our new --_x variable:
--_x: calc(var(--_r) + (var(--_r) * cos(var(--_d))));
The formula for the Y coordinate is radius + (radius * sin(degree)). We have what we need to calculate that:
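As a worked example, take the 12 sitting at 270deg: cos(270deg) = 0 and sin(270deg) = -1, so x = r + r * 0 = r (horizontally centered) and y = r + r * (-1) = 0 (the very top of the circle), which is exactly where noon belongs.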
There are a few housekeeping things we need to do to set up the numbers, so let’s put some basic styling on them to make sure they are absolutely positioned and placed with our coordinates:
.clock-face time {
  --_x: calc(var(--_r) + (var(--_r) * cos(var(--_d))));
  --_y: calc(var(--_r) + (var(--_r) * sin(var(--_d))));
  --_sz: 12cqi;
  display: grid;
  height: var(--_sz);
  left: var(--_x);
  place-content: center;
  position: absolute;
  top: var(--_y);
  width: var(--_sz);
}
Notice --_sz, which we’ll use for the width and height of the numbers in a moment. Let’s see what we have so far.
This definitely looks more like a clock! See how the top-left corner of each number is positioned at the correct place around the circle? We need to “shrink” the radius when calculating the positions for each number. We can deduct the size of a number (--_sz) from the size of the circle (--_w) before we calculate the radius:
--_r: calc((var(--_w) - var(--_sz)) / 2);
Much better! Let’s change the colors, so it looks more elegant:
We could stop right here! We accomplished the goal of placing text around a circle, right? But what’s a clock without arms to show hours, minutes, and seconds?
Let’s use a single CSS animation for that. First, let’s add a few more elements to our markup:
<div class="clock">
  <!-- after <time>-tags -->
  <span class="arm seconds"></span>
  <span class="arm minutes"></span>
  <span class="arm hours"></span>
  <span class="arm center"></span>
</div>
Then some common styles for all three arms. Again, most of this is just to make sure the arms are absolutely positioned and placed accordingly:
.arm {
  background-color: var(--_abg);
  border-radius: calc(var(--_aw) * 2);
  display: block;
  height: var(--_ah);
  left: calc((var(--_w) - var(--_aw)) / 2);
  position: absolute;
  top: calc((var(--_w) / 2) - var(--_ah));
  transform: rotate(0deg);
  transform-origin: bottom;
  width: var(--_aw);
}
We’ll use the same animation for all three arms:
@keyframes turn {
  to {
    transform: rotate(1turn);
  }
}
The only difference is the time the individual arms take to make a full turn. For the hours arm, it takes 12 hours to make a full turn. The animation-duration property only accepts values in milliseconds and seconds. Let’s stick with seconds, which is 43,200 seconds (60 seconds * 60 minutes * 12 hours).
animation: turn 43200s infinite;
It takes 1 hour for the minutes arm to make a full turn. But we want this to be a multi-step animation so the movement between the arms is staggered rather than linear. We’ll need 60 steps, one for each minute:
animation: turn 3600s steps(60, end) infinite;
The seconds arm is almost the same as the minutes arm, but the duration is 60 seconds instead of 60 minutes:
animation: turn 60s steps(60, end) infinite;
Let’s update the properties we created in the common styles:
.seconds {
  --_abg: hsl(0, 5%, 40%);
  --_ah: 145px;
  --_aw: 2px;
  animation: turn 60s steps(60, end) infinite;
}

.minutes {
  --_abg: #333;
  --_ah: 145px;
  --_aw: 6px;
  animation: turn 3600s steps(60, end) infinite;
}

.hours {
  --_abg: #333;
  --_ah: 110px;
  --_aw: 6px;
  animation: turn 43200s linear infinite;
}
What if we want to start at the current time? We need a little bit of JavaScript:
const time = new Date();
const hour = -3600 * (time.getHours() % 12);
const mins = -60 * time.getMinutes();
app.style.setProperty('--_dm', `${mins}s`);
app.style.setProperty('--_dh', `${(hour+mins)}s`);
I’ve added id="app" to the clock face and set two new custom properties on it that set a negative animation-delay, as Mate Marschalko did when he shared a CSS-only clock. The getHours() method of JavaScript’s Date object uses the 24-hour format, so we use the remainder operator to convert it into the 12-hour format.
In the CSS, we need to add the animation-delay as well:
.minutes {
  animation-delay: var(--_dm, 0s);
  /* other styles */
}

.hours {
  animation-delay: var(--_dh, 0s);
  /* other styles */
}
Just one more thing. Using CSS @supports and the properties we’ve already created, we can provide a fallback to browsers that do not support sin() and cos(). (Thank you, Temani Afif!):
@supports not (left: calc(1px * cos(45deg))) {
  time {
    left: 50% !important;
    top: 50% !important;
    transform: translate(-50%, -50%) rotate(var(--_d)) translate(var(--_r)) rotate(calc(-1 * var(--_d)));
  }
}
And, voilà! Our clock is done! Here’s the final demo one more time. Again, it’s only supported in Firefox and Safari at the moment.
Just messing around here, but we can quickly turn our clock into a circular image gallery by replacing the <time> tags with <img>, then updating the width (--_w) and radius (--_r) values:
Let’s try one more. I mentioned earlier how the clock looked kind of like a modern art experiment. We can lean into that and re-create a pattern I saw on a poster (that I unfortunately didn’t buy) in an art gallery the other day. As I recall, it was called “Moon” and consisted of a bunch of dots forming a circle.
We’ll use an unordered list this time since the circles don’t follow a particular order. We’re not even going to put all the list items in the markup. Instead, let’s inject them with JavaScript and add a few controls we can use to manipulate the final result.
The controls are range inputs (<input type="range">), which we’ll wrap in a <form> and listen to for the input event.
<form id="controls">
  <fieldset>
    <label>Number of rings
      <input type="range" min="2" max="12" value="10" id="rings" />
    </label>
    <label>Dots per ring
      <input type="range" min="5" max="12" value="7" id="dots" />
    </label>
    <label>Spread
      <input type="range" min="10" max="40" value="40" id="spread" />
    </label>
  </fieldset>
</form>
We’ll run this method on input, which will create a bunch of <li> elements with the degree (--_d) variable we used earlier applied to each one. We can also repurpose our radius variable (--_r).
I also want the dots to be different colors. So, let’s randomize (well, not completely randomize) the HSL color value for each list item and store it as a new CSS variable, --_bgc:
const update = () => {
  let s = "";
  for (let i = 1; i <= rings.valueAsNumber; i++) {
    const r = spread.valueAsNumber * i;
    const theta = coords(dots.valueAsNumber * i);
    for (let j = 0; j < theta.length; j++) {
      s += `<li style="--_d:${theta[j]};--_r:${r}px;--_bgc:hsl(${random(
        50,
        25
      )},${random(90, 50)}%,${random(90, 60)}%)"></li>`;
    }
  }
  app.innerHTML = s;
}
The random() method picks a value within a defined range of numbers:
const random = (max, min = 0, f = true) => f ? Math.floor(Math.random() * (max - min) + min) : Math.random() * max;
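For example, with the helper as written:

random(90, 50);        // an integer from 50 up to 89
random(90, 50, false); // a float from 0 up to 90 (min is ignored when f is false)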
And that’s it. We use JavaScript to render the markup, but as soon as it’s rendered, we don’t really need it. The sin() and cos() functions help us position all the dots in the right spots.
Placing things around a circle is a pretty basic example to demonstrate the powers of trigonometry functions like sin() and cos(). But it’s really cool that we are getting modern CSS features that provide new solutions for old workarounds. I’m sure we’ll see way more interesting, complex, and creative use cases, especially as browser support comes to Chrome and Edge.
Managing Fonts in WordPress Block Themes
That’s what we’re going to look at in this article. Block themes can indeed use Google Fonts, but the process for registering them is way different than what you might have done before in classic themes.
As I said, there’s little for us to go on as far as getting started. The Twenty Twenty-Two theme is the first block-based default WordPress theme, and it demonstrates how we can use downloaded font files as assets in the theme. But it’s pretty unwieldy because it involves a couple of steps: (1) register the files in the functions.php file and (2) define the bundled fonts in the theme.json file.
Since Twenty Twenty-Two was released, though, the process has gotten simpler. Bundled fonts can now be defined without registering them, as shown in the Twenty Twenty-Three theme. However, the process still requires us to manually download font files and bundle them into the themes. That’s a hindrance that sort of defeats the purpose of simple, drop-in, hosted fonts that are served on a speedy CDN.
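For reference, a bundled font defined the Twenty Twenty-Three way lives under settings.typography.fontFamilies in theme.json. It looks roughly like this, where the family name, weights, and file path are all illustrative:

{
  "settings": {
    "typography": {
      "fontFamilies": [
        {
          "name": "Inter",
          "slug": "inter",
          "fontFamily": "\"Inter\", sans-serif",
          "fontFace": [
            {
              "fontFamily": "Inter",
              "fontStyle": "normal",
              "fontWeight": "300 900",
              "src": [ "file:./assets/fonts/inter.woff2" ]
            }
          ]
        }
      ]
    }
  }
}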
If you didn’t already know, the Gutenberg project is an experimental plugin where features being developed for the WordPress Block and Site Editor are available for early use and testing. In a recent Theme Shaper article, Gutenberg project lead architect Matias Ventura discusses how Google Fonts — or any other downloaded fonts, for that matter — can be added to block themes using the Create Block Theme plugin.
This short video at Learn WordPress provides a good overview of the Create Block Theme plugin and how it works. But the bottom line is that it does what it says on the tin: it creates block themes. But it does it by providing controls in the WordPress UI that allow you to create an entire theme, child theme, or a theme style variation without writing any code or ever having to touch template files.
I’ve given it a try! And since Create Block Theme is authored and maintained by the WordPress.org team, I’d say it’s the best direction we have for integrating Google Fonts into a theme. That said, it’s definitely worth noting that the plugin is in active development. That means things could change pretty quickly.
Before I get to how it all works, let’s first briefly refresh ourselves with the “traditional” process for adding Google Fonts to classic WordPress themes.
This ThemeShaper article from 2014 provides an excellent example of how we used to do this in classic PHP themes, as does this newer Cloudways article by Ibad Ur Rehman.
To refresh our memory, here is an example from the default Twenty Seventeen theme showing how Google fonts are enqueued in the functions.php
file.
function twentyseventeen_fonts_url() {
  $fonts_url = '';

  /**
   * Translators: If there are characters in your language that are not
   * supported by Libre Franklin, translate this to 'off'. Do not translate
   * into your own language.
   */
  $libre_franklin = _x( 'on', 'libre_franklin font: on or off', 'twentyseventeen' );

  if ( 'off' !== $libre_franklin ) {
    $font_families = array();

    $font_families[] = 'Libre Franklin:300,300i,400,400i,600,600i,800,800i';

    $query_args = array(
      'family' => urlencode( implode( '|', $font_families ) ),
      'subset' => urlencode( 'latin,latin-ext' ),
    );

    $fonts_url = add_query_arg( $query_args, 'https://fonts.googleapis.com/css' );
  }

  return esc_url_raw( $fonts_url );
}
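That function only builds the URL. The stylesheet itself is enqueued elsewhere in the theme, along these lines (a sketch of how Twenty Seventeen wires it up):

// Enqueue the Google Fonts stylesheet built by twentyseventeen_fonts_url().
wp_enqueue_style( 'twentyseventeen-fonts', twentyseventeen_fonts_url(), array(), null );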
Then Google Fonts is pre-connected to the theme like this:
function twentyseventeen_resource_hints( $urls, $relation_type ) {
  if ( wp_style_is( 'twentyseventeen-fonts', 'queue' ) && 'preconnect' === $relation_type ) {
    $urls[] = array(
      'href' => 'https://fonts.gstatic.com',
      'crossorigin',
    );
  }
  return $urls;
}
add_filter( 'wp_resource_hints', 'twentyseventeen_resource_hints', 10, 2 );
Great, right? There’s a hitch, however. In January 2022, a German regional court imposed a fine on a website owner for violating Europe’s GDPR requirements. The issue? Enqueuing Google Fonts on the site exposed a visitor’s IP address, jeopardizing user privacy. CSS-Tricks covered this a while back.
The Create Block Theme plugin satisfies GDPR privacy requirements because it uses the Google Fonts API only to download the font files, which are then bundled with the theme and self-hosted. The fonts are served to the user from the same website rather than from Google’s servers, protecting privacy. WP Tavern discusses the German court ruling and includes links to guides for self-hosting Google Fonts.
This brings us to today’s “modern” way of using Google Fonts with WordPress block themes. First, let’s set up a local test site. I use Flywheel’s Local app for local development. You can use that or whatever you prefer, then use the Theme Test Data plugin by the WordPress Themes Team to work with dummy content. And, of course, you’ll want the Create Block Theme plugin in there as well.
Have you installed and activated those plugins? If so, navigate to Appearance → Manage theme fonts from the WordPress admin menu.
The “Manage theme fonts” screen displays a list of any fonts already defined in the theme’s theme.json
file. There are also two options at the top of the screen: Add Google Font and Add Local Font.
I’m using a completely blank theme by WordPress called Emptytheme. You’re welcome to roll along with your own theme, but I wanted to call out that I’ve renamed Emptytheme to “EMPTY-BLANK” and modified it, so there are no predefined fonts and styles at all.
I thought I’d share a screenshot of my theme’s file structure and theme.json
file to show that there are literally no styles or configurations going on.
The theme’s file structure (left) and theme.json file (right)

Let’s click the “Add Google Fonts” button. It takes us to a new page with options to choose any available font from the current Google Fonts API.
For this demo, I selected Inter from the menu of options and selected the 300, Regular, and 900 weights from the preview screen:
Once I’ve saved my selections, the Inter font styles I selected are automatically downloaded and stored in the theme’s assets/fonts
folder:
Notice, too, how those selections have been automatically written to the theme.json
file in that screenshot. The Create Block Theme plugin even adds the path to the font files.
Here’s the updated theme.json code:

{
  "version": 2,
  "settings": {
    "appearanceTools": true,
    "layout": {
      "contentSize": "840px",
      "wideSize": "1100px"
    },
    "typography": {
      "fontFamilies": [
        {
          "fontFamily": "Inter",
          "slug": "inter",
          "fontFace": [
            {
              "fontFamily": "Inter",
              "fontStyle": "normal",
              "fontWeight": "300",
              "src": [ "file:./assets/fonts/inter_300.ttf" ]
            },
            {
              "fontFamily": "Inter",
              "fontStyle": "normal",
              "fontWeight": "900",
              "src": [ "file:./assets/fonts/inter_900.ttf" ]
            },
            {
              "fontFamily": "Inter",
              "fontStyle": "normal",
              "fontWeight": "400",
              "src": [ "file:./assets/fonts/inter_regular.ttf" ]
            }
          ]
        }
      ]
    }
  }
}
If we go to the Create Block Theme’s main screen and click the Manage theme fonts button again, we will see Inter’s 300, 400 (Regular), and 900 weight variants displayed in the preview panel.
A demo text preview box at the top even lets you preview the selected fonts in a sentence, a heading, and a paragraph, with a font-size selection slider. You can check out this new feature in action in this GitHub video.
The selected font(s) are also available in the Site Editor Global Styles (Appearance → Editor), specifically in the Design panel.
From here, navigate to Templates → Index and click the blue Edit button to edit the index.html
template. We want to open the Global Styles settings, which are represented as a contrast icon located at the top-right of the screen. When we click the Text settings and open the Font menu in the Typography section… we see Inter!
We may as well look at adding local fonts to a theme since the Create Block Theme plugin provides that option. The benefit is that you can use any font file you want from whatever font service you prefer.
Without the plugin, we’d have to grab our font files, drop them somewhere in the theme folder, then resort to the traditional PHP route of enqueuing them in the functions.php
file. But we can let WordPress carry that burden for us by uploading the font file on the Add local fonts screen using the Create Block Theme interface. Once a file is selected for upload, the font face definition fields are filled in automatically.
Even though we can use any .ttf
, .woff
, or .woff2
file, I simply downloaded Open Sans font files from Google Fonts for this exercise. I snatched two weight variations, regular and 800.
The same auto-magical file management and theme.json
update we saw with the Google Fonts option happens once again when we upload the font files (which are done one at a time). Check out where the fonts landed in my theme folder and how they are added to theme.json
:
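Here’s roughly what the added entry looks like after both uploads (a sketch modeled on the Inter example above; the exact file names depend on the files you upload):

{
  "fontFamily": "Open Sans",
  "slug": "open-sans",
  "fontFace": [
    {
      "fontFamily": "Open Sans",
      "fontStyle": "normal",
      "fontWeight": "400",
      "src": [ "file:./assets/fonts/open-sans_regular.ttf" ]
    },
    {
      "fontFamily": "Open Sans",
      "fontStyle": "normal",
      "fontWeight": "800",
      "src": [ "file:./assets/fonts/open-sans_800.ttf" ]
    }
  ]
}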
The plugin also allows us to remove font files from a block theme from the WordPress admin. Let’s delete one of the Open Sans variants we installed in the last section to see how that works.
Clicking the Remove links triggers a warning for you to confirm the deletion. We’ll click OK to continue.
Let’s open our theme folder and check the theme.json
file. Sure enough, the Open Sans 800 file we deleted on the plugin screen removed the font file from the theme folder, and the reference to it is long gone in theme.json
.
There’s talk going on about adding this “Font Manager” feature to WordPress Core rather than needing a separate plugin.
An initial iteration of the feature is available in the repo, and it uses the exact same approach we used in this article. It should be GDPR-compliant, too. The feature is scheduled to land with the WordPress 6.3 release later this year.
The Create Block Theme plugin significantly enhances the user experience when it comes to handling fonts in WordPress block themes. The plugin allows us to add or delete any fonts while respecting GDPR requirements.
We saw how selecting a Google Font or uploading a local font file automatically places the font in the theme folder and registers it in the theme.json
file. We also saw how the font is an available option in the Global Styles settings in the Site Editor. And if we need to remove a font? The plugin totally takes care of that as well — without touching theme files or code.
Thanks for reading! If you have any comments or suggestions, share them in the comments. I’d love to know what you think of this possible direction for font management in WordPress.
I relied on a lot of research to write this article and thought I’d share the articles and resources I used to provide you with additional context.
Managing Fonts in WordPress Block Themes originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
]]>::marker
section of the article. The built-in list markers are bullets, ordinal numbers, and letters. The ::marker
pseudo-element allows us to style these markers or replace them with a custom character or image.
::marker {
content: url('/marker.svg') ' ';
}
The example that caught my attention uses an SVG icon as a custom marker for the list items. But there’s also a single space character (" "
) in the CSS value next to the url()
function. The purpose of this space seems to be to insert a gap after the custom marker.
When I saw this code, I immediately wondered if there was a better way to create the gap. Appending a space to content
feels more like a workaround than the optimal solution. CSS provides margin
and padding
and other standard ways to space out elements on the page. Could none of these properties be used in this situation?
First, I tried to substitute the space character with a proper margin:
::marker {
content: url('/marker.svg');
margin-right: 1ch;
}
This didn’t work. As it turns out, ::marker
only supports a small set of mostly text-related CSS properties. For example, you can change the font-size
and color
of the marker, and define a custom marker by setting content
to a string or URL, as shown above. But the margin
and padding
properties are not supported, so setting them has no effect. What a disappointment.
Could it really be that a space character is the only way to insert a gap after a custom marker? I needed to find out. As I researched this topic, I made a few interesting discoveries that I’d like to share in this article.
First, let’s confirm what margin
and padding
do on the <ul>
and <li>
elements. I’ve created a test page for this purpose. Drag the relevant sliders and observe the effect on the spacing on each side of the list marker. Tip: Use the Reset button liberally to reset all controls to their initial values.
Note: Browsers apply a default padding-inline-left
of 40px
to <ol>
and <ul>
elements. The logical padding-inline-left
property is equivalent to the physical padding-left
property in writing systems with a left-to-right inline direction. In this article, I’m going to use physical properties for the sake of simplicity.
As you can see, padding-left
on <li>
increases the gap after the list marker. The other three properties control the spacing to the left of the marker, in other words, the indentation of the list item.
Notice that even when the list item’s padding-left
is 0px
, there is still a minimum gap after the marker. This gap cannot be decreased with margin
or padding
. The exact length of the minimum gap depends on the browser.
To sum up, the list item’s content is positioned at a browser-specific minimum distance from the marker, and this gap can be further increased by adding a padding-left
to <li>
.
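In code, that boils down to a single declaration:

li {
  padding-left: 1ch; /* widens the gap between the marker and the content */
}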
Next, let’s see what happens when we position the marker inside the list item.
The list-style-position
property accepts two keywords: outside
, which is the default, and inside
, which moves the marker inside the list item. The latter is useful for creating designs with full-width list items.
If the marker is now inside the list item, does this mean that padding-left
on <li>
no longer increases the gap after the marker? Let’s find out. On my test page, turn on list-style-position: inside
via the checkbox. How are the four padding
and margin
properties affected by this change?
As you can see, padding-left
on <li>
now increases the spacing to the left of the marker. This means that we’ve lost the ability to increase the gap after the marker. In this situation, it would be useful to be able to add margin-right
to the ::marker
itself, but that doesn’t work, as we’ve established above.
Additionally, there’s a bug in Chromium that causes the gap after the marker to triple after switching to inside
positioning. By default, the length of the gap is about one-third of the text size. So at a default font-size
of 16px
, the gap is about 5.5px
. After switching to inside
, the gap grows to the full 16px
in Chrome. This bug affects the disc
, circle
, and square
markers, but not ordinal number markers.
The following image shows the default rendering of outside and inside-positioned list markers across three major browsers on macOS. For your convenience, I’ve horizontally aligned all list items on their markers to make it easier to compare the differences in gap sizes.
To sum up, switching to list-style-position: inside
introduces two problems. We can no longer increase the gap via padding-left
on <li>
, and the gap size is inconsistent between browsers.
Finally, let’s see what happens when we replace the default list marker with a custom marker.
There are two ways to define a custom marker:
- the list-style-type and list-style-image properties
- the content property on the ::marker pseudo-element

The content property is more powerful. For example, it allows us to use the counter() function to access the list item’s ordinal number (the implicit list-item counter) and decorate it with custom strings.
Unfortunately, Safari doesn’t support the content
property on ::marker
yet (WebKit bug). For this reason, I’m going to use the list-style-type
property to define the custom marker. You can still use the ::marker
selector to style the custom marker declared via list-style-type
. That aspect of ::marker
is supported in Safari.
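Put together, that Safari-friendly combination looks something like this minimal sketch:

ul {
  list-style-type: "•"; /* custom marker via list-style-type */
}
li::marker {
  /* styling the marker works in Safari, even though
     setting content on ::marker does not */
  color: tomato;
}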
Any Unicode character can potentially serve as a custom list marker, but only a small set of characters actually have “Bullet” in their official name, so I thought I’d compile them here for reference.
Character | Name | Code point | CSS keyword
--- | --- | --- | ---
• | Bullet | U+2022 | disc
‣ | Triangular Bullet | U+2023 |
⁃ | Hyphen Bullet | U+2043 |
⁌ | Black Leftwards Bullet | U+204C |
⁍ | Black Rightwards Bullet | U+204D |
◘ | Inverse Bullet | U+25D8 |
◦ | White Bullet | U+25E6 | circle
☙ | Reversed Rotated Floral Heart Bullet | U+2619 |
❥ | Rotated Heavy Black Heart Bullet | U+2765 |
❧ | Rotated Floral Heart Bullet | U+2767 |
⦾ | Circled White Bullet | U+29BE |
⦿ | Circled Bullet | U+29BF |
Note: The CSS square
keyword does not have a corresponding “Bullet” character in Unicode. The character that comes closest is the Black Small Square (▪️) emoji (U+25AA
).
Now let’s see what happens when we replace the default list marker with list-style-type: "•"
(U+2022
Bullet). This is the same character as the default bullet, so there shouldn’t be any major rendering differences. On my test page, turn on the list-style-type
option and observe any changes to the marker.
As you can see, there are two significant changes:

- The marker is smaller than the default bullet, even though it’s the same character at the same font-size.
- The minimum gap after the marker is gone.

According to CSS Counter Styles Level 3, the default list marker (disc) should be “similar to • U+2022 BULLET”. It seems that browsers increase the size of the default bullet to make it more legible. Firefox even uses a special font, -moz-bullet-font, for the marker.
Can the small size problem be fixed with CSS? On my test page, turn on marker styling and observe what happens when you change the font-size
, line-height
, and font-family
of the marker.
As you can see, increasing the font-size
causes the custom marker to become vertically misaligned, and this cannot be corrected by decreasing the line-height
. The vertical-align
property, which could easily fix this problem, is not supported on ::marker
.
But did you notice that changing the font-family
can cause the marker to become bigger? Try setting it to Tahoma
. This could potentially be a good-enough workaround for the small-size problem, although I haven’t tested which font works best across the major browsers and operating systems.
You may also have noticed that the Chromium bug doesn’t occur anymore when you position the marker inside the list item. This means that a custom marker can serve as a workaround for this bug. And this leads me to the main problem, and the reason why I started researching this topic. If you define a custom marker and position it inside the list item, there is no gap after the marker and no way to insert a gap by standard means.
- ::marker doesn’t support padding or margin.
- padding-left on <li> doesn’t increase the gap, since the marker is positioned inside.

Here’s a summary of all the key facts that I’ve mentioned in the article:
- Browsers apply a default padding-inline-start of 40px to <ul> and <ol> elements.
- There is a minimum gap after built-in markers (disc, decimal, etc.). There is no minimum gap after custom markers (string or URL).
- The gap can be increased by adding a padding-left to <li>, but only if the marker is positioned outside the list item (the default mode).
- Custom string markers render smaller than built-in markers; changing the font-family on ::marker can increase their size.

Looking back at the code example from the beginning of the article, I think I understand now why there’s a space character in the content
value. There is just no better way to insert a gap after the SVG marker. It’s a workaround that is needed because no amount of margin
and padding
can create a gap after a custom marker that is positioned inside the list item. A margin-right
on ::marker
could easily do it, but that is not supported.
Until ::marker
adds support for more properties, web developers will often have no choice but to hide the marker and emulate it with a ::before
pseudo-element. I had to do that myself recently because I couldn’t change the marker’s background-color
. Hopefully, we won’t have to wait too long for a more powerful ::marker
pseudo-element.
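For the record, a minimal sketch of that ::before emulation might look like this:

li {
  list-style: none; /* hide the native marker */
}
li::before {
  content: "•";
  margin-right: 0.5ch; /* margin works on ::before, unlike on ::marker */
}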
Everything You Need to Know About the Gap After the List Marker originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
]]>Inspired by a colleague’s experiments, I recently set about writing a simple auto-loader: Whenever a custom element appears in the DOM, we wanna load the corresponding implementation if it’s not available yet. The browser then takes care of upgrading such elements from there on out.
Chances are you won’t actually need all this; there’s usually a simpler approach. Used deliberately, the techniques shown here might still be a useful addition to your toolset.
For consistency, we want our auto-loader to be a custom element as well — which also means we can easily configure it via HTML. But first, let’s identify those unresolved custom elements, step by step:
class AutoLoader extends HTMLElement {
  connectedCallback() {
    let scope = this.parentNode;
    this.discover(scope);
  }
}

customElements.define("ce-autoloader", AutoLoader);
Assuming we’ve loaded this module up-front (using async
is ideal), we can drop a <ce-autoloader>
element into the <body>
of our document. That will immediately start the discovery process for all child elements of <body>
, which now constitutes our root element. We could limit discovery to a subtree of our document by adding <ce-autoloader>
to the respective container element instead — indeed, we might even have multiple instances for different subtrees.
Of course, we still have to implement that discover
method (as part of the AutoLoader
class above):
discover(scope) {
  let candidates = [scope, ...scope.querySelectorAll("*")];
  for(let el of candidates) {
    let tag = el.localName;
    if(tag.includes("-") && !customElements.get(tag)) {
      this.load(tag);
    }
  }
}
Here we check our root element along with every single descendant (*
). If it’s a custom element — as indicated by hyphenated tags — but not yet upgraded, we’ll attempt to load the corresponding definition. Querying the DOM that way might be expensive, so we should be a little careful. We can alleviate load on the main thread by deferring this work:
connectedCallback() {
  let scope = this.parentNode;
  requestIdleCallback(() => {
    this.discover(scope);
  });
}
requestIdleCallback
is not universally supported yet, but we can use requestAnimationFrame
as a fallback:
let defer = window.requestIdleCallback || requestAnimationFrame;

class AutoLoader extends HTMLElement {
  connectedCallback() {
    let scope = this.parentNode;
    defer(() => {
      this.discover(scope);
    });
  }

  // ...
}
Now we can move on to implementing the missing load
method to dynamically inject a <script>
element:
load(tag) {
  let el = document.createElement("script");
  let res = new Promise((resolve, reject) => {
    el.addEventListener("load", ev => {
      resolve(null);
    });
    el.addEventListener("error", ev => {
      reject(new Error("failed to locate custom-element definition"));
    });
  });
  el.src = this.elementURL(tag);
  document.head.appendChild(el);
  return res;
}

elementURL(tag) {
  return `${this.rootDir}/${tag}.js`;
}
Note the hard-coded convention in elementURL
. The src
attribute’s URL assumes there’s a directory where all custom element definitions reside (e.g. <my-widget>
→ /components/my-widget.js
). We could come up with more elaborate strategies, but this is good enough for our purposes. Relegating this URL to a separate method allows for project-specific subclassing when needed:
class FancyLoader extends AutoLoader {
  elementURL(tag) {
    // fancy logic
  }
}
Either way, note that we’re relying on this.rootDir
. This is where the aforementioned configurability comes in. Let’s add a corresponding getter:
get rootDir() {
  let uri = this.getAttribute("root-dir");
  if(!uri) {
    throw new Error("cannot auto-load custom elements: missing `root-dir`");
  }
  if(uri.endsWith("/")) { // remove trailing slash
    return uri.substring(0, uri.length - 1);
  }
  return uri;
}
You might be thinking of observedAttributes
now, but that doesn’t really make things easier. Plus updating root-dir
at runtime seems like something we’re never going to need.
Now we can — and must — configure our elements directory: <ce-autoloader root-dir="/components">
.
With this, our auto-loader can do its job. Except it only works once, for elements that already exist when the auto-loader is initialized. We’ll probably want to account for dynamically added elements as well. That’s where MutationObserver
comes into play:
connectedCallback() {
  let scope = this.parentNode;
  defer(() => {
    this.discover(scope);
  });

  let observer = this._observer = new MutationObserver(mutations => {
    for(let { addedNodes } of mutations) {
      for(let node of addedNodes) {
        defer(() => {
          this.discover(node);
        });
      }
    }
  });
  observer.observe(scope, { subtree: true, childList: true });
}

disconnectedCallback() {
  this._observer.disconnect();
}
This way, the browser notifies us whenever a new element appears in the DOM — or rather, our respective subtree — which we then use to restart the discovery process. (You might argue we’re re-inventing custom elements here, and you’d be kind of correct.)
Our auto-loader is now fully functional. Future enhancements might look into potential race conditions and investigate optimizations. But chances are this is good enough for most scenarios. Let me know in the comments if you have a different approach and we can compare notes!
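To tie it all together, here is a hypothetical definition module the loader would fetch, say /components/my-widget.js for the first unresolved <my-widget> in the subtree:

// /components/my-widget.js (hypothetical example)
// The auto-loader injects a <script> pointing here the first time
// an unresolved <my-widget> appears in the observed subtree.
class MyWidget extends HTMLElement {
  connectedCallback() {
    this.textContent = "Hello from my-widget!";
  }
}

customElements.define("my-widget", MyWidget);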
An Approach to Lazy Loading Custom Elements originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
]]>Different Ways to Get CSS Gradient Shadows originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
]]>But first… another article about gradient shadows? Really?
Yes, this is yet another post on the topic, but it is different. Together, we’re going to push the limits to get a solution that covers something I haven’t seen anywhere else: transparency. Most of the tricks work if the element has a non-transparent background but what if we have a transparent background? We will explore this case here!
Before we start, let me introduce my gradient shadows generator. All you have to do is to adjust the configuration, and get the code. But follow along because I’m going to help you understand all the logic behind the generated code.
Let’s start with the solution that’ll work for 80% of most cases. The most typical case: you are using an element with a background, and you need to add a gradient shadow to it. No transparency issues to consider there.
The solution is to rely on a pseudo-element where the gradient is defined. You place it behind the actual element and apply a blur filter to it.
.box {
  position: relative;
}
.box::before {
  content: "";
  position: absolute;
  inset: -5px; /* control the spread */
  transform: translate(10px, 8px); /* control the offsets */
  z-index: -1; /* place the element behind */
  background: /* your gradient here */;
  filter: blur(10px); /* control the blur */
}
It looks like a lot of code, and that’s because it is. Here’s how we could have done it with a box-shadow
if we were using a solid color instead of a gradient.
box-shadow: 10px 8px 10px 5px orange;
That should give you a good idea of what the values in the first snippet are doing. We have X and Y offsets, the blur radius, and the spread distance. Note that we need a negative value for the spread distance that comes from the inset
property.
Here’s a demo showing the gradient shadow next to a classic box-shadow
:
If you look closely you will notice that both shadows are a little different, especially the blur part. It’s not a surprise because I am pretty sure the filter
property’s algorithm works differently than the one for box-shadow
. That’s not a big deal since the result is, in the end, quite similar.
This solution is good, but still has a few drawbacks related to the z-index: -1
declaration. Yes, there is “stacking context” happening there!
I applied a transform
to the main element, and boom! The shadow is no longer below the element. This is not a bug but the logical result of a stacking context. Don’t worry, I will not start a boring explanation about stacking context (I already did that in a Stack Overflow thread), but I’ll still show you how to work around it.
The first solution that I recommend is to use a 3D transform
:
.box {
  position: relative;
  transform-style: preserve-3d;
}
.box::before {
  content: "";
  position: absolute;
  inset: -5px;
  transform: translate3d(10px, 8px, -1px); /* (X, Y, Z) */
  background: /* .. */;
  filter: blur(10px);
}
Instead of using z-index: -1
, we will use a negative translation along the Z-axis. We will put everything inside translate3d()
. Don’t forget to use transform-style: preserve-3d
on the main element; otherwise, the 3D transform
won’t take effect.
As far as I know, there is no side effect to this solution… but maybe you see one. If that’s the case, share it in the comment section, and let’s try to find a fix for it!
If for some reason you are unable to use a 3D transform
, the other solution is to rely on two pseudo-elements — ::before
and ::after
. One creates the gradient shadow, and the other reproduces the main background (and other styles you might need). That way, we can easily control the stacking order of both pseudo-elements.
.box {
  position: relative;
  z-index: 0; /* We force a stacking context */
}
/* Creates the shadow */
.box::before {
  content: "";
  position: absolute;
  z-index: -2;
  inset: -5px;
  transform: translate(10px, 8px);
  background: /* .. */;
  filter: blur(10px);
}
/* Reproduces the main element styles */
.box::after {
  content: "";
  position: absolute;
  z-index: -1;
  inset: -2px; /* account for the 2px border on the main element */
  /* Inherit all the decorations defined on the main element */
  background: inherit;
  border: inherit;
  box-shadow: inherit;
}
It’s important to note that we are forcing the main element to create a stacking context by declaring z-index: 0
, or any other property that does the same, on it. Also, don’t forget that pseudo-elements consider the padding box of the main element as a reference. So, if the main element has a border, you need to take that into account when defining the pseudo-element styles. You will notice that I am using inset: -2px
on ::after
to account for the border defined on the main element.
As I said, this solution is probably good enough in a majority of cases where you want a gradient shadow, as long as you don’t need to support transparency. But we are here for the challenge and to push the limits, so even if you don’t need what is coming next, stay with me. You will probably learn new CSS tricks that you can use elsewhere.
Let’s pick up where we left off on the 3D transform
and remove the background from the main element. I will start with a shadow that has both offsets and spread distance equal to 0
.
The idea is to find a way to cut or hide everything inside the area of the element (inside the green border) while keeping what is outside. We are going to use clip-path
for that. But you might wonder how clip-path
can make a cut inside an element.
Indeed, there’s no way to do that, but we can simulate it using a particular polygon pattern:
clip-path: polygon(-100vmax -100vmax, 100vmax -100vmax, 100vmax 100vmax, -100vmax 100vmax, -100vmax -100vmax, 0 0, 0 100%, 100% 100%, 100% 0, 0 0);
Tada! We have a gradient shadow that supports transparency. All we did is add a clip-path
to the previous code. Here is a figure to illustrate the polygon part.
The blue area is the visible part after applying the clip-path
. I am only using the blue color to illustrate the concept, but in reality, we will only see the shadow inside that area. As you can see, we have four points defined with a big value (B
). My big value is 100vmax
, but it can be any big value you want. The idea is to ensure we have enough space for the shadow. We also have four points that are the corners of the pseudo-element.
The arrows illustrate the path that defines the polygon. We start from (-B, -B)
until we reach (0,0)
. In total, we need 10 points rather than eight, because two points, (-B,-B) and (0,0), each appear twice in the path.
There’s still one more thing left for us to do, and it’s to account for the spread distance and the offsets. The only reason the demo above works is because it is a particular case where the offsets and spread distance are equal to 0
.
Let’s define the spread and see what happens. Remember that we use inset
with a negative value to do this:
The pseudo-element is now bigger than the main element, so the clip-path
cuts more than we need it to. Remember, we always need to cut the part inside the main element (the area inside the green border of the example). We need to adjust the position of the four points inside of clip-path
.
.box {
  --s: 10px; /* the spread */
  position: relative;
}
.box::before {
  inset: calc(-1 * var(--s));
  clip-path: polygon(
    -100vmax -100vmax,
     100vmax -100vmax,
     100vmax  100vmax,
    -100vmax  100vmax,
    -100vmax -100vmax,
    calc(0px  + var(--s)) calc(0px  + var(--s)),
    calc(0px  + var(--s)) calc(100% - var(--s)),
    calc(100% - var(--s)) calc(100% - var(--s)),
    calc(100% - var(--s)) calc(0px  + var(--s)),
    calc(0px  + var(--s)) calc(0px  + var(--s))
  );
}
We’ve defined a CSS variable, --s
, for the spread distance and updated the polygon points. I didn’t touch the points where I am using the big value. I only update the points that define the corners of the pseudo-element. I increase all the zero values by --s
and decrease the 100%
values by --s
.
It’s the same logic with the offsets. When we translate the pseudo-element, the shadow is out of alignment, and we need to rectify the polygon again and move the points in the opposite direction.
.box {
  --s: 10px; /* the spread */
  --x: 10px; /* X offset */
  --y: 8px;  /* Y offset */
  position: relative;
}
.box::before {
  inset: calc(-1 * var(--s));
  transform: translate3d(var(--x), var(--y), -1px);
  clip-path: polygon(
    -100vmax -100vmax,
     100vmax -100vmax,
     100vmax  100vmax,
    -100vmax  100vmax,
    -100vmax -100vmax,
    calc(0px  + var(--s) - var(--x)) calc(0px  + var(--s) - var(--y)),
    calc(0px  + var(--s) - var(--x)) calc(100% - var(--s) - var(--y)),
    calc(100% - var(--s) - var(--x)) calc(100% - var(--s) - var(--y)),
    calc(100% - var(--s) - var(--x)) calc(0px  + var(--s) - var(--y)),
    calc(0px  + var(--s) - var(--x)) calc(0px  + var(--s) - var(--y))
  );
}
There are two more variables for the offsets: --x
and --y
. We use them inside of transform
and we also update the clip-path
values. We still don’t touch the polygon points with big values, but we offset all the others — we reduce --x
from the X coordinates, and --y
from the Y coordinates.
Now all we have to do is to update a few variables to control the gradient shadow. And while we are at it, let’s also make the blur radius a variable as well:
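Something like this sketch, with a hypothetical --blur variable alongside the others:

.box::before {
  filter: blur(var(--blur, 10px)); /* the blur radius is now configurable */
}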
Do we still need the 3D transform trick?
It all depends on the border. Don’t forget that the reference for a pseudo-element is the padding box, so if you apply a border to your main element, you will have an overlap. You either keep the 3D transform
trick or update the inset
value to account for the border.
Here is the previous demo with an updated inset
value in place of the 3D transform
:
I’d say this is a more suitable way to go because the spread distance will be more accurate, as it starts from the border-box instead of the padding-box. But you will need to adjust the inset
value according to the main element’s border. Sometimes, the border of the element is unknown and you have to use the previous solution.
With the earlier non-transparent solution, it’s possible you will face a stacking context issue. And with the transparent solution, it’s possible you face a border issue instead. Now you have options and ways to work around those issues. The 3D transform trick is my favorite solution because it fixes all the issues (the online generator accounts for it as well).
If you try adding border-radius
to the element when using the non-transparent solution we started with, it is a fairly trivial task. All you need to do is to inherit the same value from the main element, and you are done.
Even if you don’t have a border radius, it’s a good idea to define border-radius: inherit
. That accounts for any potential border-radius
you might want to add later or a border radius that comes from somewhere else.
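In code, that is a one-liner on the pseudo-element:

.box::before {
  border-radius: inherit; /* picks up whatever radius the main element has */
}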
It’s a different story when dealing with the transparent solution. Unfortunately, it means finding another solution because clip-path
cannot deal with curvatures. That means we won’t be able to cut the area inside the main element.
We will introduce the mask
property to the mix.
This part was very tedious, and I struggled to find a general solution that doesn’t rely on magic numbers. I ended up with a very complex solution that uses only one pseudo-element, but the code was a lump of spaghetti that covers only a few particular cases. I don’t think it is worth exploring that route.
I decided to insert an extra element for the sake of simpler code. Here’s the markup:
<div class="box">
<sh></sh>
</div>
I am using a custom element, <sh>
, to avoid any potential conflict with external CSS. I could have used a <div>
, but since it’s a common element, it can easily be targeted by another CSS rule coming from somewhere else that can break our code.
The first step is to position the <sh>
element and purposely create an overflow:
.box {
  --r: 50px;

  position: relative;
  border-radius: var(--r);
}
.box sh {
  position: absolute;
  inset: -150px;
  border: 150px solid #0000;
  border-radius: calc(150px + var(--r));
}
The code may look a bit strange, but we’ll get to the logic behind it as we go. Next, we create the gradient shadow using a pseudo-element of <sh>
.
.box {
  --r: 50px;

  position: relative;
  border-radius: var(--r);
  transform-style: preserve-3d;
}
.box sh {
  position: absolute;
  inset: -150px;
  border: 150px solid #0000;
  border-radius: calc(150px + var(--r));
  transform: translateZ(-1px);
}
.box sh::before {
  content: "";
  position: absolute;
  inset: -5px;
  border-radius: var(--r);
  background: /* Your gradient */;
  filter: blur(10px);
  transform: translate(10px, 8px);
}
As you can see, the pseudo-element uses the same code as all the previous examples. The only difference is the 3D transform
defined on the <sh>
element instead of the pseudo-element. For the moment, we have a gradient shadow without the transparency feature:
Note that the area of the <sh>
element is defined with the black outline. Why I am doing this? Because that way, I am able to apply a mask
on it to hide the part inside the green area and keep the overflowing part where we need to see the shadow.
I know it’s a bit tricky, but unlike clip-path
, the mask
property doesn’t account for the area outside an element to show and hide things. That’s why I was obligated to introduce the extra element — to simulate the “outside” area.
Also, note that I am using a combination of border
and inset
to define that area. This allows me to keep the padding-box of that extra element the same as the main element so that the pseudo-element won’t need additional calculations.
Another useful thing we get from using an extra element is that the element is fixed, and only the pseudo-element is moving (using translate
). This will allow me to easily define the mask, which is the last step of this trick.
mask:
  linear-gradient(#000 0 0) content-box,
  linear-gradient(#000 0 0);
mask-composite: exclude;
It’s done! We have our gradient shadow, and it supports border-radius
! You probably expected a complex mask
value with oodles of gradients, but no! We only need two simple gradients and a mask-composite
to complete the magic.
Let’s isolate the <sh>
element to understand what is happening there:
.box sh {
  position: absolute;
  inset: -150px;
  border: 150px solid red;
  background: lightblue;
  border-radius: calc(150px + var(--r));
}
Here’s what we get:
Note how the inner radius matches the main element’s border-radius
. I have defined a big border (150px
) and a border-radius
equal to the big border plus the main element’s radius. On the outside, I have a radius equal to 150px + R
. On the inside, I have 150px + R - 150px = R
.
We must hide the inner (blue) part and make sure the border (red) part is still visible. To do that, I’ve defined two mask layers — one that covers only the content-box area and another that covers the border-box area (the default value). Then I excluded one from the other to reveal the border.
mask:
  linear-gradient(#000 0 0) content-box,
  linear-gradient(#000 0 0);
mask-composite: exclude;
I used the same technique to create a border that supports gradients and border-radius
. Ana Tudor also has a good article about mask compositing that I invite you to read.
Are there any drawbacks to this method?
Yes, this is definitely not perfect. The first issue you may face is related to using a border on the main element. This may create a small misalignment in the radii if you don’t account for it. We have this issue in our example, though you can hardly notice it.
The fix is relatively easy: Add the border’s width to the <sh>
element’s inset
.
.box {
  --r: 50px;

  border-radius: var(--r);
  border: 2px solid;
}
.box sh {
  position: absolute;
  inset: -152px; /* 150px + 2px */
  border: 150px solid #0000;
  border-radius: calc(150px + var(--r));
}
Another drawback is the big value we’re using for the border (150px
in the example). This value should be big enough to contain the shadow but not too big to avoid overflow and scrollbar issues. Luckily, the online generator will calculate the optimal value considering all the parameters.
The last drawback I am aware of is when you’re working with a complex border-radius
. For example, if you want a different radius applied to each corner, you must define a variable for each side. It’s not really a drawback, I suppose, but it can make your code a bit tougher to maintain.
.box {
  --r-top: 10px;
  --r-right: 40px;
  --r-bottom: 30px;
  --r-left: 20px;

  border-radius: var(--r-top) var(--r-right) var(--r-bottom) var(--r-left);
}
.box sh {
  border-radius: calc(150px + var(--r-top)) calc(150px + var(--r-right))
                 calc(150px + var(--r-bottom)) calc(150px + var(--r-left));
}
.box sh::before {
  border-radius: var(--r-top) var(--r-right) var(--r-bottom) var(--r-left);
}
The online generator only considers a uniform radius for the sake of simplicity, but you now know how to modify the code if you want to consider a complex radius configuration.
We’ve reached the end! The magic behind gradient shadows is no longer a mystery. I tried to cover all the possibilities and any possible issues you might face. If I missed something or you discover any issue, please feel free to report it in the comment section, and I’ll check it out.
Again, a lot of this is likely overkill considering that the de facto solution will cover most of your use cases. Nevertheless, it’s good to know the “why” and “how” behind the trick, and how to overcome its limitations. Plus, we got good exercise playing with CSS clipping and masking.
And, of course, you have the online generator you can reach for anytime you want to avoid the hassle.
Different Ways to Get CSS Gradient Shadows originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
]]>Healthcare, Selling Lemons, and the Price of Developer Experience originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
]]>This is where the story begins. Eric goes to a health service provider website to book an appointment and gets… a blank screen.
In addition to a terrifying amount of telemetry, Modern Health’s customer-facing experience is delivered using React and Webpack.
If you are familiar with how the web is built, what happened is pretty obvious: A website that over-relies on JavaScript to power its experience had its logic collide with one or more other errant pieces of logic that it summons. This created a deadlock.
If you do not make digital experiences for a living, what happened is not obvious at all. All you see is a tiny fake loading spinner that never stops.
D’oh. This might be mere nuisance — or even laughable — in some situations, but not when someone’s health is on the line:
A person seeking help in a time of crisis does not care about TypeScript, tree shaking, hot module replacement, A/B tests, burndown charts, NPS, OKRs, KPIs, or other startup jargon. Developer experience does not count for shit if the person using the thing they built can’t actually get what they need.
This is the big smack of reality. What happens when our tooling and reporting — the very things that are supposed to make our work more effective — get in the way of the user experience? These are tools that provide insights that can help us anticipate a user’s needs, especially in a time of need.
I realize that pointing the finger at JavaScript frameworks is already divisive. But this goes beyond whether you use React or the framework du jour. It’s about business priorities and developer experience conflicting with user experiences.
Partisans for slow, complex frameworks have successfully marketed lemons as the hot new thing, despite the pervasive failures in their wake, crowding out higher-quality options in the process.
These technologies were initially pitched on the back of “better user experiences”, but have utterly failed to deliver on that promise outside of the high-management-maturity organisations in which they were born. Transplanted into the wider web, these new stacks have proven to be expensive duds.
There’s the rub. Alex ain’t mincing words, but notice that the onus is less on developers themselves than on the way frameworks have been marketed to them. The sales pitch?
Once the lemon sellers embed the data-light idea that improved “Developer Experience” (“DX”) leads to better user outcomes, improving “DX” became an end unto itself, and many who knew better felt forced to play along. The long lead times in falsifying trickle-down UX was a feature, not a bug; they don’t need you to succeed, only to keep buying.
As marketing goes, the “DX” bait-and-switch is brilliant, but the tech isn’t delivering for anyone but developers.
Tough to stomach, right? No one wants to be duped, and it’s tough to admit a sunken cost when there is one. It gets downright personal if you’ve invested time in a specific piece of tech and effort integrating it into your stack. Development workflows are hard and settling into one is sorta like settling into a house you plan on living in a little while. But you’d want to know if your house was built on what Alex calls a “sandy foundation”.
I’d just like to pause here a moment to say I have no skin in this debate. As a web generalist, I tend to adopt new tools early for familiarity then drop them fast, relegating them to my toolshed until I find a good use for them. In other words, my knowledge is wide but not very deep in one area or thing. HTML, CSS, and JavaScript is my go-to cocktail, but I do care a great deal about user experience and know when to reach for a tool to solve a particular thing.
And let’s acknowledge that not everyone has a say in the matter. Many of us work on managed teams that are prescribed the tools we use. Alex says as much, which I think is important to call out because it’s clear this isn’t meant to be personal. It’s a statement on our priorities and making sure they align with user expectations.
Let’s allow Chris to steer us back to the story…
So, maybe your app is built on React and it doesn’t matter why it’s that way. There’s still work to do to ensure the app is reliable and accessible.
Just blocking a file shouldn’t totally wreck a website, but it often does! In JavaScript, that may be because the developers have written first-party JavaScript (which I’ll generally allow) that depends on third-party JavaScript (which I’ll generally block).
[…]
If I block resources from
tracking-website.com
, now my first-party JavaScript is going to throw an error. JavaScript isn’t chill. If an error is thrown, it doesn’t execute more JavaScript further down in the file. If further down in that file istransitionToOnboarding();
— that ain’t gonna work.
Maybe it’s worth revisiting your workflow and tweaking it to identify more points of failure.
So here’s an idea: Run your end-to-end tests in browsers that have popular content blockers with default configs installed.
Doing so may uncover problems like this that stop your customers, and indeed people in need, from being stopped in their tracks.
Good idea! Hey, anything that helps paint a more realistic picture of how the app is used. That sort of clarity could happen a lot earlier in the process, perhaps before settling on development decisions. Know your users. Why are they using the app? How do they browse the web? Where are they physically located? What problems could get in their way? Chris has a great talk on that, too.
Healthcare, Selling Lemons, and the Price of Developer Experience originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
]]><img>
anyway, accessibility and whatnot.
But there are times when the position or scale of a background image might sit somewhere between the poles of content and decoration. Context is king, right? If we change the background image’s position, it may convey a bit more context or experience.
How so? Let’s look at a few examples I’ve seen floating around.
As we get started, I’ll caution that there’s a fine line in these demos between images used for decoration and images used as content. The difference has accessibility implications where backgrounds are not announced to screen readers. If your image is really an image, then maybe consider an <img>
tag with proper alt
text. And while we’re talking accessibility, it’s a good idea to consider a user’s motion preference’s as well.
Chris Coyier has this neat little demo from several years back.
The demo is super practical in lots of ways because it’s a neat approach for displaying ads in content. You have the sales pitch and an enticing image to supplement it.
The big limitation for most ads, I’d wager, is the limited real estate. I don’t know if you’ve ever had to drop an ad onto a page, but I have and typically ask the advertiser for an image that meets exact pixel dimensions, so the asset fits the space.
But Chris’s demo alleviates the space issue. Hover the image and watch it both move and scale. The user actually gets more context for the product than they would have when the image was in its original position. That’s a win-win, right? The advertiser gets to create an eye-catching image without compromising context. Meanwhile, the user gets a little extra value from the newly revealed portions of the image.
If you peek at the demo’s markup, you’ll notice it’s pretty much what you’d expect. Here’s an abridged version:
<div class="ad-container">
  <a href="#" target="_blank" rel="noopener">
    <!-- Background image container -->
    <div class="ad-image"></div>
  </a>
  <div class="ad-content">
    <!-- Content -->
  </div>
</div>
We could probably quibble over the semantics a bit, but that’s not the point. We have a container with a linked-up <div>
for the background image and another <div>
to hold the content.
As far as styling goes, the important pieces are here:
.container {
  background-image: url("/path/to/some/image.png");
  background-repeat: no-repeat;
  background-position: 0 0;
  height: 400px;
  width: 350px;
}
Not bad, right? We give the container some dimensions and set a background image on it that doesn’t repeat and is positioned by its top-left edge.
The real trick is with JavaScript. We will use that to get the mouse position and the container’s offset, then convert that value to an appropriate scale to set the background-position
. First, let’s listen for mouse movements on the .container
element:
let container = document.querySelector(".container");

container.addEventListener("mousemove", function(e) {
  // Our function
});
From here, we can use the mouse event’s offsetX
and offsetY
properties. But we won’t use these values directly, as the value for the X coordinate is smaller than what we need, and the Y coordinate is larger. We will have to play around a bit to find a constant that we can use as a multiplier.
It’s a bit touch-and-feel, but I’ve found that 1.32
and 0.455
work perfectly for the X and Y coordinates, respectively. We multiply the offsets by those values, append a px
unit on the result, then apply it to the background-position
values.
let container = document.querySelector(".container");

container.addEventListener("mousemove", function(e) {
  container.style.backgroundPositionX = -e.offsetX * 1.32 + "px";
  container.style.backgroundPositionY = -e.offsetY * 0.455 + "px";
});
Lastly, we can also reset the background positions back to the original if the user leaves the image container.
container.addEventListener("mouseleave", function() {
container.style.backgroundPosition = "0px 0px";
}
);
Since we’re on CSS-Tricks, I’ll offer that we could have done a much cheaper version of this with a little hover transition in vanilla CSS:
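Something along these lines (a sketch rather than the exact demo code):

.container {
  transition: background-position 0.4s ease;
}
.container:hover {
  /* nudge the background toward the opposite corner on hover */
  background-position: 100% 100%;
}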
No doubt you’ve been to some online clothing store or whatever and encountered the ol’ zoom-on-hover feature.
This pattern has been around for what feels like forever (Dylan Winn-Brown shared his approach back in 2016), but that’s just a testament (I hope) to its usefulness. The user gets more context as they zoom in and get a better idea of a sweater’s stitching or what have you.
There are two pieces to this: the container and the magnifier. The container is the only thing we need in the markup, as we’ll inject the magnifier element during the user’s interaction. So, behold our HTML!
<div class="container"></div>
In the CSS, we will create width
and height
variables to store the dimensions of the magnifier glass itself. Then we’ll give that .container
some shape and a background-image
:
:root {
  --magnifier-width: 85;
  --magnifier-height: 85;
}

.container {
  width: 500px;
  height: 400px;
  background-size: cover;
  background-image: url("/path/to/image.png");
  background-repeat: no-repeat;
  position: relative;
}
There are some things we already know about the magnifier before we even see it, and we can define those styles up-front, specifically the previously defined variables for the .magnifier’s width
and height
:
.magnifier {
  position: absolute;
  width: calc(var(--magnifier-width) * 1px);
  height: calc(var(--magnifier-height) * 1px);
  border: 3px solid #000;
  cursor: none;
  background-image: url("/path/to/image.png");
  background-repeat: no-repeat;
}
It’s an absolutely-positioned little square that uses the same background image file as the .container
. Do note that the calc function is solely used here to convert the unit-less value in the variable to pixels. Feel free to arrange that however you see fit as far as eliminating repetition in your code.
Now, let’s turn to the JavaScript that pulls this all together. First we need to access the CSS variables defined earlier. We will use them in multiple places later on. Then we need to get the mouse position within the container because that’s the value we’ll use for the magnifier’s background position.
// Get the CSS variables
let root = window.getComputedStyle(document.documentElement);
let magnifier_width = root.getPropertyValue("--magnifier-width");
let magnifier_height = root.getPropertyValue("--magnifier-height");

let container = document.querySelector(".container");

// The following runs inside the mousemove handler we'll set up below:
// get the mouse position relative to the container
let rect = container.getBoundingClientRect();
let x = (e.pageX - rect.left);
let y = (e.pageY - rect.top);
// Take page scrolling into account
x = x - window.pageXOffset;
y = y - window.pageYOffset;
What we need is basically a mousemove
event listener on the .container
. Then, we will use the event.pageX
or event.pageY
property to get the X or Y coordinate of the mouse. But to get the exact relative position of the mouse on an element, we need to subtract the position of the parent element from the mouse position we get from the JavaScript above. A “simple” way to do this is to use getBoundingClientRect()
, which returns the size of an element and its position relative to the viewport.
Notice how I’m taking scrolling into account. If there is overflow, subtracting the window pageX
and pageY
offsets will ensure the effect runs as expected.
We will first create the magnifier div. Next, we will create a mousemove
function and add it to the image container. In this function, we will give the magnifier a class attribute. We will also calculate the mouse position and give the magnifier the left and top values we calculated earlier.
Let’s go ahead and build the magnifier when we hear a mousemove event on the .container:
// create the magnifier
let magnifier = document.createElement("div");
container.append(magnifier);
Now we need to make sure it has a class name we can scope to the CSS:
// run the function on `mousemove`
container.addEventListener("mousemove", (e) => {
  magnifier.setAttribute("class", "magnifier");
});
The example video I showed earlier positions the magnifier outside of the container. We’re gonna keep this simple and overlay it on top of the container instead as the mouse moves. We will use if
statements to set the magnifier’s position only if the X and Y values are greater or equal to zero, and less than the container’s width or height. That should keep it in bounds. Just be sure to subtract the width and height of the magnifier from the X and Y values.
// Run the function on mouse move.
container.addEventListener("mousemove", (e) => {
  magnifier.setAttribute("class", "magnifier");

  // Get mouse position
  let rect = container.getBoundingClientRect();
  let x = (e.pageX - rect.left);
  let y = (e.pageY - rect.top);

  // Take page scrolling into account
  x = x - window.pageXOffset;
  y = y - window.pageYOffset;

  // Prevent magnifier from exiting the container
  // Then set top and left values of magnifier
  if (x >= 0 && x <= container.clientWidth - magnifier_width) {
    magnifier.style.left = x + "px";
  }
  if (y >= 0 && y <= container.clientHeight - magnifier_height) {
    magnifier.style.top = y + "px";
  }
});
Last, but certainly not least… we need to play with the magnifier’s background image a bit. The whole point is that the user gets a BIGGER view of the background image based on where the hover is taking place. So, let’s define a magnify value we can use to scale things up. Then we’ll define variables for the background image’s width and height so we have something to base that scale on, and set all of those values on the .magnifier
styles:
// Magnifier image configurations
let magnify = 2;
let imgWidth = 500;
let imgHeight = 400;
magnifier.style.backgroundSize = imgWidth * magnify + "px " + imgHeight * magnify + "px";
Let’s take the X and Y coordinates of the magnifier’s image and apply them to the .magnifier element’s background-position. As before with the magnifier position, we need to subtract the width and height of the magnifier from the X and Y values using the CSS variables.
// the x and y positions of the magnifier image
let magnify_x = x * magnify + 15;
let magnify_y = y * magnify + 15;
// set backgroundPosition for magnifier if it is within image
if (
x <= container.clientWidth - magnifier_width &&
y <= container.clientHeight - magnifier_height
) {
magnifier.style.backgroundPosition = -magnify_x + "px " + -magnify_y + "px";
}
Tada!
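If it helps to see everything in one place, here’s a sketch of the whole script stitched together from the steps above. It assumes the same .container markup and the --magnifer-* variables from the CSS earlier, plus the background image itself being set on the .magnifier class:

// Read the CSS variables as numbers (parseInt drops the "px" unit)
let root = window.getComputedStyle(document.documentElement);
let magnifier_width = parseInt(root.getPropertyValue("--magnifer-width"), 10);
let magnifier_height = parseInt(root.getPropertyValue("--magnifer-height"), 10);
let container = document.querySelector(".container");

// Create the magnifier once, up front
let magnifier = document.createElement("div");
container.append(magnifier);
magnifier.setAttribute("class", "magnifier");

// Magnifier image configuration
let magnify = 2;
let imgWidth = 500;
let imgHeight = 400;
magnifier.style.backgroundSize = imgWidth * magnify + "px " + imgHeight * magnify + "px";

container.addEventListener("mousemove", (e) => {
  // Mouse position relative to the container, minus any page scrolling
  let rect = container.getBoundingClientRect();
  let x = e.pageX - rect.left - window.pageXOffset;
  let y = e.pageY - rect.top - window.pageYOffset;

  // Keep the magnifier inside the container
  if (x >= 0 && x <= container.clientWidth - magnifier_width) {
    magnifier.style.left = x + "px";
  }
  if (y >= 0 && y <= container.clientHeight - magnifier_height) {
    magnifier.style.top = y + "px";
  }

  // Scale the background position to show the zoomed-in area
  let magnify_x = x * magnify + 15;
  let magnify_y = y * magnify + 15;
  if (
    x <= container.clientWidth - magnifier_width &&
    y <= container.clientHeight - magnifier_height
  ) {
    magnifier.style.backgroundPosition = -magnify_x + "px " + -magnify_y + "px";
  }
});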
Have you seen the Ken Burns effect? It’s a classic, timeless thing where an image is bigger than the container it’s in, then sorta slides and scales slow as a slug. Just about every documentary film in the world seems to use it for image stills. If you have an Apple TV, then you’ve certainly seen it on the screen saver.
There are plenty of examples over at CodePen if you wanna get a better idea.
You’ll see that there are a number of ways to approach this. Some use JavaScript. Others are 100% CSS. I’m sure the JavaScript approaches are good for some use cases, but if the goal is simply to subtly scale the image, CSS is perfectly suitable.
We could spice things up a bit using multiple backgrounds rather than one. Or, better yet, if we expand the rules to use elements instead of background images, we can apply the same animation to all of the backgrounds and use a dash of animation-delay to stagger the effect.
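Here’s a rough sketch of that staggered idea, with my own hypothetical class names and timings rather than any specific demo: three stacked layers share one scale-and-pan animation, each offset by a negative animation-delay so they start at different points in the cycle.

.scene .layer {
  position: absolute;
  inset: 0;
  background-size: cover;
  animation: kenburns 20s ease-in-out infinite alternate;
}
/* Negative delays start each layer partway through the animation */
.scene .layer:nth-child(2) { animation-delay: -5s; }
.scene .layer:nth-child(3) { animation-delay: -10s; }

@keyframes kenburns {
  from { transform: scale(1) translate(0, 0); }
  to   { transform: scale(1.25) translate(-3%, 2%); }
}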
Lots of ways to do this, of course! It can certainly be optimized with Sass and/or CSS variables. Heck, maybe you can pull it off with a single <div>. If so, share it in the comments!
I don’t know if anything is cooler than Sarah Drasner’s “Happy Halloween” pen… and that’s from 2016! It is a great example that layers backgrounds and moves them at varying speeds to create an almost cinematic experience. Good gosh is that cool!
GSAP is the main driver there, but I imagine we could make a boiled-down version that simply translates each background layer from left to right at different speeds. Not as cool, of course, but certainly the baseline experience. Gotta make sure the start and end of each background image is consistent so it repeats seamlessly when the animation repeats.
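A boiled-down sketch of that baseline might be as simple as sliding each layer’s background at its own speed. The class names and numbers here are mine, just to illustrate the idea:

.parallax .layer {
  position: absolute;
  inset: 0;
  background-repeat: repeat-x;
  background-size: auto 100%;
  animation: slide 30s linear infinite;
}
/* Farther layers move slower, nearer layers move faster */
.parallax .layer--back   { animation-duration: 60s; }
.parallax .layer--middle { animation-duration: 30s; }
.parallax .layer--front  { animation-duration: 15s; }

@keyframes slide {
  from { background-position-x: 0; }
  to   { background-position-x: -1000px; } /* assumes a 1000px-wide repeating tile */
}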
That’s it for now! Pretty neat that we can use backgrounds for much more than texture and contrast. I’m absolutely positive there are tons of other clever interactions we can apply to backgrounds. Temani Afif did exactly that with a bunch of neat hover effects for links. What do you have in mind? Share it with me in the comments!
Moving Backgrounds originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
]]>But if you’re looking for gains on the CSS side of things, Patrick has a nice way of sniffing out your most expensive selectors using Edge DevTools:
From here, click on one of the Recalculated Style events in the Main waterfall view and you’ll get a new “Selector Stats” tab. Look at all that gooey goodness!
Now you see all of the selectors that were processed, and they can be sorted by how long they took, how many times they matched, the number of matching attempts, and something called “fast reject count,” which I learned is the number of elements that were easy and quick to eliminate from matching.
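To make that “fast reject” idea concrete, here’s a hypothetical contrast (my example, not Patrick’s). Browsers match selectors right-to-left, so the rightmost part determines how many elements are candidates in the first place:

/* Potentially expensive: every <span> on the page is a candidate,
   and each match attempt walks up through several ancestors */
body div ul li a span { color: red; }

/* Cheap: anything without the class is rejected immediately */
.nav-label { color: red; }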
A lot of insights here if CSS is really the bottleneck that needs investigating. But read Patrick’s full post over on the Microsoft Edge Blog because he goes much deeper into the why’s and how’s, and walks through an entire case study.
The truth about CSS selector performance originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
]]><p>
I used to have this boss who <em>loved</em>, <strong>loved</strong>,
<strong><em>loved</em></strong>, <strong><em><u>loved</u></em></strong>
to emphasize words.
</p>
(Let’s not go into the colors he used for even MOAR emphasis.)
Writing all that markup never felt great. The effort it took, sure, whatever. But is it even a good idea to overload content with double — or more! — emphases?
For starters, the <strong> and <em> tags are designed for different uses. We got updated definitions for them in HTML5, where:
<strong>: Is used to convey “strong importance, seriousness, or urgency for its contents”.
<em>: Represents “stress emphasis”.
So, <strong> gives the content more weight in the sense it suggests that the content in it is important or urgent. Think of a warning:
Warning: The following content has been flagged for being awesome.
It might be tempting to reach for <em> to do the same thing. Italicized text can be attention-grabbing after all. But it’s really meant as a hint to use more emphasis when reading the content in it. For example, here are two versions of the same sentence with the emphasis in different locations:
<p>I ate the <em>entire</em> plate of burritos.</p>
<p>I ate the entire <em>plate</em> of burritos.</p>
Both examples stress emphasis, but on different words. And they would sound different if you were to read them out loud. That makes <em> a great way to express tone in your writing. It changes the meaning of the sentence in a way that <strong> does not.
Those are two things you gotta weigh when emphasizing content. Like, there are plenty of instances where you may need to italicize content without affecting the meaning of the sentence. But those can be handled with other tags that render italics:
<i>: This is the classic one! Before HTML5, this was used to stress emphasis with italics all over the place. Now, it’s purely used to italicize content visually without changing the semantic meaning.
<cite>: Indicating the source of a fact or figure. (“Source: CSS-Tricks”)
<address>: Used to mark up contact information, not only physical addresses, but things like email addresses and phone numbers too. (howdy@example.com)
It’s going to be the same thing with <strong>. Rather than using it for styling text you want to look heavier, it’s a better idea to use the classic <b> tag for boldfacing to avoid giving extra significance to content that doesn’t need it. And remember, some elements like headings are already rendered in bold, thanks to the browser’s default styles. There’s no need to add even more strong emphasis.
There are legitimate cases where you may need to italicize part of a line that’s already emphasized. Or maybe add emphasis to a bit of text that’s already italicized.
A blockquote might be a good example. I’ve seen plenty of times where they are italicized for style, even though default browser styles don’t do it:
blockquote {
font-style: italic;
}
What if we need to mention a movie title in that blockquote? That should be italicized. There’s no stress emphasis needed, so an <i> tag will do. But it’s still weird to italicize something when it’s already rendered that way:
<blockquote>
This movie’s opening weekend performance offers some insight into
its box office momentum as it fights to justify its enormous
budget. In its first weekend, <i>Avatar: The Way of Water</i> made
$134 million in North America alone and $435 million globally.
</blockquote>
In a situation where we’re italicizing something within italicized content like this, we’re supposed to remove the italics from the nested element… <i> in this case.
blockquote i {
font-style: normal;
}
Container style queries will be super useful to nab all these instances if we get them:
blockquote {
container-name: quote;
font-style: italic;
}
/* style queries use the style() function syntax */
@container quote style(font-style: italic) {
em, i, cite, address {
font-style: normal;
}
}
This little snippet evaluates the blockquote to see if its font-style is set to italic. If it is, then it’ll make sure the <em>, <i>, <cite>, and <address> elements are rendered as normal text, while retaining the semantic meaning if there is one.
I wouldn’t nest <strong> inside <em> like this:
<p>I ate the <em><strong>entire</strong></em> plate of burritos.</p>
…or nest <em> inside <strong> instead:
<p>I ate the <strong><em>entire</em></strong> plate of burritos.</p>
The rendering is fine! And it doesn’t matter what order they’re in… at least in modern browsers. Jennifer Kyrnin mentions that some browsers only render the tag nearest to the text, but I didn’t bump into that anywhere in my limited tests. But something to watch for!
The reason I wouldn’t nest one form of emphasis in another is because it simply isn’t needed. There is no grammar rule that calls for it. Like exclamation points, one form of emphasis is enough, and you ought to use the one that matches what you’re after, whether it’s visual, weight, or announced emphasis.
And even though some screen readers are capable of announcing emphasized content, they won’t read the markup with any additional importance or emphasis. So, no additional accessibility perks either, as far as I can tell.
If you’re in the position where your boss is like mine and wants ALL the emphasis, I’d reach for the right HTML tag for the type of emphasis, then apply the rest of the styling with CSS and tags that don’t affect semantics, to account for anything browser styles won’t handle.
<style>
/* If `em` contains `b` or `u` tags */
em:has(b, u) {
color: #f8a100;
}
</style>
<p>
I used to have this boss who <em>loved</em>, <strong>loved</strong>,
<strong><em>loved</em></strong>, <strong><em><u>loved</u></em></strong>
to emphasize words.
</p>
I might even do it with the <strong> tag too as a defensive measure:
/* If `em` contains `b` or `u` tags */
em:has(b, u),
/* If `strong` contains `i` or `u` tags */
strong:has(i, u) {
  color: #f8a100;
}
As long as we’re playing defense, we can identify errors where emphases are nested within emphases by highlighting them in red or something:
/* Highlight semantic emphases within semantic emphases */
em:has(strong),
strong:has(em) {
background: hsl(0deg 50% 50% / .25);
border: 1px dashed hsl(0deg 50% 50% / .25);
}
Then I’d probably use that snippet from the last section that removes the default italic styling from an element when it is nested in another italicized element.
Mayyyyybe: if you nest <strong> in <em> or the other way around, there may be a browser or several that will only render the tag nearest to the text, so it’s worth keeping an eye out.
The Double Emphasis Thing originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
]]>I have a similar idea but tackled a different way and with a sprinkle of animation. I think it’s pretty practical and makes for a neat hover effect you can use on something like your own avatar.
See that? We’re going to make a scaling animation where the avatar seems to pop right out of the circle it’s in. Cool, right? Don’t look at the code and let’s build this animation together step-by-step.
If you haven’t checked the code of the demo and you are wondering how many divs this’ll take, then stop right there, because our markup is nothing but a single image element:
<img src="" alt="">
Yes, a single element! The challenging part of this exercise is using the smallest amount of code possible. If you have been following me for a while, you should be used to this. I try hard to find CSS solutions that can be achieved with the smallest, most maintainable code possible.
I wrote a series of articles here on CSS-Tricks where I explore different hover effects using the same HTML markup containing a single element. I go into detail on gradients, masking, clipping, outlines, and even layout techniques. I highly recommend checking those out because I will re-use many of the tricks in this post.
An image file that’s square with a transparent background will work best for what we’re doing. Here’s the one I’m using if you want to start with that.
I’m hoping to see as many examples of this as possible using real images — so please share your final result in the comments when you’re done so we can build a collection!
Before jumping into CSS, let’s first dissect the effect. The image gets bigger on hover, so we’ll for sure use transform: scale() in there. There’s a circle behind the avatar, and a radial gradient should do the trick. Finally, we need a way to create a border at the bottom of the circle that creates the appearance of the avatar behind the circle.
Let’s get to work!
Let’s start by adding the transform:
img {
width: 280px;
aspect-ratio: 1;
cursor: pointer;
transition: .5s;
}
img:hover {
transform: scale(1.35);
}
Nothing complicated yet, right? Let’s move on.
We said that the background would be a radial gradient. That’s perfect because we can create hard stops between the colors of a radial gradient, which make it look like we’re drawing a circle with solid lines.
img {
--b: 5px; /* border width */
width: 280px;
aspect-ratio: 1;
background:
radial-gradient(
circle closest-side,
#ECD078 calc(99% - var(--b)),
#C02942 calc(100% - var(--b)) 99%,
#0000
);
cursor: pointer;
transition: .5s;
}
img:hover {
transform: scale(1.35);
}
Note the CSS variable, --b, I’m using there. It represents the thickness of the “border” which is really just being used to define the hard color stops for the red part of the radial gradient.
The next step is to play with the gradient size on hover. The circle needs to keep its size as the image grows. Since we are applying a scale() transformation, we actually need to decrease the size of the circle because it otherwise scales up with the avatar. So, while the image scales up, we need the gradient to scale down.
Let’s start by defining a CSS variable, --f, that defines the “scale factor”, and use it to set the size of the circle. I’m using 1 as the default value, as in that’s the initial scale for the image and the circle that we transform from.
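Here’s a stripped-down sketch of that idea before we see it in action (just the moving parts; the full rules come together later in this article):

img {
  --f: 1; /* scale factor; 1 means no scaling */
  transform: scale(var(--f));
  /* the gradient shrinks by the same factor the image grows,
     so the circle appears to stay the same size */
  background-size: calc(100% / var(--f)) 100%;
}
img:hover {
  --f: 1.35;
}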
Here is a demo to illustrate the trick. Hover to see what is happening behind the scenes:
I added a third color to the radial-gradient to better identify the area of the gradient on hover:
radial-gradient(
circle closest-side,
#ECD078 calc(99% - var(--b)),
#C02942 calc(100% - var(--b)) 99%,
lightblue
);
Now we have to position our background at the center of the circle and make sure it takes up the full height. I like to declare everything directly on the background shorthand property, so we can add our background positioning and make sure it doesn’t repeat by tacking on those values right after the radial-gradient():
background: radial-gradient() 50% / calc(100% / var(--f)) 100% no-repeat;
The background is placed at the center (50%), has a width equal to calc(100% / var(--f)), and has a height equal to 100%.
Nothing scales when --f is equal to 1 — again, our initial scale. Meanwhile, the gradient takes up the full width of the container. When we increase --f, the element’s size grows — thanks to the scale() transform — and the gradient’s size decreases.
Here’s what we get when we apply all of this to our demo:
We’re getting closer! We have the overflow effect at the top, but we still need to hide the bottom part of the image, so it looks like it is popping out of the circle rather than sitting in front of it. That’s the tricky part of this whole thing and is what we’re going to do next.
I first tried tackling this with the border-bottom property, but I was unable to find a way to match the size of the border to the size of the circle. Here’s the best I could get, and you can immediately see it’s wrong:
The actual solution is to use the outline property. Yes, outline, not border. In a previous article, I show how outline is powerful and allows us to create cool hover effects. Combined with outline-offset, we have exactly what we need for our effect.
The idea is to set an outline on the image and adjust its offset to create the bottom border. The offset will depend on the scaling factor the same way the gradient size did.
Now we have our bottom “border” (actually an outline) combined with the “border” created by the gradient to create a full circle. We still need to hide portions of the outline (from the top and the sides), which we’ll get to in a moment.
Here’s our code so far, including a couple more CSS variables you can use to configure the image size (--s) and the “border” color (--c):
img {
--s: 280px; /* image size */
--b: 5px; /* border thickness */
--c: #C02942; /* border color */
--f: 1; /* initial scale */
width: var(--s);
aspect-ratio: 1;
cursor: pointer;
border-radius: 0 0 999px 999px;
outline: var(--b) solid var(--c);
outline-offset: calc((1 / var(--f) - 1) * var(--s) / 2 - var(--b));
background:
radial-gradient(
circle closest-side,
#ECD078 calc(99% - var(--b)),
var(--c) calc(100% - var(--b)) 99%,
#0000
) 50% / calc(100% / var(--f)) 100% no-repeat;
transform: scale(var(--f));
transition: .5s;
}
img:hover {
--f: 1.35; /* hover scale */
}
Since we need a circular bottom border, we added a border-radius on the bottom side, allowing the outline to match the curvature of the gradient.
The calculation used on outline-offset is a lot more straightforward than it looks. By default, outline is drawn outside of the element’s box. And in our case, we need it to overlap the element. More precisely, we need it to follow the circle created by the gradient.
When we scale the element, we see the space between the circle and the edge. Let’s not forget that the idea is to keep the circle at the same size after the scale transformation runs, which leaves us with the space we will use to define the outline’s offset as illustrated in the above figure.
Let’s not forget that the second element is scaled, so our result is also scaled… which means we need to divide the result by f to get the real offset value:
Offset = ((f - 1) * S/2) / f = (1 - 1/f) * S/2
We add a negative sign since we need the outline to go from the outside to the inside:
Offset = (1/f - 1) * S/2
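To put real numbers on that formula: with the hover scale f = 1.35 and the image size S = 280px used in this article, we get Offset = (1/1.35 - 1) * 280px/2 ≈ -0.26 * 140px ≈ -36px, which pulls the outline roughly 36px inward.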
Here’s a quick demo that shows how the outline follows the gradient:
You may already see it, but we still need the bottom outline to overlap the circle rather than letting it bleed through it. We can do that by removing the border’s size from the offset:
outline-offset: calc((1 / var(--f) - 1) * var(--s) / 2 - var(--b));
Now we need to find how to remove the top part from the outline. In other words, we only want the bottom part of the image’s outline.
First, let’s add space at the top with padding to help avoid the overlap at the top:
img {
--s: 280px; /* image size */
--b: 5px; /* border thickness */
--c: #C02942; /* border color */
--f: 1; /* initial scale */
width: var(--s);
aspect-ratio: 1;
padding-block-start: calc(var(--s)/5);
/* etc. */
}
img:hover {
--f: 1.35; /* hover scale */
}
There is no particular logic to that top padding. The idea is to ensure the outline doesn’t touch the avatar’s head. I used the element’s size to define that space to always have the same proportion.
Note that I have added the content-box value to the background:
background:
radial-gradient(
circle closest-side,
#ECD078 calc(99% - var(--b)),
var(--c) calc(100% - var(--b)) 99%,
#0000
) 50%/calc(100%/var(--f)) 100% no-repeat content-box;
We need this because we added padding and we only want the background set to the content box, so we must explicitly tell the background to stop there.
We reached the last part! All we need to do is to hide some pieces, and we are done. For this, we will rely on the mask property and, of course, gradients.
Here is a figure to illustrate what we need to hide (or, to be more accurate, what we need to show).
The left image is what we currently have, and the right is what we want. The green part illustrates the mask we must apply to the original image to get the final result.
We can identify two parts of our mask: a circle (made with a radial gradient like the background’s) that shows the part of the image sitting inside the circle, and a rectangle (made with a linear gradient) that shows the part overflowing the top.
Here’s our final CSS:
img {
--s: 280px; /* image size */
--b: 5px; /* border thickness */
--c: #C02942; /* border color */
--f: 1; /* initial scale */
--_g: 50% / calc(100% / var(--f)) 100% no-repeat content-box;
--_o: calc((1 / var(--f) - 1) * var(--s) / 2 - var(--b));
width: var(--s);
aspect-ratio: 1;
padding-top: calc(var(--s)/5);
cursor: pointer;
border-radius: 0 0 999px 999px;
outline: var(--b) solid var(--c);
outline-offset: var(--_o);
background:
radial-gradient(
circle closest-side,
#ECD078 calc(99% - var(--b)),
var(--c) calc(100% - var(--b)) 99%,
#0000) var(--_g);
mask:
linear-gradient(#000 0 0) no-repeat
50% calc(-1 * var(--_o)) / calc(100% / var(--f) - 2 * var(--b)) 50%,
radial-gradient(
circle closest-side,
#000 99%,
#0000) var(--_g);
transform: scale(var(--f));
transition: .5s;
}
img:hover {
--f: 1.35; /* hover scale */
}
Let’s break down that mask property. For starters, notice that a similar radial-gradient() from the background property is in there. I created a new variable, --_g, for the common parts to make things less cluttered.
--_g: 50% / calc(100% / var(--f)) 100% no-repeat content-box;
mask:
radial-gradient(
circle closest-side,
#000 99%,
#0000) var(--_g);
Next, there’s a linear-gradient() in there as well:
--_g: 50% / calc(100% / var(--f)) 100% no-repeat content-box;
mask:
linear-gradient(#000 0 0) no-repeat
50% calc(-1 * var(--_o)) / calc(100% / var(--f) - 2 * var(--b)) 50%,
radial-gradient(
circle closest-side,
#000 99%,
#0000) var(--_g);
This creates the rectangle part of the mask. Its width is equal to the radial gradient’s width minus twice the border thickness:
calc(100% / var(--f) - 2 * var(--b))
The rectangle’s height is equal to half (50%) of the element’s size.
We also need the linear gradient placed at the horizontal center (50%) and offset from the top by the same value as the outline’s offset. I created another CSS variable, --_o, for the offset we previously defined:
--_o: calc((1 / var(--f) - 1) * var(--s) / 2 - var(--b));
One of the confusing things here is that we need a negative offset for the outline (to move it from outside to inside) but a positive offset for the gradient (to move from top to bottom). So, if you’re wondering why we multiply the offset, --_o, by -1, well, now you know!
Here is a demo to illustrate the mask’s gradient configuration:
Hover the above and see how everything moves together. The middle box illustrates the mask layer composed of two gradients. Imagine it as the visible part of the left image, and you get the final result on the right!
Oof, we’re done! And not only did we wind up with a slick hover animation, but we did it all with a single HTML <img> element. Just that and less than 20 lines of CSS trickery!
Sure, we relied on some little tricks and math formulas to reach such a complex effect. But we knew exactly what to do since we identified the pieces we needed up-front.
Could we have simplified the CSS if we allowed ourselves more HTML? Absolutely. But we’re here to learn new CSS tricks! This was a good exercise to explore CSS gradients, masking, the outline property’s behavior, transformations, and a whole bunch more. If you felt lost at any point, then definitely check out my series that uses the same general concepts. It sometimes helps to see more examples and use cases to drive a point home.
I will leave you with one last demo that uses photos of popular CSS developers. Don’t forget to show me a demo with your own image so I can add it to the collection!
A Fancy Hover Effect For Your Avatar originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
]]>This post is all about data handling. We’ll add some rudimentary search functionality that will modify the page’s query string (using built-in SvelteKit features), and re-trigger the page’s loader. But, rather than just re-query our (imaginary) database, we’ll add some caching so re-searching prior searches (or using the back button) will show previously retrieved data, quickly, from cache. We’ll look at how to control the length of time the cached data stays valid and, more importantly, how to manually invalidate all cached values. As icing on the cake, we’ll look at how we can manually update the data on the current screen, client-side, after a mutation, while still purging the cache.
This will be a longer, more difficult post than most of what I usually write since we’re covering harder topics. This post will essentially show you how to implement common features of popular data utilities like react-query; but instead of pulling in an external library, we’ll only be using the web platform and SvelteKit features.
Unfortunately, the web platform’s features are a bit lower level, so we’ll be doing a bit more work than you might be used to. The upside is we won’t need any external libraries, which will help keep bundle sizes nice and small. Please don’t use the approaches I’m going to show you unless you have a good reason to. Caching is easy to get wrong, and as you’ll see, there’s a bit of complexity that’ll result in your application code. Hopefully your data store is fast, and your UI is fine allowing SvelteKit to just always request the data it needs for any given page. If it is, leave it alone. Enjoy the simplicity. But this post will show you some tricks for when that stops being the case.
Speaking of react-query, it was just released for Svelte! So if you find yourself leaning on manual caching techniques a lot, be sure to check that project out, and see if it might help.
Before we start, let’s make a few small changes to the code we had before. This will give us an excuse to see some other SvelteKit features and, more importantly, set us up for success.
First, let’s move our data loading from our loader in +page.server.js to an API route. We’ll create a +server.js file in routes/api/todos, and then add a GET function. This means we’ll now be able to fetch (using the default GET verb) to the /api/todos path. We’ll add the same data loading code as before.
import { json } from "@sveltejs/kit";
import { getTodos } from "$lib/data/todoData";
export async function GET({ url, setHeaders, request }) {
const search = url.searchParams.get("search") || "";
const todos = await getTodos(search);
return json(todos);
}
Next, let’s take the page loader we had, and simply rename the file from +page.server.js to +page.js (or .ts if you’ve scaffolded your project to use TypeScript). This changes our loader to be a “universal” loader rather than a server loader. The SvelteKit docs explain the difference, but a universal loader runs on both the server and also the client. One advantage for us is that the fetch call into our new endpoint will run right from our browser (after the initial load), using the browser’s native fetch function. We’ll add standard HTTP caching in a bit, but for now, all we’ll do is call the endpoint.
export async function load({ fetch, url, setHeaders }) {
const search = url.searchParams.get("search") || "";
const resp = await fetch(`/api/todos?search=${encodeURIComponent(search)}`);
const todos = await resp.json();
return {
todos,
};
}
Now let’s add a simple form to our /list page:
<div class="search-form">
<form action="/list">
<label>Search</label>
<input autofocus name="search" />
</form>
</div>
Yep, forms can target directly to our normal page loaders. Now we can add a search term in the search box, hit Enter, and a “search” term will be appended to the URL’s query string, which will re-run our loader and search our to-do items.
Let’s also increase the delay in our todoData.js file in /lib/data. This will make it easy to see when data are and are not cached as we work through this post.
export const wait = async amount => new Promise(res => setTimeout(res, amount ?? 500));
Remember, the full code for this post is all on GitHub, if you need to reference it.
Let’s get started by adding some caching to our /api/todos endpoint. We’ll go back to our +server.js file and add our first cache-control header.
setHeaders({
"cache-control": "max-age=60",
});
…which will leave the whole function looking like this:
export async function GET({ url, setHeaders, request }) {
const search = url.searchParams.get("search") || "";
setHeaders({
"cache-control": "max-age=60",
});
const todos = await getTodos(search);
return json(todos);
}
We’ll look at manual invalidation shortly, but all this function says is to cache these API calls for 60 seconds. Set this to whatever you want, and depending on your use case, stale-while-revalidate might also be worth looking into.
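For example (the 300 here is a number I made up for illustration), this variation would serve cached data for a minute, then keep serving the stale copy for up to five more minutes while the browser re-fetches in the background:

setHeaders({
  // fresh for 60s, then usable-but-stale for another 300s
  // while a background revalidation happens
  "cache-control": "max-age=60, stale-while-revalidate=300",
});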
And just like that, our queries are caching.
Note: make sure you un-check the checkbox that disables caching in DevTools.
Remember, if your initial navigation on the app is the list page, those search results will be cached internally to SvelteKit, so don’t expect to see anything in DevTools when returning to that search.
Our very first, server-rendered load of our app (assuming we start at the /list page) will be fetched on the server. SvelteKit will serialize and send this data down to our client. What’s more, it will observe the Cache-Control header on the response, and will know to use this cached data for that endpoint call within the cache window (which we set to 60 seconds in our example).
After that initial load, when you start searching on the page, you should see network requests from your browser to the /api/todos endpoint. As you search for things you’ve already searched for (within the last 60 seconds), the responses should load immediately since they’re cached.
What’s especially cool with this approach is that, since this is caching via the browser’s native caching, these calls could (depending on how you manage the cache busting we’ll be looking at) continue to cache even if you reload the page (unlike the initial server-side load, which always calls the endpoint fresh, even if it did it within the last 60 seconds).
Obviously data can change anytime, so we need a way to purge this cache manually, which we’ll look at next.
Right now, data will be cached for 60 seconds. No matter what, after a minute, fresh data will be pulled from our datastore. You might want a shorter or longer time period, but what happens if you mutate some data and want to clear your cache immediately so your next query will be up to date? We’ll solve this by adding a query-busting value to the URL we send to our new /todos endpoint.
Let’s store this cache busting value in a cookie. That value can be set on the server but still read on the client. Let’s look at some sample code.
We can create a +layout.server.js file at the very root of our routes folder. This will run on application startup, and is a perfect place to set an initial cookie value.
export function load({ cookies, isDataRequest }) {
const initialRequest = !isDataRequest;
const cacheValue = initialRequest ? +new Date() : cookies.get("todos-cache");
if (initialRequest) {
cookies.set("todos-cache", cacheValue, { path: "/", httpOnly: false });
}
return {
todosCacheBust: cacheValue,
};
}
You may have noticed the isDataRequest value. Remember, layouts will re-run anytime client code calls invalidate(), or anytime we run a server action (assuming we don’t turn off default behavior). isDataRequest indicates those re-runs, and so we only set the cookie if that’s false; otherwise, we send along what’s already there.
The httpOnly: false flag is also significant. This allows our client code to read these cookie values in document.cookie. This would normally be a security concern, but in our case these are meaningless numbers that allow us to cache or cache bust.
Our universal loader is what calls our /todos endpoint. This runs on the server or the client, and we need to read that cache value we just set up no matter where we are. It’s relatively easy if we’re on the server: we can call await parent() to get the data from parent layouts. But on the client, we’ll need to use some gross code to parse document.cookie:
export function getCookieLookup() {
if (typeof document !== "object") {
return {};
}
return document.cookie.split("; ").reduce((lookup, v) => {
const parts = v.split("=");
lookup[parts[0]] = parts[1];
return lookup;
}, {});
}
const getCurrentCookieValue = name => {
const cookies = getCookieLookup();
return cookies[name] ?? "";
};
Fortunately, we only need it once.
But now we need to send this value to our /todos endpoint.
import { getCurrentCookieValue } from "$lib/util/cookieUtils";
export async function load({ fetch, parent, url, setHeaders }) {
const parentData = await parent();
const cacheBust = getCurrentCookieValue("todos-cache") || parentData.todosCacheBust;
const search = url.searchParams.get("search") || "";
const resp = await fetch(`/api/todos?search=${encodeURIComponent(search)}&cache=${cacheBust}`);
const todos = await resp.json();
return {
todos,
};
}
getCurrentCookieValue('todos-cache') has a check in it to see if we’re on the client (by checking the type of document), and returns nothing if we aren’t, at which point we know we’re on the server. Then it uses the value from our layout.
But how do we actually update that cache busting value when we need to? Since it’s stored in a cookie, we can call it like this from any server action:
cookies.set("todos-cache", cacheValue, { path: "/", httpOnly: false });
It’s all downhill from here; we’ve done the hard work. We’ve covered the various web platform primitives we need, as well as where they go. Now let’s have some fun and write application code to tie it all together.
For reasons that’ll become clear in a bit, let’s start by adding editing functionality to our /list page. We’ll add this second table row for each todo:
import { enhance } from "$app/forms";
<tr>
<td colspan="4">
<form use:enhance method="post" action="?/editTodo">
<input name="id" value="{t.id}" type="hidden" />
<input name="title" value="{t.title}" />
<button>Save</button>
</form>
</td>
</tr>
And, of course, we’ll need to add a form action for our /list page. Actions can only go in .server pages, so we’ll add a +page.server.js in our /list folder. (Yes, a +page.server.js file can co-exist next to a +page.js file.)
import { getTodo, updateTodo, wait } from "$lib/data/todoData";
export const actions = {
async editTodo({ request, cookies }) {
const formData = await request.formData();
const id = formData.get("id");
const newTitle = formData.get("title");
await wait(250);
updateTodo(id, newTitle);
cookies.set("todos-cache", +new Date(), { path: "/", httpOnly: false });
},
};
We’re grabbing the form data, forcing a delay, updating our todo, and then, most importantly, clearing our cache bust cookie.
Let’s give this a shot. Reload your page, then edit one of the to-do items. You should see the table value update after a moment. If you look in the Network tab in DevTools, you’ll see a fetch to the /todos endpoint, which returns your new data. Simple, and works by default.
What if we want to avoid that fetch that happens after we update our to-do item, and instead, update the modified item right on the screen?
This isn’t just a matter of performance. If you search for “post” and then remove the word “post” from any of the to-do items in the list, they’ll vanish from the list after the edit since they’re no longer in that page’s search results. You could make the UX better with some tasteful animation for the exiting to-do, but let’s say we wanted to not re-run that page’s load function but still clear the cache and update the modified to-do so the user can see the edit. SvelteKit makes that possible — let’s see how!
First, let’s make one little change to our loader. Instead of returning our to-do items, let’s return a writable store containing our to-dos (don’t forget to import writable from svelte/store):
return {
todos: writable(todos),
};
Before, we were accessing our to-dos on the data
prop, which we do not own and cannot update. But Svelte lets us return our data in their own store (assuming we’re using a universal loader, which we are). We just need to make one more tweak to our /list
page.
Instead of this:
{#each todos as t}
…we need to do this, since todos is itself now a store:
{#each $todos as t}
Now our data loads as before. But since todos is a writable store, we can update it.
First, let’s provide a function to our use:enhance attribute:
<form
  use:enhance={executeSave}
  method="post"
  action="?/editTodo"
>
This will run before a submit. Let’s write that next:
function executeSave({ data }) {
const id = data.get("id");
const title = data.get("title");
return async () => {
todos.update(list =>
list.map(todo => {
if (todo.id == id) {
return Object.assign({}, todo, { title });
} else {
return todo;
}
})
);
};
}
This function provides a data object with our form data. We return an async function that will run after our edit is done. The docs explain all of this, but by doing this, we shut off SvelteKit’s default form handling that would have re-run our loader. This is exactly what we want! (We could easily get that default behavior back, as the docs explain.)
We now call update on our todos array since it’s a store. And that’s that. After editing a to-do item, our changes show up immediately and our cache is cleared (as before, since we set a new cookie value in our editTodo form action). So, if we search and then navigate back to this page, we’ll get fresh data from our loader, which will correctly exclude any to-do items that no longer match the prior search.
The code for the immediate updates is available at GitHub.
We can set cookies in any server load function (or server action), not just the root layout. So, if some data are only used underneath a single layout, or even a single page, you could set that cookie value there. Moreover, if you’re not doing the trick I just showed manually updating on-screen data, and instead want your loader to re-run after a mutation, then you could always set a new cookie value right in that load function without any check against isDataRequest. It’ll set initially, and then anytime you run a server action that page layout will automatically invalidate and re-call your loader, re-setting the cache bust string before your universal loader is called.
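As a quick sketch of that last idea (the file placement here is hypothetical, the cookie is the same one as before), such a load function could skip the isDataRequest check entirely and bust the cache on every re-run:

// e.g. routes/list/+layout.server.js (hypothetical placement)
export function load({ cookies }) {
  // no isDataRequest check: every re-run gets a fresh cache-bust value
  const cacheValue = +new Date();
  cookies.set("todos-cache", cacheValue, { path: "/", httpOnly: false });
  return {
    todosCacheBust: cacheValue,
  };
}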
Let’s wrap-up by building one last feature: a reload button. Let’s give users a button that will clear cache and then reload the current query.
We’ll add a dirt simple form action:
async reloadTodos({ cookies }) {
cookies.set('todos-cache', +new Date(), { path: '/', httpOnly: false });
},
In a real project you probably wouldn’t copy/paste the same code to set the same cookie in the same way in multiple places, but for this post we’ll optimize for simplicity and readability.
Now let’s create a form to post to it:
<form method="POST" action="?/reloadTodos" use:enhance>
<button>Reload todos</button>
</form>
That works!
We could call this done and move on, but let’s improve this solution a bit. Specifically, let’s provide feedback on the page to tell the user the reload is happening. Also, by default, SvelteKit actions invalidate everything. Every layout, page, etc. in the current page’s hierarchy would reload. There might be some data that’s loaded once in the root layout that we don’t need to invalidate or re-load.
So, let’s focus things a bit, and only reload our to-dos when we call this function.
First, let’s pass a function to enhance:
<form method="POST" action="?/reloadTodos" use:enhance={reloadTodos}>
import { enhance } from "$app/forms";
import { invalidate } from "$app/navigation";
let reloading = false;
const reloadTodos = () => {
reloading = true;
return async () => {
invalidate("reload:todos").then(() => {
reloading = false;
});
};
};
We’re setting a new reloading variable to true at the start of this action. And then, in order to override the default behavior of invalidating everything, we return an async function. This function will run when our server action is finished (which just sets a new cookie). Without this async function returned, SvelteKit would invalidate everything. Since we’re providing this function, it will invalidate nothing, so it’s up to us to tell it what to reload. We do this with the invalidate function. We call it with a value of reload:todos. This function returns a promise, which resolves when the invalidation is complete, at which point we set reloading back to false.
Lastly, we need to sync our loader up with this new reload:todos invalidation value. We do that in our loader with the depends function:
export async function load({ fetch, url, setHeaders, depends }) {
depends('reload:todos');
// rest is the same
And that’s that. depends and invalidate are incredibly useful functions. What’s cool is that invalidate doesn’t just take arbitrary values we provide like we did. We can also provide a URL, which SvelteKit will track, and invalidate any loaders that depend on that URL. To that end, if you’re wondering whether we could skip the call to depends and invalidate our /api/todos endpoint altogether, you can, but you have to provide the exact URL, including the search term (and our cache value). So, you could either put together the URL for the current search, or match on the path name, like this:
invalidate(url => url.pathname == "/api/todos");
Personally, I find the solution that uses depends more explicit and simple. But see the docs for more info, of course, and decide for yourself.
If you’d like to see the reload button in action, the code for it is in this branch of the repo.
This was a long post, but hopefully not overwhelming. We dove into various ways we can cache data when using SvelteKit. Much of this was just a matter of using web platform primitives to set the right cache and cookie values, knowledge of which will serve you in web development in general, beyond just SvelteKit.
Moreover, this is something you absolutely do not need all the time. Arguably, you should only reach for these sort of advanced features when you actually need them. If your datastore is serving up data quickly and efficiently, and you’re not dealing with any kind of scaling problems, there’s no sense in bloating your application code with needless complexity doing the things we talked about here.
As always, write clear, clean, simple code, and optimize when necessary. The purpose of this post was to provide you those optimization tools for when you truly need them. I hope you enjoyed it!
Caching Data in SvelteKit originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.