third-party – CSS-Tricks
https://css-tricks.com
Tips, Tricks, and Techniques on using Cascading Style Sheets.

Healthcare, Selling Lemons, and the Price of Developer Experience
https://css-tricks.com/healthcare-selling-lemons-and-the-price-of-developer-experience/
Thu, 09 Feb 2023

Every now and then, a blog post is published and it spurs a reaction or response in others that are, in turn, published as blog posts, and a theme starts to emerge. That’s what happened this past week and the theme developed around the cost of JavaScript frameworks — a cost that, in this case, reveals just how darn important it is to use JavaScript responsibly.

Eric Bailey: Modern Health, frameworks, performance, and harm

This is where the story begins. Eric goes to a health service provider website to book an appointment and gets… a blank screen.

In addition to a terrifying amount of telemetry, Modern Health’s customer-facing experience is delivered using React and Webpack.

If you are familiar with how the web is built, what happened is pretty obvious: A website that over-relies on JavaScript to power its experience had its logic collide with one or more other errant pieces of logic that it summons. This created a deadlock.

If you do not make digital experiences for a living, what happened is not obvious at all. All you see is a tiny fake loading spinner that never stops.

D’oh. This might be mere nuisance — or even laughable — in some situations, but not when someone’s health is on the line:

A person seeking help in a time of crisis does not care about TypeScript, tree shaking, hot module replacement, A/B tests, burndown charts, NPS, OKRs, KPIs, or other startup jargon. Developer experience does not count for shit if the person using the thing they built can’t actually get what they need.

This is the big smack of reality. What happens when our tooling and reporting — the very things that are supposed to make our work more effective — get in the way of the user experience? These are tools that provide insights that can help us anticipate a user’s needs, especially in a time of need.

I realize that pointing the finger at JavaScript frameworks is already divisive. But this goes beyond whether you use React or the framework du jour. It’s about business priorities and developer experience conflicting with user experience.

Alex Russell: The Market for Lemons

Partisans for slow, complex frameworks have successfully marketed lemons as the hot new thing, despite the pervasive failures in their wake, crowding out higher-quality options in the process.

These technologies were initially pitched on the back of “better user experiences”, but have utterly failed to deliver on that promise outside of the high-management-maturity organisations in which they were born. Transplanted into the wider web, these new stacks have proven to be expensive duds.

There’s the rub. Alex ain’t mincing words, but notice that the onus is more on the way frameworks have been marketed to developers than on developers themselves. The sales pitch?

Once the lemon sellers embed the data-light idea that improved “Developer Experience” (“DX”) leads to better user outcomes, improving “DX” became an end unto itself, and many who knew better felt forced to play along. The long lead times in falsifying trickle-down UX was a feature, not a bug; they don’t need you to succeed, only to keep buying.

As marketing goes, the “DX” bait-and-switch is brilliant, but the tech isn’t delivering for anyone but developers.

Tough to stomach, right? No one wants to be duped, and it’s tough to admit a sunken cost when there is one. It gets downright personal if you’ve invested time in a specific piece of tech and effort integrating it into your stack. Development workflows are hard and settling into one is sorta like settling into a house you plan on living in a little while. But you’d want to know if your house was built on what Alex calls a “sandy foundation”.

I’d just like to pause here a moment to say I have no skin in this debate. As a web generalist, I tend to adopt new tools early for familiarity then drop them fast, relegating them to my toolshed until I find a good use for them. In other words, my knowledge is wide but not very deep in one area or thing. HTML, CSS, and JavaScript is my go-to cocktail, but I do care a great deal about user experience and know when to reach for a tool to solve a particular thing.

And let’s acknowledge that not everyone has a say in the matter. Many of us work on managed teams that are prescribed the tools we use. Alex says as much, which I think is important to call out because it’s clear this isn’t meant to be personal. It’s a statement on our priorities and making sure they align with user expectations.

Let’s allow Chris to steer us back to the story…

Chris Coyier: End-To-End Tests with Content Blockers?

So, maybe your app is built on React and it doesn’t matter why it’s that way. There’s still work to do to ensure the app is reliable and accessible.

Just blocking a file shouldn’t totally wreck a website, but it often does! In JavaScript, that may be because the developers have written first-party JavaScript (which I’ll generally allow) that depends on third-party JavaScript (which I’ll generally block).

[…]

If I block resources from tracking-website.com, now my first-party JavaScript is going to throw an error. JavaScript isn’t chill. If an error is thrown, it doesn’t execute more JavaScript further down in the file. If further down in that file is transitionToOnboarding();— that ain’t gonna work.

Maybe it’s worth revisiting your workflow and tweaking it to identify more points of failure.

So here’s an idea: Run your end-to-end tests in browsers that have popular content blockers with default configs installed. 

Doing so may uncover problems like this that stop your customers, and indeed people in need, in their tracks.

Good idea! Hey, anything that helps paint a more realistic picture of how the app is used. That sort of clarity could happen a lot earlier in the process, perhaps before settling on development decisions. Know your users. Why are they using the app? How do they browse the web? Where are they physically located? What problems could get in their way? Chris has a great talk on that, too.
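If you want to try Chris’s suggestion, here is a rough sketch of what the setup could look like with Playwright. Everything in it is my assumption rather than anything from these articles: the ./ublock folder stands in for an unpacked copy of a content blocker, and the URL and selector are placeholders for your own app’s critical path.

// Sketch: a smoke test with a content blocker installed.
// Assumes `npm i playwright` and an unpacked extension in ./ublock.
const { chromium } = require('playwright');

(async () => {
  // Chromium extensions load through a persistent context in Playwright
  const context = await chromium.launchPersistentContext('/tmp/blocker-profile', {
    headless: false, // extensions require a headed browser
    args: [
      '--disable-extensions-except=./ublock',
      '--load-extension=./ublock',
    ],
  });
  const page = await context.newPage();
  await page.goto('https://your-app.example/book'); // placeholder URL
  // The assertion that matters: does the critical path still work
  // when third-party scripts are being blocked?
  await page.waitForSelector('text=Book an appointment');
  await context.close();
})();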


Remove Trackers
https://css-tricks.com/remove-trackers/
Wed, 15 Dec 2021

Earlier this week, I tried out a starter theme for a blog platform. The theme had loads of nice default features: pretty typography, fancy navigation, dark mode widget… and a couple of default trackers I really don’t want just sitting there in a header component, waiting for me to add my account information.

As web development has become increasingly complex, more starters, frameworks, and embeddable tools have been created to simplify our developer experience. Just paste this one line of code into the <head> of your site, and you’ll be a 10× full stack developer in no time. Sometimes we’ll pull out a feature we don’t want or the code we don’t need, but who has the time for a line-by-line review? If you got a feature for free, you might as well use it!

Over-simplifying our setup is risky. When we don’t fully understand what we’ve embedded on our site, we give up control of that feature to an unknown third party. We assume the maintainer knows best because the repository has a load of stars on GitHub or because a big name uses that same script on their site. Somebody must have checked this package is legit, right?

The malicious risk

Malicious scripts for password jacking and other nefarious purposes are sometimes found in popular npm packages. Cryptojacking, where crypto miners are installed on your site without your knowledge, is more common. Just recently, Alibaba Cloud services were targeted to mine the Monero cryptocurrency. If we’re a customer of a hacked service, we might hope our provider lets us know in a timely fashion if our site is hacked. If it’s an open source package, we’ve just got to hope we’re online when someone discovers the vulnerability so we can get our sites updated quickly.

The reliability risk

Many third parties we rely on for critical features are just incompetent or unreliable. We make jokes when some big internet infrastructure goes down and leaves us with no choice but to take it easy at work, but it’s not all fun and games for sites providing vital utilities and information for people.

For many years, we’ve been sold on third-party solutions to improve performance issues on our sites. And, occasionally, you will find a service that genuinely serves your site faster and in more locations across the globe than you can throw together yourself. But the majority of the most popular sites on the web have way more than one third-party script embedded on their site. From my work on Better Blocker, I can tell you that around ten third-party scripts is low, and as many as thirty on one homepage is common, especially on news sites. Do that many third-party scripts on one page really have a positive impact on performance?

The privacy risk

Whether or not the third-party feature we’ve installed on our site is nefarious, incompetent, unreliable, or does its job, it’s always a privacy risk for our site visitors.

In a world where developer experience is often the priority, it’s too easy to forget we’re using these tools to build experiences for other people. And we have a responsibility to build experiences that don’t put our site’s visitors at risk.

Any third-party script, or any resource that can log visitor information, can be considered a tracker. At best, it has tracker potential. Your analytics, fonts, iframes, content delivery networks, CAPTCHAs — they all have the potential to collect information about your site’s visitors. What information, how much, and how often depends on what the feature does and the access you’ve provided it. That information collected about an individual could be used to sell them ads, build sellable profiles of them or even be used to discriminate against them.

I know privacy isn’t a popular topic in the web community. It feels like we’re already near the bottom of the slippery slope… and sometimes it’s easier to give up than to address how reliant we are on privacy-exploiting funding models. But there are small changes we can make to protect our visitors, even if we simply start with our own personal projects.

1. Review the third-party tools you use

Do you really need two different analytics scripts on your site? Can you embed that font locally instead? Reviewing the tools you already use gives you a manageable way to improve the privacy of your project a little at a time. And you’ll get a bonus when your site’s performance improves.
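For the font case, the swap can be as small as downloading the font file and replacing the third-party <link> with your own @font-face rule. A minimal sketch, with a made-up family name and path:

/* Self-hosted font instead of a font CDN (names are placeholders) */
@font-face {
  font-family: "Body Sans";
  src: url("/fonts/body-sans.woff2") format("woff2");
  font-display: swap; /* avoid invisible text while it loads */
}

body {
  font-family: "Body Sans", sans-serif;
}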

2. Use privacy-respecting alternatives

Over the last few years, privacy-respecting alternatives to mainstream technology have become more popular. One of my favorite sites is switching.software, which helps you find alternatives to the popular tools you use every day. Good Reports is another one that explains the reasoning behind each recommendation.

Privacy isn’t as hard as giving up the defaults

In this post, I decided not to go into the legal issues around privacy on the web. Making sites that adhere to laws and regulations around respecting rights is essential, but we’re more likely to make great experiences for the people using our sites if we care about their privacy, rather than worrying about what we can get away with on the legal side.

Privacy is not as hard as giving up the defaults, the tools that save us time, or uncritically copying our colleagues’ approaches. But by removing one tracker at a time, we can make a difference.


Ain’t No Party Like a Third Party
https://css-tricks.com/aint-no-party-like-a-third-party/
Fri, 03 Dec 2021

I’d like to tell you something not to do to make your website better. Don’t add any third-party scripts to your site.

That may sound extreme, but at one time it would’ve been common sense. On today’s modern web it sounds like advice from a tinfoil-hat-wearing conspiracy nut. But just because I’m paranoid doesn’t mean they’re not out to get your users’ data.

All I’m asking is that we treat third-party scripts like third-party cookies. They were a mistake.

Browsers are now beginning to block third-party cookies. Chrome is dragging its heels because the same company that makes the browser also runs an advertising business. But even they can’t resist the tide. Third-party cookies are used almost exclusively for tracking. That was never the plan.

In the beginning, there was no state on the web. A client requested a resource from a server. The server responded. Then they both promptly forgot about it. That made it hard to build shopping carts or log-ins. That’s why we got cookies.

In hindsight, cookies should’ve been limited to a same-origin policy from day one. That would’ve solved the problems of authentication and commerce without opening up a huge security hole that has been exploited to track people as they moved from one website to another. The web went from having no state to having too much.

Now that vulnerability is finally being closed. But only for cookies. I would love it if third-party JavaScript got the same treatment.

When you add any third-party file to your website—an image, a stylesheet, a font—it’s a potential vector for tracking. But third-party JavaScript files go one further. They can execute arbitrary code.

Just take a minute to consider the implications of that: any third-party script on your site is allowing someone else to execute code on your web pages. That’s astonishingly unsafe.

It gets better. One of the pieces of code that this invited intruder can execute is the ability to pull in other third-party scripts.

You might think there’s no harm in adding that one little analytics script. Or that one little Google Tag Manager snippet. It’s such a small piece of code, after all. But in doing that, you’ve handed over your keys to a stranger. And now they’re welcoming in all their shady acquaintances.
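That welcome is a one-liner. Any script already running on your page can do something like this, which is essentially what tag managers exist to do (the URL here is a placeholder):

// All it takes for one third-party script to invite another:
const s = document.createElement('script');
s.src = 'https://another-third-party.example/tracker.js';
document.head.appendChild(s);
// ...and that script can do the same, recursively.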

Request Map Generator is a great tool for visualizing the resources being loaded on any web page. Try pasting in the URL of an interesting article from a news outlet or magazine that someone sent you recently. Then marvel at the sheer size and number of third-party scripts that sneak in via one tiny script element on the original page.

That’s why I recommend that the one thing people can do to make their website better is to not add third-party scripts.

Easier said than done, right? Especially if you’re working on a site that currently relies on third-party tracking for its business model. But that exploitative business model won’t change unless people like us are willing to engage in a campaign of passive resistance.

I know, I know. If you refuse to add that third-party script, your boss will probably say, “Fine, I’ll get someone else to do it. Also, you’re fired.”

This tactic will only work if everyone agrees to do what’s right. We need to have one another’s backs. We need to support one another. The way people support one another in the workplace is through a union.

So I think I’d like to change my answer to the question that’s been posed.

The one thing people can do to make their website better is to unionize.


Proxying Third-Party JavaScript as First-Party JavaScript (and the Potential Effect on Analytics)
https://css-tricks.com/proxying-third-party-javascript-as-first-party-javascript-and-the-potential-effect-on-analytics/
Tue, 02 Nov 2021

First, check out how incredibly easy it is to write a Cloudflare Worker to proxy another URL:

addEventListener("fetch", (event) => {
  event.respondWith(
     fetch("https://css-tricks.com")
  );
});

It doesn’t have any error handling or anything, but hey, it works.

Now imagine how some websites give you a URL to JavaScript in order to do stuff. CodePen does this for our Embedded Pens feature.

That URL is:

https://cpwebassets.codepen.io/assets/embed/ei.js

I can proxy that URL just as easily:
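In other words, it’s the same one-liner Worker from above with the embed URL swapped in; something like:

addEventListener("fetch", (event) => {
  event.respondWith(
    fetch("https://cpwebassets.codepen.io/assets/embed/ei.js")
  );
});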

Doing nothing special, it even serves up the right content-type header and everything.

Cloudflare Workers gives you a URL for them, which is decently nice, but you can also very easily “Add a Route” to a worker on your own website. So, here I’ll make a URL on CSS-Tricks to serve up that Worker. Lookie lookie, it does just what it says it’s going to do:

CSS-Tricks.com serving up a JavaScript file that is actually just proxied from CodePen. I’m probably not going to leave this URL live, it’s just a demo.

So now, I could do….

<script src="/super-real-url/codepen-embeds.js"></script>

Right from css-tricks.com and it’ll load that JavaScript. It will look to the browser like first-party JavaScript, but it will really be proxied third-party JavaScript.

Why? Well nobody is going to block your first-party JavaScript. If you were a bit slimy, you could run all your scripts for ads this way to avoid ad blockers. I have mixed feelings there. I feel like if you wanna block ads you should be able to block ads without having to track down specific scripts on specific sites to do that. On the other hand, proxying some third-party resources sometimes seems kinda fine? Like if it’s your own site and you’re just trying to get around some CORS issue… that would be fine.

More in the middle is something like analytics. I recently blogged “Comparing Google Analytics and Plausible Numbers” where I discussed Plausible, a third-party analytics service that “is built for privacy-conscious site owners.” So, ya know, theoretically trustable and not third-party JavaScript that is terribly worrisome. But still, it doesn’t do anything to really help site visitors and is in the broad category of analytics, so I could see it making its way onto blocklists, thus giving you less accurate information over time as more and more people block it.

The default usage for Plausible is third-party JavaScript

But as we talked about, very few people are going to block first-party JavaScript, so proxying would theoretically deliver more accurate information. In fact, they have docs for proxying. It’s slightly more involved, and it’s over my head as to exactly why, but hey, it works.
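The gist of the proxy approach, sketched here from memory rather than copied from Plausible’s docs (so treat the upstream URLs as assumptions and check their documentation), is to serve the script from a path on your own origin and forward the event endpoint upstream:

// Hedged sketch of the idea; the /stats/* paths are made up.
addEventListener("fetch", (event) => {
  const url = new URL(event.request.url);
  if (url.pathname === "/stats/script.js") {
    event.respondWith(fetch("https://plausible.io/js/plausible.js"));
  } else if (url.pathname === "/stats/event") {
    event.respondWith(
      fetch(new Request("https://plausible.io/api/event", event.request))
    );
  }
});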

I’ve done this proxying as a test. So now I have data from just using the third-party JavaScript directly (from the last article):

Metric            Plausible (No Proxy)   Google Analytics
Unique Visitors   973k                   841k
Pageviews         1.4m                   1.5m
Bounce Rate       82%                    82%
Visit Duration    1m 31s                 1m 24s
Data from one week of non-proxied third-party JavaScript integration

And can compare it to an identical-in-length time period using the proxy:

Metric            Plausible (Proxy)   Google Analytics
Unique Visitors   1.32m               895k
Pageviews         2.03m               1.7m
Bounce Rate       81%                 82%
Visit Duration    1m 35s              1m 24s
Data from one week of proxied third-party JavaScript integration

So the proxy really does highly suggest that doing it that way is far less “blocked” than even out-of-the-box Plausible is. The week tested was 6%¹ busier according to the unchanged Google Analytics. I would have expected to see 15.7% more Unique Visitors that week based on what happened with the non-proxied setup (meaning 1.16m), but instead I saw 1.32m, so the proxy demonstrates a solid 13.8% increase in seeing unique visitors versus a non-proxy setup. And comparing the proxied Plausible setup to Google Analytics directly shows a pretty staggering 32% more unique visitors.

With the non-proxied setup, I actually saw a decrease in pageviews (-6.6%) on Plausible compared to Google Analytics, but with the proxied setup I’m seeing 19.4% more pageviews. So the numbers are pretty wishy-washy but, for this website, suggest something in the ballpark of 20-30% of users blocking Google Analytics.

  1. I always find it so confusing to figure out the percentage increase between two numbers. The trick that ultimately works for my brain is (final - initial) / final * 100.

Securing Your Website With Subresource Integrity
https://css-tricks.com/securing-your-website-with-subresource-integrity/
Mon, 14 Jun 2021

When you load a file from an external server, you’re trusting that the content you request is what you expect it to be. Since you don’t manage the server yourself, you’re relying on the security of yet another third party and increasing the attack surface. Trusting a third party is not inherently bad, but it should certainly be taken into consideration in the context of your website’s security.

A real-world example

This isn’t a purely theoretical danger. Ignoring potential security issues can and has already resulted in serious consequences. On June 4th, 2019, Malwarebytes announced their discovery of a malicious skimmer on the website NBA.com. Due to a compromised Amazon S3 bucket, attackers were able to alter a JavaScript library to steal credit card information from customers.

It’s not only JavaScript that’s worth worrying about, either. CSS is another resource capable of performing dangerous actions such as password stealing, and all it takes is a single compromised third-party server for disaster to strike. But third parties can provide invaluable services that we can’t simply go without, such as CDNs that reduce the total bandwidth usage of a site and serve files to the end-user much faster due to location-based caching. So it’s established that we need to sometimes rely on a host that we have no control over, but we also need to ensure that the content we receive from it is safe. What can we do?

Solution: Subresource Integrity (SRI)

SRI is a security policy that prevents the loading of resources that don’t match an expected hash. By doing this, if an attacker were to gain access to a file and modify its contents to contain malicious code, it wouldn’t match the hash we were expecting and would not execute at all.

Doesn’t HTTPS do that already?

HTTPS is great for security and a must-have for any website, and while it does prevent similar problems (and much more), it only protects against tampering with data-in-transit. If a file were to be tampered with on the host itself, the malicious file would still be sent over HTTPS, doing nothing to prevent the attack.

How does hashing work?

A hashing function takes data of any size as input and returns data of a fixed size as output. Hashing functions would ideally have a uniform distribution. This means that for any input, x, the probability that the output, y, will be any specific possible value is similar to the probability of it being any other value within the range of outputs.

Here’s a metaphor:

Suppose you have a 6-sided die and a list of names. The names, in this case, would be the hash function’s “input” and the number rolled would be the function’s “output.” For each name in the list, you’ll roll the die and keep track of which number each name corresponds to, by writing the number next to the name. If a name is used as input more than once, its corresponding output will always be what it was the first time. For the first name, Alice, you roll 4. For the next, John, you roll 6. Then for Bob, Mary, William, Susan, and Joseph, you get 2, 2, 5, 1, and 1, respectively. If you use “John” as input again, the output will once again be 6. This metaphor describes how hash functions work in essence.

Name (input)   Number rolled (output)
Alice          4
John           6
Bob            2
Mary           2
William        5
Susan          1
Joseph         1

You may have noticed that, for example, Bob and Mary have the same output. For hashing functions, this is called a “collision.” For our example scenario, it inevitably happens. Since we have seven names as inputs and only six possible outputs, we’re guaranteed at least one collision.

A notable difference between this example and a hash function in practice is that practical hash functions are typically deterministic, meaning they don’t make use of randomness like our example does. Rather, a practical hash function predictably maps each input to exactly one output, while still spreading outputs as uniformly as possible across the range.

SRI uses a family of hashing functions called the secure hash algorithm (SHA). This is a family of cryptographic hash functions that includes 160, 256, 384, and 512-bit variants. A cryptographic hash function is a more specific kind of hash function with the properties of being effectively impossible to reverse to find the original input (without already having the corresponding input or brute-forcing), being collision-resistant, and being designed so that a small change in the input alters the entire output. SRI supports the 256, 384, and 512-bit variants of the SHA family.

Here’s an example with SHA-256:

For example, the output for hello is:

2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824

And the output for hell0 (with a zero instead of an o) is:

bdeddd433637173928fe7202b663157c9e1881c3e4da1d45e8fff8fb944a4868

You’ll notice that the slightest change in the input will produce an output that is completely different. This is one of the properties of cryptographic hashes listed earlier.
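You can reproduce the first output yourself with Node’s built-in crypto module:

const { createHash } = require('crypto');

console.log(createHash('sha256').update('hello').digest('hex'));
// 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
console.log(createHash('sha256').update('hell0').digest('hex'));
// an entirely different string, as shown above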

The format you’ll see most frequently for hashes is hexadecimal, which consists of all the decimal digits (0-9) and the letters A through F. One of the benefits of this format is that every two characters represent a byte, and the evenness can be useful for purposes such as color formatting, where a byte represents each color channel. This means a color without an alpha channel can be represented with only six characters (e.g., red = ff0000).

This space efficiency is also why we use hashing instead of comparing the entirety of a file to the data we’re expecting each time. While 256 bits cannot represent all of the data in a file that is greater than 256 bits without compression, the collision resistance of SHA-256 (and 384, 512) ensures that it’s virtually impossible to find two hashes for differing inputs that match. And as for SHA-1, it’s no longer secure, as a collision has been found.

Interestingly, the appeal of compactness is likely one of the reasons that SRI hashes don’t use the hexadecimal format, and instead use base64. This may seem like a strange decision at first, but when we take into consideration the fact that these hashes will be included in the code and that base64 is capable of conveying the same amount of data as hexadecimal while being 33% shorter, it makes sense. A single character of base64 can be in 64 different states, which is 6 bits worth of data, whereas hex can only represent 16 states, or 4 bits worth of data. So if, for example, we want to represent 32 bytes of data (256 bits), we would need 64 characters in hex, but only 44 characters in base64. When using longer hashes, such as SHA-384/512, base64 saves a great deal of space.
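Node makes that count easy to check:

const { randomBytes } = require('crypto');

const bytes = randomBytes(32);                 // 256 bits
console.log(bytes.toString('hex').length);     // 64
console.log(bytes.toString('base64').length);  // 44 (padding included)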

Why does hashing work for SRI?

So let’s imagine there was a JavaScript file hosted on a third-party server that we included in our webpage and we had subresource integrity enabled for it. Now, if an attacker were to modify the file’s data with malicious code, the hash of it would no longer match the expected hash and the file would not execute. Recall that any small change in a file completely changes its corresponding SHA hash, and that hash collisions with SHA-256 and higher are, at the time of this writing, virtually impossible.

Our first SRI hash

So, there are a few methods you can use to compute the SRI hash of a file. One way (and perhaps the simplest) is to use srihash.org, but if you prefer a more programmatic way, you can use:

sha384sum [filename here] | head -c 96 | xxd -r -p | base64
  • sha384sum Computes the SHA-384 hash of a file
  • head -c 96 Trims all but the first 96 characters of the string that is piped into it
    • -c 96 Indicates to trim all but the first 96 characters. We use 96, as it’s the character length of an SHA-384 hash in hexadecimal format
  • xxd -r -p Takes hex input piped into it and converts it into binary
    • -r Tells xxd to receive hex and convert it to binary
    • -p Removes the extra output formatting
  • base64 Simply converts the binary output from xxd to base64

If you decide to use this method, check the table below to see the lengths of each SHA hash.

Hash algorithm   Bits   Bytes   Hex characters
SHA-256          256    32      64
SHA-384          384    48      96
SHA-512          512    64      128

For the head -c [x] command, x will be the number of hex characters for the corresponding algorithm.
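If you’d rather stay out of the shell entirely, the same sha384-… value can be produced with a few lines of Node, since crypto can emit base64 directly:

// Node equivalent of the pipeline above:
const { createHash } = require('crypto');
const { readFileSync } = require('fs');

const hash = createHash('sha384')
  .update(readFileSync('file1.js'))
  .digest('base64');
console.log(`sha384-${hash}`);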

MDN also mentions a command to compute the SRI hash:

shasum -b -a 384 FILENAME.js | awk '{ print $1 }' | xxd -r -p | base64

awk '{print $1}' Finds the first section of a string (separated by tab or space) and passes it to xxd. $1 represents the first segment of the string passed into it.

And if you’re running Windows:

@echo off
set bits=384
openssl dgst -sha%bits% -binary %1% | openssl base64 -A > tmp
set /p a= < tmp
del tmp
echo sha%bits%-%a%
pause
  • @echo off prevents the commands that are running from being displayed. This is particularly helpful for ensuring the terminal doesn’t become cluttered.
  • set bits=384 sets a variable called bits to 384. This will be used a bit later in the script.
  • openssl dgst -sha%bits% -binary %1% | openssl base64 -A > tmp is more complex, so let’s break it down into parts.
    • openssl dgst computes a digest of an input file.
    • -sha%bits% uses the variable, bits, and combines it with the rest of the string to be one of the possible flag values, sha256, sha384, or sha512.
    • -binary outputs the hash as binary data instead of a string format, such as hexadecimal.
    • %1% is the first argument passed to the script when it’s run.
    • The first part of the command hashes the file provided as an argument to the script.
    • | openssl base64 -A > tmp converts the binary output piping through it into base64 and writes it to a file called tmp. -A outputs the base64 onto a single line.
    • set /p a= <tmp stores the contents of the file, tmp, in a variable, a.
    • del tmp deletes the tmp file.
    • echo sha%bits%-%a% will print out the type of SHA hash type, along with the base64 of the input file.
    • pause Prevents the terminal from closing.

SRI in action

Now that we understand how hashing and SRI hashes work, let’s try a concrete example. We’ll create two files:

// file1.js
alert('Hello, world!');

and:

// file2.js
alert('Hi, world!');

Then we’ll compute the SHA-384 SRI hashes for both:

Filename   SHA-384 hash (base64)
file1.js   3frxDlOvLa6GGEUwMh9AowcepHRx/rwFT9VW9yL1wv/OcerR39FEfAUHZRrqaOy2
file2.js   htr1LmWx3PQJIPw5bM9kZKq/FL0jMBuJDxhwdsMHULKybAG5dGURvJIXR9bh5xJ9

Then, let’s create a file named index.html:

<!DOCTYPE html>
<html>
  <head>
    <script type="text/javascript" src="./file1.js" integrity="sha384-3frxDlOvLa6GGEUwMh9AowcepHRx/rwFT9VW9yL1wv/OcerR39FEfAUHZRrqaOy2" crossorigin="anonymous"></script>
    <script type="text/javascript" src="./file2.js" integrity="sha384-htr1LmWx3PQJIPw5bM9kZKq/FL0jMBuJDxhwdsMHULKybAG5dGURvJIXR9bh5xJ9" crossorigin="anonymous"></script>
  </head>
</html>

Place all of these files in the same folder and start a server within that folder (for example, run npx http-server inside the folder containing the files and then open one of the addresses provided by http-server or the server of your choice, such as 127.0.0.1:8080). You should get two alert dialog boxes. The first should say “Hello, world!” and the second, “Hi, world!”

If you modify the contents of the scripts, you’ll notice that they no longer execute. This is subresource integrity in effect. The browser notices that the hash of the requested file does not match the expected hash and refuses to run it.

We can also include multiple hashes for a resource and the strongest hash will be chosen, like so:

<!DOCTYPE html>
<html>
  <head>
    <script
      type="text/javascript"
      src="./file1.js"
      integrity="sha384-3frxDlOvLa6GGEUwMh9AowcepHRx/rwFT9VW9yL1wv/OcerR39FEfAUHZRrqaOy2 sha512-cJpKabWnJLEvkNDvnvX+QcR4ucmGlZjCdkAG4b9n+M16Hd/3MWIhFhJ70RNo7cbzSBcLm1MIMItw
9qks2AU+Tg=="
       crossorigin="anonymous"></script>
    <script 
      type="text/javascript"
      src="./file2.js"
      integrity="sha384-htr1LmWx3PQJIPw5bM9kZKq/FL0jMBuJDxhwdsMHULKybAG5dGURvJIXR9bh5xJ9 sha512-+4U2wdug3VfnGpLL9xju90A+kVEaK2bxCxnyZnd2PYskyl/BTpHnao1FrMONThoWxLmguExF7vNV
WR3BRSzb4g=="
      crossorigin="anonymous"></script>
  </head>
</html>

The browser will choose the hash that is considered to be the strongest and check the file’s hash against it.

Why is there a “crossorigin” attribute?

The crossorigin attribute tells the browser when to send the user credentials with the request for the resource. There are two options to choose from:

Value (crossorigin=)   Description
anonymous              The request will have its credentials mode set to same-origin and its mode set to cors.
use-credentials        The request will have its credentials mode set to include and its mode set to cors.

Request credentials modes mentioned

Credentials mode   Description
same-origin        Credentials will be sent with requests to same-origin domains, and credentials sent from same-origin domains will be used.
include            Credentials will be sent to cross-origin domains as well, and credentials sent from cross-origin domains will be used.

Request modes mentioned

Request mode   Description
cors           The request will be a CORS request, which requires the server to have a defined CORS policy. If not, the request will throw an error.

Why is the “crossorigin” attribute required with subresource integrity?

By default, scripts and stylesheets can be loaded cross-origin. Since subresource integrity prevents a file from loading when the hash of the loaded resource doesn’t match the expected hash, an attacker could load cross-origin resources en masse and test whether loading fails for specific hashes, thereby inferring information about a user that they otherwise wouldn’t be able to access.

When you include the crossorigin attribute, the cross-origin domain must choose to allow requests from the origin the request is being sent from in order for the request to be successful. This prevents cross-origin attacks with subresource integrity.

Using subresource integrity with webpack

It probably sounds like a lot of work to recalculate the SRI hashes of each file every time they are updated, but luckily, there’s a way to automate it. Let’s walk through an example together. You’ll need a few things before you get started.

Node.js and npm

Node.js is a JavaScript runtime that, along with npm (its package manager), will allow us to use webpack. To install it, visit the Node.js website and choose the download that corresponds to your operating system.

Setting up the project

Create a folder and give it any name with mkdir [name of folder]. Then type cd [name of folder] to navigate into it. Now we need to set up the directory as a Node project, so type npm init. It will ask you a few questions, but you can press Enter to skip them since they’re not relevant to our example.

webpack

webpack is a library that allows you to automatically combine your files into one or more bundles. With webpack, we will no longer need to manually update the hashes. Instead, webpack will inject the resources into the HTML with integrity and crossorigin attributes included.

Installing webpack

You’ll need to install webpack and webpack-cli:

npm i --save-dev webpack webpack-cli 

The difference between the two is that webpack contains the core functionalities whereas webpack-cli is for the command line interface.

We’ll edit our package.json to add a scripts section like so:

{
  //... rest of package.json ...,
  "scripts": {
    "dev": "webpack --mode=development"
  }
  //... rest of package.json ...,
}

This enables us to run npm run dev and build our bundle.

Setting up webpack configuration

Next, let’s set up the webpack configuration. This is necessary to tell webpack what files it needs to deal with and how.

First, we’ll need to install html-webpack-plugin, webpack-subresource-integrity, style-loader, and css-loader:

npm i --save-dev html-webpack-plugin webpack-subresource-integrity style-loader css-loader
Package name                    Description
html-webpack-plugin             Creates an HTML file that resources can be injected into
webpack-subresource-integrity   Computes and inserts subresource integrity information into resources such as <script> and <link rel=…>
style-loader                    Applies the CSS styles that we import
css-loader                      Enables us to import CSS files into our JavaScript

Setting up the configuration:

const path              = require('path'),
      HTMLWebpackPlugin = require('html-webpack-plugin'),
      SriPlugin         = require('webpack-subresource-integrity');

module.exports = {
  output: {
    // The output file's name
    filename: 'bundle.js',
    // Where the output file will be placed. Resolves to 
    // the "dist" folder in the directory of the project
    path: path.resolve(__dirname, 'dist'),
    // Configures the "crossorigin" attribute for resources 
    // with subresource integrity injected
    crossOriginLoading: 'anonymous'
  },
  // Used for configuring how various modules (files that 
  // are imported) will be treated
  module: {
    // Configures how specific module types are handled
    rules: [
      {
        // Regular expression to test for the file extension.
        // These loaders will only be activated if they match
        // this expression.
        test: /\.css$/,
        // An array of loaders that will be applied to the file
        use: ['style-loader', 'css-loader'],
        // Prevents the accidental loading of files within the
        // "node_modules" folder
        exclude: /node_modules/
      }
    ]
  },
  // webpack plugins alter the function of webpack itself
  plugins: [
    // Plugin that will inject integrity hashes into index.html
    new SriPlugin({
      // The hash functions used (e.g. 
      // <script integrity="sha256- ... sha384- ..." ...
      hashFuncNames: ['sha384']
    }),
    // Creates an HTML file along with the bundle. We will
    // inject the subresource integrity information into 
    // the resources using webpack-subresource-integrity
    new HTMLWebpackPlugin({
      // The file that will be injected into. We can use 
      // EJS templating within this file, too
      template: path.resolve(__dirname, 'src', 'index.ejs'),
      // Whether or not to insert scripts and other resources
      // into the file dynamically. For our example, we will
      // enable this.
      inject: true
    })
  ]
};

Creating the template

We need to create a template to tell webpack what to inject the bundle and subresource integrity information into. Create a folder named src and, inside it, a file named index.ejs (the config above resolves the template from src, which is also webpack’s default entry folder):

<!DOCTYPE html>
<html>
  <body></body>
</html>

Now, create an index.js in the same src folder with the following script (along with a styles.css next to it, even an empty one, so the import below resolves):

// Imports the CSS stylesheet
import './styles.css'
alert('Hello, world!');

Building the bundle

Type npm run dev in the terminal. You’ll notice that a folder called dist is created and, inside of it, a file called index.html that looks something like this:

<!DOCTYPE HTML>
<html><head><script defer src="bundle.js" integrity="sha384-lb0VJ1IzJzMv+OKd0vumouFgE6NzonQeVbRaTYjum4ql38TdmOYfyJ0czw/X1a9b" crossorigin="anonymous">
</script></head>
  <body>
  </body>
</html>

The CSS will be included as part of the bundle.js file.

This will not work for files loaded from external servers, nor should it, as cross-origin files that need to constantly update would break with subresource integrity enabled.

Thanks for reading!

That’s all for this one. Subresource integrity is a simple and effective addition to ensure you’re loading only what you expect and protecting your users; and remember, security is more than just one solution, so always be on the lookout for more ways to keep your website safe.


Third-Party Components at Their Best
https://css-tricks.com/third-party-components-at-their-best/
Thu, 16 Jan 2020

I’m a fan of the componentization of the web. I think it’s a very nice way to build a website at just about any scale (except, perhaps, the absolute most basic). There is no shortage of opinions about what makes a good component, but say we scope that to third-party for a moment. That is, components that you just use, rather than components that you build yourself as part of your site’s unique setup.

What makes a third-party component good? My favorite attribute of a third-party component is when it takes something hard and makes it easy. Particularly things that recognize and properly handle nuances, or things that you might not even know enough about to get right.

Perhaps you use some component that does pop-up contextual menus for you. It might perform browser edge detection, such as ensuring the menu never appears cut off or off-screen. That’s a tricky little bit of programming that you might not get right if you did it yourself — or even forget to do.

I think of the <Link /> component that React Router has or what’s used on Gatsby sites. It automatically injects aria-current="page" for you on the links when you’re on that page. You can and probably should use that for a styling hook! And you probably would have forgotten to program that if you were handling your own links.
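That hook costs a single attribute selector:

/* Highlight the link for the page we're currently on */
nav a[aria-current="page"] {
  font-weight: bold;
}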

In that same vein, Reach UI Tabs have rigorous accessibility baked into them that you probably wouldn’t get right if you hand-rolled them. This React image component does all sorts of stuff that is relatively difficult to pull off with images, like the complex responsive images syntax, lazy loading, placeholders, etc. This is, in a sense, handing you best practices for “free.”

Here’s a table library that doesn’t even touch UI for you, and instead focuses on other needs you’re likely to have with tables, which is another fascinating approach.

Anyway! Here’s what y’all said when I was asking about this. What makes a third-party component awesome? What do the best of them do (besides the obvious, like good docs and good accessibility)? Some of these might be at odds. I’m just listing what people said they like.

  • Plug-and-play. It should “just work” with minimal config.
  • Lots of editable demos
  • Highly configurable
  • “White label” styling. Don’t bring overly strong design choices.
  • Styled via regular CSS so you can bring your own styling tools
  • Fast
  • Small
  • Is installable via a package manager
  • Can be manually instantiated
  • Can be given a DOM node where it can go
  • Follows a useful versioning scheme
  • Is maintained, particularly for security
  • Has a public roadmap
  • Is framework-agnostic
  • Doesn’t have other dependencies
  • Uses intuitive naming conventions
  • Supports internationalization
  • Has lots of tests

Anything you’d add to that list?


Ten-Ton Widgets
https://css-tricks.com/ten-ton-widgets/
Tue, 15 Oct 2019

At a recent conference talk (sorry, I forget which one), there was a quick example of poor web performance in the form of a third-party widget. The example showed a site that installed the widget in order to add an “email us” button fixed to the bottom right of the viewport. Not even a live-chat widget — just an email thing. It weighed in at something like 470KB, which is straight bananas.

Just in case you are someone who feels trapped into using a ten-ton widget because you aren’t yet sure how to replicate the same functionality, I’ve prepared a small chunk of HTML for you.

It’s 602 Bytes, or about 1/10th of 1% of the size of that other widget — with nothing to download, parse or render block.

See the Pen “Mailto Fixed Position Widget” by Chris Coyier (@chriscoyier) on CodePen.
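The Pen boils down to something like this; a sketch in the same spirit, not necessarily Chris’s exact markup:

<!-- Fixed-position "email us" button: no JavaScript, nothing to block -->
<a href="mailto:hi@example.com"
   style="position: fixed; bottom: 20px; right: 20px;
          padding: 10px 16px; background: #333; color: #fff;
          border-radius: 4px; text-decoration: none;">
  Email us
</a>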

Perhaps on your own site, you’d move the styles out to your stylesheet and add in some hover and focus styles.

It’s not that third-party JavaScript has to be bad. It just has a habit of being bad.

My friend Richard showed me a new product he built called Surfacer. It’s like an RSS widget that you can put anywhere.

See the Pen “Surfacer” by Chris Coyier (@chriscoyier) on CodePen.

That’s a 773 Byte JavaScript file that does a 3.5KB Ajax request for data, and you’d place it at the end of the document to avoid any render-blocking. It would be nice to see more of this kind of thing.


Weekly Platform News: Impact of Third-Party Code, Passive Mixed Content, Countries with the Slowest Connections
https://css-tricks.com/weekly-platform-news-impact-of-third-party-code-passive-mixed-content-countries-with-the-slowest-connections/
Thu, 10 Oct 2019

In this week’s roundup, Lighthouse sheds light on third-party scripts, insecure resources will get blocked on secure sites, and many country connection speeds are still trying to catch up to others… literally.


Measure the impact of third-party code during page load

Lighthouse, Chrome’s built-in auditing tool, now shows a warning when the impact of third-party code on page load performance is too high. The pre-existing “Third-party usage” diagnostic audit will now fail if the total main-thread blocking time caused by third-parties is larger than 250ms during page load.

Note: This feature was added in Lighthouse version 5.3.0, which is currently available in Chrome Canary.

(via Patrick Hulce)

Passive mixed content is coming to an end

Currently, browsers still allow web pages loaded over a secure connection (HTTPS) to load images, videos, and audio over an insecure connection. Such insecurely-loaded resources on securely-loaded pages are known as “passive mixed content,” and they represent a security and privacy risk.

An insecurely-loaded image can allow an attacker to communicate incorrect information to the user (e.g., a fabricated stock chart), mutate client-side state (e.g., set a cookie), or induce the user to take an unintended action (e.g., changing the label on a button).

Starting next February, Chrome will auto-upgrade all passive mixed content to https:, and resources that fail to load over https: will be blocked. According to data from Chrome Beta, auto-upgrade currently fails for about 30% of image loads.

(via Emily Stark)

Fast connections are still not common in many countries

Data from Chrome UX Report shows that there are still many countries and territories in the world where most people access the Internet over a 3G or slower connection. (This includes a number of small island nations that are not visible on this map.)

(via Paul Calvano)

More news…

Read even more news in my weekly Sunday issue that can be delivered to you via email every Monday morning.

More News →


Integrating Third-Party Animation Libraries to a Project
https://css-tricks.com/integrating-third-party-animation-libraries-to-a-project/
Tue, 14 May 2019

Creating CSS-based animations and transitions can be a challenge. They can be complex and time-consuming. Need to move forward with a project with little time to tweak the perfect transition? Consider a third-party CSS animation library with ready-to-go animations waiting to be used. Yet, you might be thinking: What are they? What do they offer? How do I use them?

Well, let’s find out.

A (sort of) brief history of :hover

There was once a time when the concept of a hover state was a trivial example of what is offered today. In fact, the idea of having a reaction to the cursor passing over an element was more or less nonexistent. Different ways to provide this feature were proposed and implemented. This small feature, in a way, opened the door to the idea of CSS being capable of animating elements on the page. Over time, the increasing complexity possible with these features has led to CSS animation libraries.

Macromedia’s Dreamweaver was introduced in December 1997 and offered what was a simple feature, an image swap on hover. This feature was implemented with a JavaScript function that would be embedded in the HTML by the editor. This function was named MM_swapImage() and has become a bit of web design folklore. It was an easy script to use, even outside of Dreamweaver, and its popularity has resulted in it still being in use even today. In my initial research for this article, I found a question pertaining to this function from 2018 on Adobe’s Dreamweaver (Adobe acquired Macromedia in 2005) help forum.

The JavaScript function would swap an image with another image through changing the src attribute based on mouseover and mouseout events. When implemented, it looked something like this:

<a href="#" onMouseOut="MM_swapImgRestore()" onMouseOver="MM_swapImage('ImageName','','newImage.jpg',1)">
  <img src="originalImage.jpg" name="ImageName" width="100" height="100" border="0">
</a>

By today’s standards, it would be fairly easy to accomplish this with JavaScript and many of us could practically do this in our sleep. But consider that JavaScript was still this new scripting language at the time (created in 1995) and sometimes looked and behaved differently from browser to browser. Creating cross-browser JavaScript was not always an easy task and not everyone creating web pages even wrote JavaScript. (Though that has certainly changed.) Dreamweaver offered this functionality through a menu in the editor and the web designer didn’t even need to write the JavaScript. It was based around a set of “behaviors” that could be selected from a list of different options. These options could be filtered by a set of targeted browsers; 3.0 browsers, 4.0 browsers, IE 3.0, IE 4.0, Netscape 3.0, Netscape 4.0. Ah, the good old days.

Choosing Behaviors based on browser versions, circa 1997.
The Swap Image Behaviors panel in Macromedia Dreamweaver 1.2a.

About a year after Dreamweaver was first released, the CSS2 specification from the W3C mentioned :hover in a working draft dated January 1998. It was specifically mentioned in terms of anchor links, but the language suggests it could have possibly been applied to other elements. For most purposes, this pseudo-class would seem to be the beginning of an easy alternative to MM_swapImage(), since background-image was in the same draft. Browser support was an issue, though; it took years before enough browsers properly supported CSS2 to make it a viable option for many web designers. The W3C recommendation of CSS2.1, which could be considered the basis of “modern” CSS as we know it, was finally published in June 2011.
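Once :hover and background-image were reliably supported, the whole MM_swapImage() routine collapsed into a couple of declarations (the class name here is illustrative):

/* The script-free version of the image swap */
.nav-button {
  background-image: url("originalImage.jpg");
}
.nav-button:hover {
  background-image: url("newImage.jpg");
}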

In the middle of all this, jQuery came along in 2006. Thankfully, jQuery went a long way toward smoothing over JavaScript's differences between browsers. One thing of interest for our story: the first version of jQuery offered the animate() method. With this method, you could animate CSS properties on any element at any time, not just on hover. By its sheer popularity, this method exposed the need for a more robust solution baked into the browser, one that wouldn't require a JavaScript library that was not always very performant due to browser limitations.
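For a sense of what that looked like, here's a minimal sketch of the animate() method; the selector and values are purely illustrative:

// Fade the element and nudge it over the course of 400 milliseconds
// (animating left assumes the element is positioned, e.g. position: relative)
$('.box').animate(
  {
    opacity: 0.5,
    left: '+=50'
  },
  400,
  function () {
    // Runs once the animation completes
    console.log('Animation complete');
  }
);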

The :hover pseudo-class only offered a hard swap from one state to another, with no support for a smooth transition, and it couldn't animate changes outside of something as basic as hovering over an element. jQuery's animate() method offered those features, paved the way, and there was no going back. As things go in the dynamic world of web development, a working draft for solving this was well underway before the CSS2.1 recommendation was published. The first working draft of the CSS Transitions Module Level 3 was published by the W3C in March 2009, and the first working draft of the CSS Animations Module Level 3 appeared at roughly the same time. Both of these CSS modules were still in working draft status as of October 2018, but of course, we are already making heavy use of them.

So, what first started as a JavaScript function provided by a third party, just for a simple hover state, has led to CSS transitions and animations that allow for elaborate and complex effects, a level of complexity that many developers wouldn't necessarily wish to take on when they need to move quickly on new projects. We have gone full circle: today, many third-party CSS animation libraries have been created to offset this complexity.

Three different types of third-party animation libraries

We are in this new world capable of powerful, exciting, and complex animations in our web pages and apps. Several different ideas have come to the forefront on how to approach these new tasks. It’s not that one approach is better than any other; indeed, there is a good bit of overlap in each. The difference is more about how we implement and write code for them. Some are full-blown JavaScript-only libraries while others are CSS-only collections.

JavaScript libraries

Libraries that operate solely through JavaScript often offer capabilities beyond what common CSS animations provide. Usually, there is overlap as the libraries may actually use CSS features as part of their engine, but that would be abstracted away in favor of the API. Examples of such libraries are Greensock and Anime.js. You can see the extent of what they offer by looking at the demos they provide (Greensock has a nice collection over on CodePen). They’re mostly intended for highly complex animations, but can be useful for more basic animations as well.
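To give a taste of the API style these JavaScript libraries offer, here's a minimal Anime.js sketch; the .box selector and the values are purely illustrative:

// Slide and spin every element matching .box over 800 milliseconds
anime({
  targets: '.box',
  translateX: 250,
  rotate: '1turn',
  duration: 800,
  easing: 'easeInOutQuad'
});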

JavaScript and CSS libraries

There are third-party libraries that primarily consist of CSS classes but provide some JavaScript for easy use of those classes in your projects. One library, Micron.js, provides both a JavaScript API and data attributes that can be used on elements. This type of library allows for easy use of pre-built animations that you simply select. Another library, Motion UI, is intended to be used with a JavaScript framework, though it works on a similar notion: a mixture of a JavaScript API, pre-built classes, and data attributes. These types of libraries provide pre-built animations and an easy way to wire them up.

CSS libraries

The third kind of library is CSS-only. Typically, this is just a CSS file that you load via a link tag in your HTML. You then apply and remove specific CSS classes to make use of the provided animations. Two examples of this type of library are Animate.css and Animista. That said, there are even major differences between these two particular libraries. Animate.css is a total CSS package while Animista provides a slick interface to choose the animations you want with provided code. These libraries are often easy to implement but you have to write code to make use of them. These are the type of libraries this article will focus on.
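In practice, using one amounts to little more than this; the CDN URL below is only a placeholder, so check the library's documentation for the real one:

<!-- Load the library's stylesheet (placeholder URL) -->
<link rel="stylesheet" href="https://cdn.example.com/animate.min.css">

<!-- Apply the library's classes to play an animation -->
<div class="animated bounce">Look at me bounce!</div>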

Three different types of CSS animations

Yes, there’s a pattern; the rule of threes is everywhere, after all.

In most cases, there are three types of animations to consider when making use of third-party libraries. Each type suits a different purpose and has different ways to make use of them.

Hover animations

[Illustration: a black button on the left and an orange button with a mouse cursor over it, demonstrating a hover effect.]

These animations are intended to be involved in some sort of hover state. They’re often used with buttons, but another possibility is using them to highlight sections the cursor happens to be on. They can also be used for focus states.

Attention animations

[Illustration: a webpage of gray boxes with a red alert at the top of the screen, an element seeking attention.]

These animations are intended for elements that normally sit outside of the visual center of the person viewing the page. An animation is applied to a section of the display that needs attention. Such animations could be subtle, for things that need eventual but not urgent attention. They could also be highly distracting, for when immediate attention is required.

Transition animations

[Illustration: concentric circles stacked vertically, going from gray to black in ascending order.]

These animations are often intended to have one element replace another in the view, but they can be used on a single element as well. They will usually include an animation for "leaving" the view and a mirrored animation for "entering" the view. Think of fading out and fading in. This is commonly needed in single-page apps where, for example, one set of data transitions to another.

So, let's go over examples of each of these types of animations and how one might use them.

Let’s hover it up!

Some libraries may already be set up for hover effects, while others have hover states as their main purpose. One such library is Hover.css, a drop-in solution that provides a nice range of hover effects applied via class names; a quick sketch of its usage follows. Sometimes, though, we want to make use of an animation in a library that doesn't directly support the :hover pseudo-class because that might conflict with global styles.
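For the drop-in case, usage is hardly more than markup; hvr-grow is one of the effect classes Hover.css ships with:

<!-- The hvr-grow class scales the button up while hovered -->
<button class="hvr-grow">Hover over me</button>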

For this example, I shall use the tada animation that Animate.css provides. It’s intended more as an attention seeker, but it will nicely suffice for this example. If you were to look through the CSS of the library, you’ll find that there’s no :hover pseudo-class to be found. So, we’ll have to make it work in that manner on our own.

The tada class by itself is simply this:

.tada {
  animation-name: tada;
}

A low-lift approach to make this react to a hover state is to make our own local copy of the class, but extend it just a bit. Normally, Animate.css is a drop-in solution, so we won’t necessarily have the option to edit the original CSS file; although you could have your own local copy of the file if you wish. Therefore, we only create the code we require to be different and let the library handle the rest.

.tada-hover:hover {
  animation-name: tada;
}

We probably shouldn't override the original class name in case we actually want to use it elsewhere. So, instead, we make a variation that we can attach the :hover pseudo-class to. Now we just apply the library's required animated class along with our custom tada-hover class to an element, and it will play that animation on hover.
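In the markup, that combination looks something like this:

<!-- animated comes from Animate.css; tada-hover is our custom class -->
<div class="animated tada-hover">Hover me for a tada!</div>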

If you'd rather not create a custom class this way and prefer a JavaScript solution instead, there's a relatively easy way to handle that. Oddly enough, it's a similar method to the MM_swapImage() function from Dreamweaver we discussed earlier.

// Select the element with the ID #js_example
var js_example = document.querySelector('#js_example');

// When the element is hovered...
js_example.addEventListener('mouseover', function () {
  // ...add the library's two classes: animated and tada...
  this.classList.add('animated', 'tada');
});

// ...then remove those classes when the mouse leaves the element.
js_example.addEventListener('mouseout', function () {
  this.classList.remove('animated', 'tada');
});

There are actually multiple ways to handle this, depending on the context. Here, we create event listeners that wait for the mouseover and mouseout events. The listeners then apply and remove the library's animated and tada classes as needed. As you can see, extending a third-party library just a bit to suit our needs can be accomplished in a relatively easy fashion.

Can I please have your attention?

Another type of animation that third-party libraries can assist with is the attention seeker. These animations are useful when you wish to draw attention to an element or section of the page, such as notifications or unfilled required form inputs. They can be subtle or direct: subtle for something that needs eventual attention but doesn't have to be resolved immediately, and direct for something that needs resolution now.

Some libraries include such animations as part of the whole package, while others are built specifically for this purpose. Both Animate.css and Animista have attention-seeking animations, but they are not the main purpose of those libraries. An example of a library built for this purpose is CSShake. Which library to use depends on the needs of the project and how much time you wish to invest in implementing it. For example, CSShake is ready to go with little trouble on your part; simply apply its classes as needed. That said, if you are already using a library such as Animate.css, you're likely not going to want to introduce a second library (for performance, dependencies, and such).

So, a library such as Animate.css can be used, but it needs a little more setup. The library's GitHub page has examples of how to go about this. Depending on the needs of the project, implementing these animations as attention seekers is rather straightforward.

For a subtle animation, we could have one that repeats a set number of times and then stops. This usually involves adding the library's classes, applying an animation-iteration-count property in our CSS, and waiting for the animationend event to clear the library's classes.

Here’s a simple example that follows the same pattern we looked at earlier for hover states:

var pulse = document.querySelector('#pulse');

function playPulse () {
  pulse.classList.add('animated', 'pulse');
}

pulse.addEventListener('animationend', function () {
  pulse.classList.remove('animated', 'pulse');
});

playPulse();

The library classes are applied when the playPulse function is called, and an event listener for the animationend event removes them afterward. Normally, the animation would only play once, but we might want it to repeat several times before stopping. Animate.css doesn't provide a class for this, but it's easy enough to apply a CSS property to our element to handle it.

#pulse {
  animation-iteration-count: 3; /* Stop after three times */
}

This way, the animation will play three times before stopping. If we needed to stop the animation sooner, we could manually remove the library classes outside of the animationend listener. The library's documentation actually provides an example of a reusable function that applies the classes and removes them after the animation, very similar to the code above. It would even be rather easy to extend it to apply the iteration count to the element.
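Here's a hedged sketch of what such an extended helper might look like; the function name and the inline animationIterationCount style are my own additions, not something Animate.css provides:

// Hypothetical helper: plays an Animate.css-style animation a set number of times
function animateElement (element, animationName, iterations) {
  // Repeat the animation the requested number of times (defaults to once)
  element.style.animationIterationCount = iterations || 1;
  element.classList.add('animated', animationName);

  element.addEventListener('animationend', function handler () {
    // Clean up so the animation can be triggered again later
    element.classList.remove('animated', animationName);
    element.style.animationIterationCount = '';
    element.removeEventListener('animationend', handler);
  });
}

// Usage: pulse the element three times
animateElement(document.querySelector('#pulse'), 'pulse', 3);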

For a more direct approach, consider an infinite animation that won't stop until some sort of user interaction takes place. Let's pretend that clicking the element starts the animation and clicking it again stops it. Keep in mind that how you choose to start and stop the animation is up to you.

var bounce = document.querySelector('#bounce');

bounce.addEventListener('click', function () {
  if (!bounce.classList.contains('animated')) {
    bounce.classList.add('animated', 'bounce', 'infinite');
  } else {
    bounce.classList.remove('animated', 'bounce', 'infinite');
  }
});

Simple enough. Clicking the element checks whether the library's animated class has been applied. If it hasn't, we add the library classes to start the animation. If it has, we remove them to stop the animation. Notice the infinite class at the end of the classList call. Thankfully, Animate.css provides this for us out of the box. If your library of choice doesn't offer such a class, then this is what you'd need in your CSS:

#bounce {
  animation-iteration-count: infinite;
}

Here’s a demo showing how this code behaves:

See the Pen 3rd Party Animation Libraries: Attention Seekers by Travis Almand (@talmand) on CodePen.

Moving stuff out of the way

While researching this article, I found that transitions (not to be confused with CSS transitions) are easily the most common type of animation in third-party libraries. These are simple animations built to let an element enter or leave the page. A very common pattern in modern single-page applications is to have one element leave the page while another replaces it by entering. Think of the first element fading out and the second fading in. This could be replacing old data with new data, moving to the next panel in a sequence, or moving from one page to another with a router. Both Sarah Drasner and Georgy Marchuk have excellent examples of these types of transitions.

For the most part, animation libraries do not provide the means to add and remove elements during the transition animations. The libraries that ship additional JavaScript may have this functionality, but since most do not, let's discuss how to handle it ourselves.

Inserting a single element

For this example, we’ll again use Animate.css as our library. In this case, I’ll be using the fadeInDown animation.

Now, please keep in mind there are many ways to handle inserting an element into the DOM, and I don't wish to cover them all here. I'll just show how to leverage an animation to make the insertion nicer and more natural than the element simply popping into view. For Animate.css (and likely many other libraries), inserting an element with an animation is quite easy.

let insertElement = document.createElement('div');
insertElement.innerText = 'this div is inserted';
insertElement.classList.add('animated', 'fadeInDown');

insertElement.addEventListener('animationend', function () {
  this.classList.remove('animated', 'fadeInDown');
});

document.body.append(insertElement);

How you decide to create the element doesn't much matter; you just need to be sure the needed classes are applied before inserting it. Then it'll animate nicely into place. I also included an event listener for the animationend event that removes the classes. As usual, there are several ways to go about this, and this is likely the most direct. The reason for removing the classes is to make it easier to reverse the process if we wish; we wouldn't want the entering animation competing with a leaving animation.

Removing a single element

Removing a single element is sort of similar to inserting one. The element already exists, so we just apply the desired animation classes. Then, on the animationend event, we remove the element from the DOM. In this example, we'll use the fadeOutDown animation from Animate.css because it works nicely with the fadeInDown animation.

let removeElement = document.querySelector('#removeElement');

removeElement.addEventListener('animationend', function () {
  this.remove();
});

removeElement.classList.add('animated', 'fadeOutDown');

As you can see, we target the element, add the classes, and remove the element at the end of the animation.

An issue with all of this is that inserting and removing elements this way causes the other elements on the page to jump around as they adjust. You'll have to account for that in some way, most likely with CSS and the layout of the page, so the elements keep a constant space.
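One low-tech way to account for it is to reserve room on a wrapping element; the .insert-container class and the 120px value here are hypothetical, sized to whatever your content needs:

/* Reserve space so surrounding content doesn't reflow
   when the animated element is inserted or removed */
.insert-container {
  min-height: 120px;
}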

Get out of my way, I’m coming through!

Now we are going to swap two elements, one leaving while another enters. There are several ways of handling this, but I’ll provide an example that’s essentially combining the previous two examples.

See the Pen 3rd Party Animation Libraries: Transitioning Two Elements by Travis Almand (@talmand) on CodePen.

I’ll go over the JavaScript in parts to explain how it works. First, we cache a reference to a button and the container for the two elements. Then, we create two boxes that’ll be swapped inside the container.

let button = document.querySelector('button');
let container = document.querySelector('.container');
let box1 = document.createElement('div');
let box2 = document.createElement('div');

I have a generic function that removes the animation classes for each animationend event.

let removeClasses = function () {
  box1.classList.remove('animated', 'fadeOutRight', 'fadeInLeft');
  box2.classList.remove('animated', 'fadeOutRight', 'fadeInLeft');
}

The next function is the bulk of the swapping functionality. First, we determine which box is currently displayed. Based on that, we can deduce the leaving and entering elements. We first remove the leaving element's event listener that calls the switchElements function, so we don't find ourselves in an animation loop. Then we remove the leaving element from the container, since its animation has finished. Next, we add the animation classes to the entering element and append it to the container so it animates into place.

let switchElements = function () {
  let currentElement = document.querySelector('.container .box');
  let leaveElement = currentElement.classList.contains('box1') ? box1 : box2;
  let enterElement = leaveElement === box1 ? box2 : box1;
  
  leaveElement.removeEventListener('animationend', switchElements);
  leaveElement.remove();
  enterElement.classList.add('animated', 'fadeInLeft');
  container.append(enterElement);
}

We need to do some general setup for the two boxes. Plus, we append the first box to the container.

box1.classList.add('box', 'box1');
box1.addEventListener('animationend', removeClasses);
box2.classList.add('box', 'box2');
box2.addEventListener('animationend', removeClasses);

container.appendChild(box1);

Finally, we have a click event listener on our button that does the toggling. How these sequences of events are started is technically up to you; for this example, I decided on a simple button click. The handler figures out which box is currently displayed, and therefore which one is leaving, and applies the appropriate classes to animate it out. It then adds an event listener for the animationend event that calls the switchElements function shown above to handle the actual swap.

button.addEventListener('click', function () {
  let currentElement = document.querySelector('.container .box');
  
  if (currentElement.classList.contains('box1')) {
    box1.classList.add('animated', 'fadeOutRight');
    box1.addEventListener('animationend', switchElements);
  } else {
    box2.classList.add('animated', 'fadeOutRight');
    box2.addEventListener('animationend', switchElements);
  }
});

One obvious issue with this example is that it's extremely hard-coded for this one situation, though it can be easily extended and adjusted for different situations. So, the example is useful in terms of understanding one way of handling such a task. Thankfully, some animation libraries, like Motion UI, provide some JavaScript to help with element transitions. Another thing to consider is that some JavaScript frameworks, such as Vue, have functionality to assist with animating element transitions.

I have also created another example that provides a more flexible system. It consists of a container that stores references to the leave and enter animations in data attributes. The container holds two elements that switch places on command, and the animations are easily changed via those data attributes with JavaScript. I also have two containers in the demo: one using Animate.css and the other using Animista for its animations. It's a large example, so I won't examine the code here, but it is heavily commented, so take a look if it's of interest.

See the Pen 3rd Party Animation Libraries: Custom Transition Example by Travis Almand (@talmand) on CodePen.

Take a moment to consider…

Does everyone actually want to see all these animations? Some people may consider our animations over-the-top and unnecessary, but for others, they can actually cause problems. Some time ago, WebKit introduced the prefers-reduced-motion media query to assist with possible Vestibular Spectrum Disorder issues. Eric Bailey also posted a nice introduction to the media query, as well as a follow-up with considerations for best practices. Definitely read these.

So, does your animation library of choice support the prefers-reduced-motion media query? If the documentation doesn't say that it does, then you may have to assume it does not. That said, it's rather easy to check the library's code for the media query. For instance, Animate.css has it in the _base.scss partial file.

@media (print), (prefers-reduced-motion) {
  .animated {
    animation: unset !important;
    transition: none !important;
  }
}

This bit of code also provides an excellent example of how to do this yourself if the library doesn't support it. If the library uses a common class, like the animated class in Animate.css, then you can just target that class. If it does not, you'll have to target the actual animation class or create your own custom class for the purpose.

.scale-up-center {
  animation: scale-up-center 0.4s cubic-bezier(0.390, 0.575, 0.565, 1.000) both;
}

@keyframes scale-up-center {
  0% { transform: scale(0.5); }
  100% { transform: scale(1); }
}

@media (print), (prefers-reduced-motion) {
  .scale-up-center {
    animation: unset !important;
    transition: none !important;
  }
}

As you can see, I just used the block provided by Animate.css and targeted the animation class from Animista. Keep in mind that you'll have to repeat this for every animation class you choose to use from the library. In his follow-up piece, though, Eric suggests treating all animations as progressive enhancement, which could be one way to both reduce code and create a more accessible user experience.
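Flipping the logic around gives a rough sense of that progressive enhancement idea: declare the animation only when the user hasn't asked for reduced motion. This is a sketch built on the Animista class above, not something either library does out of the box:

/* The animation applies only when reduced motion is not requested */
@media (prefers-reduced-motion: no-preference) {
  .scale-up-center {
    animation: scale-up-center 0.4s cubic-bezier(0.390, 0.575, 0.565, 1.000) both;
  }
}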

Let a framework do the heavy lifting for you

In many ways, frameworks such as React and Vue can make using third-party CSS animations easier than vanilla JavaScript, mainly because you don't have to wire up the class swaps or animationend events manually. You can leverage the functionality the frameworks already provide. The beauty of using frameworks is that they also offer several different ways of handling these animations, depending on the needs of the project. The examples below are only a small sample of the options.

Hover effects

For hover effects, setting them up with CSS (as shown above) is the better way to go. If you really need a JavaScript solution in a framework, such as Vue, it would be something like this:

<button @mouseover="over($event, 'tada')" @mouseleave="leave($event, 'tada')">
  tada
</button>

methods: {
  over: function(e, type) {
    e.target.classList.add('animated', type);
  },
  leave: function (e, type) {
    e.target.classList.remove('animated', type);
  }
}

Not really that much different than the vanilla JavaScript solution above. Also, as before, there are many ways of handling this.

Attention seekers

Setting up the attention seekers is actually even easier. In this case, we’re just applying the classes we require, again, using Vue as an example:

<div :class="{animated: isPulse, pulse: isPulse}">pulse</div>

<div :class="[{animated: isBounce, bounce: isBounce}, 'infinite']">bounce</div>

In the pulse example, whenever the isPulse boolean is true, the two classes are applied. In the bounce example, whenever the isBounce boolean is true, the animated and bounce classes are applied. The infinite class is applied regardless, so we get our never-ending bounce until isBounce flips back to false.

Transitions

Thankfully, Vue's transition component provides an easy way to use third-party animation classes as custom transition classes. Other libraries, such as React, offer similar features through add-ons. To make use of the animation classes in Vue, we just declare them on the transition component.

<transition
  enter-active-class="animated fadeInDown"
  leave-active-class="animated fadeOutDown"
  mode="out-in"
>
  <div v-if="toggle" key="if">if example</div>
  <div v-else key="else">else example</div>
</transition>

Using Animate.css, we merely apply the necessary classes. For enter-active, we apply the required animated class along with fadeInDown. For leave-active, we apply the required animated class along with fadeOutDown. During the transition sequence, these classes are inserted at the appropriate times. Vue handles the inserting and removing of the classes for us.

For a more complex example of using third-party animation libraries in a JavaScript framework, explore this project:

See the Pen KLKdJy by Travis Almand (@talmand) on CodePen.

Join the party!

This is a small taste of the possibilities that await your project, as there are many, many third-party CSS animation libraries out there to explore. Some are thorough, eccentric, specific, obnoxious, or just straightforward. There are libraries for complex JavaScript animations, such as Greensock or Anime.js. There are even libraries that will target the individual characters in an element.

Hopefully all this will inspire you to play with these libraries on the way to creating your own CSS animations.


Integrating Third-Party Animation Libraries to a Project originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

Hard Costs of Third-Party Scripts https://css-tricks.com/hard-costs-of-third-party-scripts/ Mon, 22 Oct 2018 21:15:52 +0000 http://css-tricks.com/?p=277695
Dave Rupert:

Every client I have averages ~30 third-party scripts but discussions about reducing them with stakeholders end in “What if we load them all async?” This is a good rebuttal because there are right and wrong ways to load third-party scripts, but there is still a cost, a cost that’s passed on to the user. And that’s what I want to investigate.

Yes, performance is a major concern, but it's not just the loading time and final weight of those scripts; there are all sorts of concerns. Dave lists privacy, render blocking, fighting for CPU time, fighting for network connection threads, data and battery costs, and more.
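For context, the async rebuttal in the quote above refers to script attributes like these; a quick sketch with a placeholder URL, and, as Dave argues, a mitigation rather than a fix:

<!-- Downloads in parallel, executes as soon as it arrives -->
<script src="https://example.com/third-party.js" async></script>

<!-- Downloads in parallel, executes after the document has been parsed -->
<script src="https://example.com/third-party.js" defer></script>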

Dave’s partner Trent Walton is also deep into thinking about third-party scripts, which he talked about a bit on the latest ShopTalk Show.

Check out Paolo Mioni’s investigation of a single script and the nefarious things it can do.



Hard Costs of Third-Party Scripts originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
