Thank You (2020 Edition) | CSS-Tricks


Heck of a year, eh? Like we do every year, I’d like to give you a huge thanks for reading CSS-Tricks, and recap the year. More downs than ups, all told. Here at CSS-Tricks, it was more of a wash. Allow me to share some numbers, milestones, and thoughts with you about our journey through 2020.

Let’s do the basic numbers

The site saw 94m pageviews this year. Last year we lost a smidge of pageviews (from 91m to 90m), so it’s nice to see that number go back up again, setting a new high record. Now I don’t have to tell myself stories like “jeez usage of browser extensions that block Google Analytics must be up.” Hitting 100m pageviews will be a nice milestone some future year. This number, long term, climbs very slowly. It’s a good reminder to me how much time, money, and energy are required to just maintain the traffic to a content site, let alone attempt to drive growth.

I have Cloudflare in front of the site this year. I think that’s a good idea generally, but especially now that they have specific technology to make it extra good. I’m a fan of pushing as much to the edge as we can, and now it’s not only static assets that are CDN-served but the content as well.

I mention that because now I have access to Cloudflare analytics, so I can compare across tools. I can’t see a whole year of data on Cloudflare, but comparing last month’s unique visitors between the two services, I see 6,847,979 unique visitors on Cloudflare compared to 6,125,272 sessions (or 7,449,921 unique page views — I’m not sure which is directly comparable) on Google Analytics. They seem pretty close. Closer than I thought they would be, since Google Analytics requires client-side JavaScript and Cloudflare analytics are, presumably, gathered at the DNS level, and thus not blockable like client-side JavaScript. I’ve turned off the WordPress-powered analytics for now, as having three places for analytics seemed a bit much, although I might flip them back on because, without them, I can’t see top on-site search results, which I definitely like to have.

Traffic that comes from organic search was 77.7% this year, versus 80.6% last year. A 3% swing feels pretty large, yet it’s almost entirely accounted for by a corresponding rise from 9% to 12% in “direct” traffic. I have no idea what to make of that, but I suppose I don’t mind better diversification in sources.

I find these end-of-year looks at the numbers sorta fun, but I’m generally not a big analytics guy. Last year I wrote:

There is a bunch of numbers I just don’t feel like looking at this year. We’ve traditionally done stuff like what countries people are from, what browsers they use (Chrome-dominant), mobile usage (weirdly low), and things like that. This year, I just don’t care. This is a website. It’s for everyone in the world that cares to read it, in whatever country they are in and whatever browser they want to.

I feel even more apathetic toward generalized analytics numbers this year. I think analytics are awesome when you have a question in mind you want an answer to, where the numbers help find that answer. Or for numbers that are obviously important and impactful to your site that you determined for yourself. Just poking around at numbers can fool you into thinking you’re gathering important insights and making considered decisions when you’re kinda just picking your nose.

One question that does interest me is what the most popular content is by traffic (we’ll get to that in a bit). Looking at the most popular content (by actual traffic) gives me a sense of what brings people here. Bringing traffic to the site is a goal. While we don’t generally sell sponsorship/advertising based on page views directly, those numbers matter to sponsors and correlate fairly directly with what we can charge.

Another bit of data I care about is what people search for that bring them to the site. Here’s how that breaks down:

  • Top 10: Various combinations of terms that have to do with flexbox and grid layout
  • Mixed into the top 20: Various alterations of the site’s name

From there, results 10 through 100 are “random specific CSS things.” Beyond 100 is where SVG, JavaScript, design stuff, and CSS are sprinkled into the mix. The 251st-ranked search term is the first time React shows up. The insights here: (1) our layout guides continue to do very well, (2) a lot of people like to get to the site first, then find what they need, and (3) searches for library-specific content aren’t a particularly common way to land here.

Top posts of the year

Thanks to Jacob, we can look at analytics data based on the year the content was written (and a few other bits of metadata).

Here’s an interesting thing. In 2019, articles written in 2019 generated about 6.3m page views. Those same articles, in 2020, generated 7.3m page views. Neat, huh? The articles drove more traffic as they aged.

Articles written in 2020 generated 12m pageviews. Here’s the top 10:

  1. CSS-Only Carousel
  2. Fluid Width Video (cheat, as this was written a few years ago as a stand-alone page, and I only moved it to the blog in 2020)
  3. How to Create an Animated Countdown Timer With HTML, CSS and JavaScript
  4. A Complete Guide to Links and Buttons
  5. The Hooks of React Router
  6. A Complete Guide to Dark Mode on the Web
  7. Neumorphism and CSS
  8. A Complete Guide to Data Attributes
  9. Why JavaScript is Eating HTML
  10. Front-End Challenges

Interesting backstory on that list. I dug into Google Analytics and created that Top 10 list based on the data it was showing me in a custom report, which Jacob taught me to do. Serendipitously, Jacob emailed me right after that to show me the Top 10 that he calculated, and it was slightly different than mine. Then I went back and re-ran my custom report, and it was slightly different than both the others. Weird! Jacob knew why. When you’re looking at a huge dataset like this in Google Analytics, they will only sample the data used for the report. It will show you a “yellow badge” and tell you what percentage of the data the report is based on. 500,000 sessions is the max, which is only 0.7% of what we needed to look at. That’s low enough that it accounted for the different lists. Jacob had already done some exporting and jiggering of the data to account for that, so the above list is what’s accurate to 100% of all sessions.

The top articles on the entire site from any year:

  1. The Complete Guide to Flexbox
  2. The Complete Guide to Grid
  3. Using SVG
  4. Perfect Full Page Background Images
  5. The Shapes of CSS
  6. Media Queries for Standard Devices
  7. Change Color of SVG on Hover
  8. CSS Triangle
  9. How to Scale SVG
  10. Using @font-face

Nothing from the Almanac made the top 10 but, interestingly, the next 20 or so right after that are heavily sprinkled with random Almanac articles. All told, the Almanac accounts for about 14.8% of all traffic.

Two other very cool things we did with content this year:

  1. Published Jay Hoffman’s series on Web History, which includes audio adaptations from Jeremy Keith that are served as a podcast.
  2. Published our end-of-year series like we did last year.

One of the many reasons I love being on WordPress is how easy it is to spin up series like these. All we did was toss up a category-specific template file and slap on a little custom CSS. That gives the posts a cool landing page all to themselves while keeping them part of the rest of the “flow” of the site (RSS, search, tags, etc.).


Perhaps the slight increase in traffic was COVID-related? With more people turning to coding as a good option for working from home, maybe there are more people searching for coding help. Who knows.

What we definitely found was that nearly every sponsor we work with, understandably, tightened their belt. Add in advertising plans with us that were reduced or canceled and, as a rough estimate, I’d say we’re down 25% in sponsorship revenue. That would be pretty devastating except for the fact that we try not to keep too many eggs in one basket.

Feels like a good time to mention that if your company is doing well and could benefit from reaching people who build websites, let’s talk sponsorship.

I’m trying to diversify revenue somewhat, even on this site alone. For example…


We’ve been using WooCommerce here to sell a couple of things.

Posters, mainly. A literal physical bit of printed paper sent through the post to you. The posters are unique designs made from the incredible illustrations that Lynn Fisher created for the flexbox and grid guides. We essentially “drop ship” them, meaning they are printed and shipped on-demand by another company. So, you buy them from this site, but another company takes it from there and does all the rest of it. That’s appealing because the amount of work is so low, but there are two major downsides: (1) Customer support for the orders becomes our problem and I’d say ~20% of orders need some kind of support, and (2) the profit margin is fairly slim compared to what it would be if we took on more of the work ourselves.

We also sell MVP Supporter memberships, which are great in that they don’t require much ongoing work. The trick there is just making sure it is valuable enough for people to buy, which we’ll have to work more on over time. For now, you basically get a book, video downloads, and no ads.

Loose math, eCommerce made up 5% of the lost revenue. Long way to go there, but it’s probably worth the journey as my guess is that this kind of revenue is less volatile.

I’m also still optimistic about Web Monetization in the long-term (here’s the last post I wrote on it). But right now, it is not a solution that makes for a significant new revenue stream. Optimistically, it’s something like 0.05% of revenue.

Social media

As far as website traffic drivers go, social media isn’t particularly huge at 2.2% of all traffic (down from 2.3% last year). That’s about where it always is, whether or not we put much effort into it over the course of a year, which is exactly why I try not to spend energy there. What little effort we do expend, 95% of it goes toward Twitter. We lean on Jetpack’s social automation features, mostly. It is still cool to have @css as a handle, and we are closing in on half a million followers. You’d think that would be worth something. We’ll have to figure that out someday.

When we hand-write Tweets (rare), those are still the ones with the most potential. I only do that when it feels like something fun to do, because even though they can get the most engagement, the time/value thing still just doesn’t make it worthwhile.

Example hand-written tweet

Most of our tweets are just auto-generated when a new post is published. And we’ve been doing that for so long, I think that’s what the Twitter followers largely expect anyway, and that’s fine. We do have the ability to customize the Tweet before it goes out, which we try to, but usually don’t.

Example Auto-Generated Tweet

The other 5% of effort is Instagram just because it’s kinda fun. I don’t even wanna think about trying to extract direct value from Instagram. Maybe if we had a lot more products for direct sale it would make sense. But for now, just random tips and stuff to hold an audience.

Example Instagram Post


I did 22 screencasts this year. That’s a lot compared to the last many years! I’m not sure if I’ll be as ambitious in 2021, but I suspect I might be, because my setup at my desk is getting pretty good for doing them and my editing skills are slowly improving. I enjoy doing them, and it’s an occasional income stream (my favorite being pairing up with someone from a company and digging into their technology together). Plus, we got that cool new intro for our videos done by dina Amin.

The screencasts are published on the site and to iTunes as a videocast, but the primary place people watch is YouTube. I guess we could consider YouTube “social media” but I find that screencasts are more like “real content” in a way that I don’t with other social media. They are certainly much more time-consuming to produce and I hope more evergreen than a one-off tweet or something.


We hit 81,765 subscribers to the newsletter. On one hand, that’s a respectable number. On the other, it’s far too low considering how gosh darn good it is! I was hoping we’d hit 100k this year, but I didn’t actually do all that much to encourage signups, so that’s on me. I don’t think we missed a single week, so that’s a win, and considering we were at 65,000 last year, that’s still pretty good growth.

Y’all left 4,322 comments on the site this year. That’s down a touch from 4,710, but still decent, averaging almost 12 a day.

I rollercoaster emotionally about comments. One day thinking they are too much trouble, requiring too much moderation and time to filter the junk. The vitriol can be so intense (on a site about code, wow) that some days I just wanna turn them off. Other times, I’m glad for the extra insight and corrections. Not to mention, hey, that’s content and content is good. We’ve never not had comments, so, hey, let’s keep ’em for now.

I absolutely always encourage your helpful, insightful, and kind comments, and promise to never publish rude or wrong comments (my call).

The forums completely shut down this year (into “read only” mode), so commenting activity from that didn’t exactly make its way over to the blog area. Closing the forums still feels… weird. I liked having a place to send (especially beginners) to ask questions. But, we just do not have the resources (or business model) to support safe and active forums. So closed they will remain, for now.

Goal review

  • 100k on email list. Fail on that one. That was kind of a moonshot anyway, and we never executed any sort of plan to help get there. For example, we could encourage it on social media more. We could attempt to buy ads elsewhere with a call to action to sign up. We could offer incentives to new subscribers somehow. We might do those things, or we might not. I don’t feel strongly enough right now to make it a goal for next year.
  • Two guides. We crushed this one. We published 9 guides. I consider this stuff our most valuable content. While I don’t want to only do this kind of content (because I think it’s fun to think of CSS-Tricks as a daily newspaper-style site as well), I want to put most of our effort here.
  • Have an obvious focus on how-to referential technical content. I think we did pretty well here. Having this in mind all the time, both for ourselves and for guest posts, meant making sure we were showcasing how to use tech, with less focus on things like guest editorials, which are, unfortunately, our least useful content. I’d like to be even stricter on this going forward. We’re so far along in our journey on this site. The expectation people have is that this site has answers for their technical front-end questions, so there is no reason not to lean entirely into that.
  • Get on Gutenberg. We also crushed it here. I think in the first month of the year I had us using Gutenberg on new content, and within a few months after that, we had Gutenberg enabled for all posts. It was work! And we still have a long way to go, as most posts on the site haven’t been “converted” into blocks, which is still not a brainless task. But, I consider it a fantastic success. I think Gutenberg is largely a damn pleasure to work with, making content authoring far more pleasurable, productive, and interesting.

New goals

  • Three guides. I know we did nine this year, but the goal was only two. I actually have ideas for three more, so I’ll make three the goal. Related side goal: I’d like to try to make mini-books out of some of these guides and either sell them individually or make it part of the MVP Supporter subscription.
  • Stay focused on how-to technical content around our strengths. Stuff like useful tips. Technical news with context. Advice on best practices. I want to rein us in a bit more toward our strengths. HTML, CSS, and JavaScript stuff is high on that list of strengths, but not every framework, serverless technology, or build tool is. I’d like us to be more careful about not publishing things unless we can strongly vouch for them.
  • Complete all missing Almanac entries. There are a good 15-20 Almanac articles that could exist that don’t yet. Like we have place-items in there, but not place-content or place-self. Then there is esoteric stuff, like :current, :past, and :future time-dimensional pseudo-classes which, frankly, I don’t even really understand but are a thing. If you wanna help, please reach out.

Wrapping up

Thank you, again, for being a reader of this site. I hope these little peeks at our business somehow help yours. And I really hope 2021 is better than 2020, for all of us.



Top 4 Magento Development Trends to Adopt in 2021


Top Magento Development Trends for 2021

To strike the right chord with your potential customers, it is critically important to dig into the latest innovations and keep your Magento 2 store current by adopting the latest Magento development trends.

Here are the top four Magento development trends that you should consider adopting in your Magento 2 eCommerce store in 2021.

1. Live Streaming

Live streaming has recently become a head-turner in marketing strategies, mainly because COVID-19 has pushed most, if not all, customer interactions online.

Plus, video marketing is already increasing traffic for 87% of eCommerce stores, so adding live video to your Magento 2 store is only going to drive more attention and engagement.

When you communicate with your potential customers in real-time, they feel a sense of inclusivity, as live streaming can get their questions heard and answered.

Apart from this, live streaming also currently has the highest viewer retention rates, which means this medium can be of great help for spreading your brand awareness, announcements, and other important information.

As a result, all this can help you win the trust of potential customers and make more sales.

2. Voice Search

Voice search has already taken the eCommerce industry by storm. 

In recent years, voice search has contributed a lot toward enhancing the customer shopping experience in online stores.

According to a recent TechCrunch report, the number of smart speaker users will grow by a whopping 21% in 2021. 

Based on this data, integrating voice search into your Magento 2 eCommerce store can be an important way to stay ahead of the competition.

3. Smart Search Functionality

There is no point in launching an eCommerce store if people can’t find products through search. People nowadays don’t have the patience to sift through dozens of category pages to find the products they want.

So, it is extremely important for you to provide smart search functionality that allows your customers to easily find their desired products with a simple search query.

In fact, a study also found that 30% of eCommerce store visitors use the site search bar. 

Another study found that 12% of unsuccessful searches lead to potential customers going to a competitor’s online store.

Most importantly, on-site searchers are 216% more likely to turn into paying customers.

The point is, you need to integrate a solid search engine like ElasticSearch into your Magento 2 store. ElasticSearch is one of the most popular and powerful search engines in the world. And with the ElasticSearch Magento 2 Extension, you can effortlessly integrate ElasticSearch into your Magento eCommerce store with all the modern search features and functionalities.

4. Magento PWA

Magento PWA is one of the key Magento development trends. 

Magento recently launched Magento PWA Studio, a set of developer tools that allows Magento 2 store owners to transform their standard Magento 2 websites into Progressive Web Apps.

A PWA is basically a website in the form of a native mobile application. And nowadays, it is considered to be the best option to serve mobile customers, as opposed to developing and launching separate native mobile applications for Android and iOS.

Final Thoughts

The eCommerce industry is reaching new heights with more and more consumers shopping online. 

But, at the same time, more and more retailers are also taking their businesses online by adopting the eCommerce business model.

Therefore, it is critically important for you to adopt the latest Magento development trends if you want to survive and thrive in today’s competitive market.

Adopting these trends will help you gain an edge in your market and come out on top. 



Give your Eleventy Site Superpowers with Environment Variables

Eleventy is increasing in popularity because it allows us to create nice, simple websites, but also because it’s so developer-friendly. We can build large-scale, complex projects with it, too. In this tutorial, we’re going to demonstrate that expansive capability by putting together a powerful and human-friendly environment variable solution.

What are environment variables?

Environment variables are handy variables/configuration values that are defined within the environment that your code finds itself in.

For example, say you have a WordPress site: you’re probably going to want to connect to one database on your live site and a different one for your staging and local sites. We can hard-code these values in wp-config.php but a good way of keeping the connection details a secret and making it easier to keep your code in source control, such as Git, is defining these away from your code.

Here’s a standard-edition WordPress wp-config.php snippet with hardcoded values:


define( 'DB_NAME', 'my_cool_db' );
define( 'DB_USER', 'root' );
define( 'DB_PASSWORD', 'root' );
define( 'DB_HOST', 'localhost' );

Using the same example of a wp-config.php file, we can introduce a tool like phpdotenv and change it to something like this instead, and define the values away from the code:


$dotenv = Dotenv\Dotenv::createImmutable(__DIR__);
$dotenv->load();

define( 'DB_NAME', $_ENV['DB_NAME'] );
define( 'DB_USER', $_ENV['DB_USER'] );
define( 'DB_PASSWORD', $_ENV['DB_PASSWORD'] );
define( 'DB_HOST', $_ENV['DB_HOST'] );

A way to define these environment variable values is by using a .env file, which is a text file that is commonly ignored by source control.

We then scoop up those values — which might be unavailable to your code by default, using a tool such as dotenv or phpdotenv. Tools like dotenv are super useful because you could define these variables in an .env file, a Docker script or deploy script and it’ll just work — which is my favorite type of tool!

The reason we tend to ignore these in source control (via .gitignore) is because they often contain secret keys or database connection information. Ideally, you want to keep that away from any remote repository, such as GitHub, to keep details as safe as possible.
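Under the hood, the .env format is nothing fancy: just KEY=value lines, with blank lines and # comments ignored. A toy parser (purely illustrative, nowhere near as robust as the real dotenv or phpdotenv, which also handle quoting and variable expansion) might look like this:

```javascript
// Toy parser for the KEY=value format that dotenv-style tools read.
// Illustrative only; the real packages handle quoting, expansion, and more.
function parseEnv(text) {
  const out = {};
  for (const line of text.split('\n')) {
    const trimmed = line.trim();
    // Skip blank lines and comments.
    if (!trimmed || trimmed.startsWith('#')) continue;
    const eq = trimmed.indexOf('=');
    if (eq === -1) continue; // not a KEY=value line
    out[trimmed.slice(0, eq).trim()] = trimmed.slice(eq + 1).trim();
  }
  return out;
}

const sample = '# secrets live here\nDB_NAME=my_cool_db\nDB_USER=root\n';
console.log(parseEnv(sample).DB_NAME); // 'my_cool_db'
```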

Getting started

For this tutorial, I’ve made some starter files to save us all a bit of time. It’s a base, bare-bones Eleventy site with all of the boring bits done for us.

Step one of this tutorial is to download the starter files and unzip them wherever you want to work with them. Once the files are unzipped, open up the folder in your terminal and run npm install. Once you’ve done that, run npm start. When you open your browser at http://localhost:8080, it should look like this:

Also, while we’re setting up: create a new, empty file called .env and add it to the root of your base files folder.

Creating a friendly interface

Environment variables are often really shouty, because we use all caps, which can get irritating. What I prefer to do is create a JavaScript interface that consumes these values and exports them as something human-friendly and namespaced, so you know just by looking at the code that you’re using environment variables.

Let’s take a value like HELLO=hi there, which might be defined in our .env file. To access this, we use process.env.HELLO, which after a few calls, gets a bit tiresome. What if that value is not defined, either? It’s handy to provide a fallback for these scenarios. Using a JavaScript setup, we can do this sort of thing:


module.exports = {
  hello: process.env.HELLO || 'Hello not set, but hi, anyway 👋'
};

What we are doing here is looking for that environment variable and setting a default value, if needed, using the OR operator (||) to return a value if it’s not defined. Then, in our templates, we can do {{ env.hello }}.
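As an aside, it’s worth knowing how far that || fallback reaches. It kicks in for any falsy value, including an empty string, which can sometimes be a legitimate setting. The newer ?? (nullish coalescing) operator only falls back for undefined or null. A quick sketch (using a stand-in fakeEnv object, not part of the tutorial’s code):

```javascript
// || falls back for ANY falsy value; ?? only for undefined/null (Node 14+).
const fakeEnv = { HELLO: '' };

console.log(fakeEnv.HELLO || 'fallback');   // 'fallback' (empty string is falsy)
console.log(fakeEnv.HELLO ?? 'fallback');   // '' (defined, so it is kept)
console.log(fakeEnv.MISSING ?? 'fallback'); // 'fallback'
```

For this tutorial || is fine, since an empty HELLO is no more useful than an unset one.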

Now that we know how this technique works, let’s make it happen. In our starter files folder, there is a directory called src/_data with an empty env.js file in it. Open it up and add the following code to it:


module.exports = {
  otherSiteUrl: process.env.OTHER_SITE_URL || '',
  hello: process.env.HELLO || 'Hello not set, but hi, anyway 👋'
};

Because our data file is called env.js, we can access it in our templates with the env prefix. If we wanted our environment variables to be prefixed with environment, we would change the name of our data file to environment.js. You can read more on the Eleventy documentation.

We’ve got our hello value here and also an otherSiteUrl value which we use to allow people to see the different versions of our site, based on their environment variable configs. This setup uses Eleventy JavaScript Data Files which allow us to run JavaScript and return the output as static data. They even support asynchronous code! These JavaScript Data Files are probably my favorite Eleventy feature.

Now that we have this JavaScript interface set up, let’s head over to our content and implement some variables. Open up src/ and at the bottom of the file, add the following:

Here’s an example: The environment variable HELLO is currently: “{{ env.hello }}”. This is called with {% raw %}{{ env.hello }}{% endraw %}.

Pretty cool, right? We can use these variables right in our content with Eleventy! Now, when you define or change the value of HELLO in your .env file and restart the npm start task, you’ll see the content update.

Your site should look like this now:

You might be wondering what the heck {% raw %} is. It’s a Nunjucks tag that allows you to define areas that it should ignore. Without it, Nunjucks would try to evaluate the example {{ env.hello }} part.

Modifying image base paths

That first example was cool, but let’s really start exploring how this approach can be useful. To help with performance and varied image format support, we often front our production images with some sort of CDN, and these CDNs will often serve images pulled directly from your site, such as from your /images folder. That’s exactly what I do on Piccalilli with ImgIX, but these CDNs don’t have access to the local version of the site, so while developing you’ll probably want your images served locally. Being able to switch between CDN and local images is handy.

The solution to this problem is almost trivial with environment variables — especially with Eleventy and dotenv, because if the environment variables are not defined at the point of usage, no errors are thrown.

Open up src/_data/env.js and add the following properties to the object:

imageBase: process.env.IMAGE_BASE || '/images/',
imageProps: process.env.IMAGE_PROPS,

We’re using a default value for imageBase of /images/ so that if IMAGE_BASE is not defined, our local images can be found. We don’t do the same for imageProps because they can be empty unless we need them.

Open up _includes/base.njk and, after the <h1>{{ title }}</h1> bit, add the following:

<img src="{{ env.imageBase }}mountains.jpg{{ env.imageProps }}" alt="Some lush mountains at sunset" />

By default, this will load /images/mountains.jpg. Cool! Now, open up the .env file and add the following to it:


If you stop Eleventy (Ctrl+C in terminal) and then run npm start again, then view source in your browser, the rendered image should look like this:

<img src="" alt="Some lush mountains at sunset" />

This means we can leverage the CodePen asset optimizations only when we need them.
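The switch the template is doing can be sketched in plain JavaScript with a hypothetical imageSrc helper (not part of the tutorial’s files). Nunjucks renders undefined values as empty strings, which is why it is safe to leave IMAGE_PROPS unset locally:

```javascript
// Sketch of the template's string concatenation: base path + file name +
// optional query-string props. The CDN URL below is a made-up example.
function imageSrc(env, file) {
  return `${env.imageBase}${file}${env.imageProps || ''}`;
}

console.log(imageSrc({ imageBase: '/images/' }, 'mountains.jpg'));
// '/images/mountains.jpg'

console.log(
  imageSrc(
    { imageBase: 'https://cdn.example.com/images/', imageProps: '?auto=format' },
    'mountains.jpg'
  )
);
// 'https://cdn.example.com/images/mountains.jpg?auto=format'
```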

Powering private and premium content with Eleventy

We can also use environment variables to conditionally render content based on a mode, such as a private mode. This is an important capability for me, personally, because I have an Eleventy course and a CSS book, both powered by Eleventy, that only show premium content to those who have paid for it. There’s all sorts of tech magic happening behind the scenes with Service Workers and APIs, but core to it all is that content can be conditionally rendered based on env.mode in our JavaScript interface.

Let’s add that to our example now. Open up src/_data/env.js and add the following to the object:

mode: process.env.MODE || 'public'

This setup means that by default, the mode is public. Now, open up src/ and add the following to the bottom of the file:

{% if env.mode === 'private' %}

## This is secret content that only shows if we’re in private mode.

This is called with {% raw %}`{{ env.mode }}`{% endraw %}. This is great for doing special private builds of the site for people that pay for content, for example.

{% endif %}

If you refresh your local version, you won’t be able to see that content that we just added. This is working perfectly for us — especially because we want to protect it. So now, let’s show it, using environment variables. Open up .env and add the following to it:

MODE=private

Now, restart Eleventy and reload the site. You should now see something like this:

You can run this conditional rendering within the template, too. For example, you could make all of the page content private and render a paywall instead. That’s what happens if you go to my course without a license: you’re presented with a call to action to buy it:

Fun mode

This has hopefully been really useful content for you so far, so let’s expand on what we’ve learned and have some fun with it!

I want to finish by making a “fun mode” which completely alters the design to something more… fun. Open up src/_includes/base.njk, and just before the closing </head> tag, add the following:

{% if env.funMode %}
  <link rel="stylesheet" href="" />
  <style>
    body {
      font-family: 'Comic Sans MS', cursive;
      background: #fc427b;
      color: #391129;
    }

    .fun {
      font-family: 'Lobster';
    }

    .fun {
      font-size: 2rem;
      max-width: 40rem;
      margin: 0 auto 3rem auto;
      background: #feb7cd;
      border: 2px dotted #fea47f;
      padding: 2rem;
      text-align: center;
    }
  </style>
{% endif %}

This snippet is looking to see if our funMode environment variable is true and if it is, it’s adding some “fun” CSS.

Still in base.njk, just before the opening <article> tag, add the following code:

{% if env.funMode %}
  <div class="fun">
    <p>🎉 <strong>Fun mode enabled!</strong> 🎉</p>
  </div>
{% endif %}

This code is using the same logic and rendering a fun banner if funMode is true. Let’s create our environment variable interface for that now. Open up src/_data/env.js and add the following to the exported object:

funMode: process.env.FUN_MODE

If funMode is not defined, it will act as false, because undefined is a falsy value.
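One thing to keep in mind: environment variables always arrive as strings (or undefined), so the string 'false' is truthy. The simple check works here only because FUN_MODE is either set or not set at all. A quick sketch of the gotcha, with a hypothetical stricter alternative:

```javascript
// Env vars are strings or undefined, never real booleans.
console.log(Boolean(undefined)); // false: unset variable
console.log(Boolean('true'));    // true
console.log(Boolean('false'));   // also true! Any non-empty string is truthy.

// A stricter, hypothetical check compares the string explicitly.
const funModeStrict = (value) => value === 'true';
console.log(funModeStrict('true'));    // true
console.log(funModeStrict('false'));   // false
console.log(funModeStrict(undefined)); // false
```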

Next, open up your .env file and add the following to it:

FUN_MODE=true

Now, restart the Eleventy task and reload your browser. It should look like this:

Pretty loud, huh?! Even though this design looks pretty awful (read: rad), I hope it demonstrates how much you can change with this environment setup.

Wrapping up

We’ve created three versions of the same site, running the same code to see all the differences:

  1. Standard site
  2. Private content visible
  3. Fun mode

All of these sites are powered by identical code with the only difference between each site being some environment variables which, for this example, I have defined in my Netlify dashboard.

I hope that this technique will open up all sorts of possibilities for you, using the best static site generator, Eleventy!


3 Steps to Enable Client Hints on Your Image CDN

The goal of Client Hints is to give the browser a standard framework for informing the server about the context in which a web experience is being delivered.

HTTP Client Hints are a proposed set of HTTP Header Fields for proactive content negotiation in the Hypertext Transfer Protocol. The client can advertise information about itself through these fields so the server can determine which resources should be included in its response.


With that information (or hints), the server can provide optimizations that help to improve the web experience, also known as Content Negotiation. For images, a better web experience means faster loading, less data payload, and a streamlined codebase.  

Client Hints have inherent value on their own, but they can also be used together with the responsive images syntax to make responsive images less verbose and easier to maintain. With Client Hints, the server side (in this case, an image CDN) can resize and optimize the image in real time.

Client Hints have been around for a while – since Chrome 35 in 2015, actually. However, support was partly pulled in Chrome 67 due to privacy concerns. As a result, access to Client Hints was limited to certain Chrome versions on Android and to first-party origins in other Chrome versions.

Now, finally, Google has enabled Client Hints by default for all devices in Chrome version 84!

Let’s see what’s required to make use of Client Hints.

1) Choose an Image CDN that Supports Client Hints

Not many image CDNs support Client Hints. Max Firtman did an extensive evaluation of image CDNs and identified the ones that support them. ImageEngine stands out as the best image CDN with full Client Hints support in addition to more advanced features.

ImageEngine works like most CDNs by mapping the origin of the images, typically a web location or an S3 bucket, to a domain name pointing to the CDN address. Sign up for a free trial here. After signing up, you’ll get a dedicated ImageEngine delivery address. The delivery address can also be customized to your own domain by creating a CNAME DNS record.

In the following examples, we will assume that the ImageEngine delivery address has been mapped in the DNS.

2) Make the Browser Send Client Hints

Now that you have an ImageEngine account with full Client Hints support, we need to tell the browser to start sending the Client Hints to ImageEngine. This means the web server has to reply to requests with two specific HTTP headers. These can be added manually, or via a plugin if the site is running WordPress.

How the headers are added manually depends on your setup:

  • Your hosting provider or CDN probably offers a setting to alter HTTP headers.
  • You can add the headers in your site’s code. How this is done depends on the programming language or framework you’re using; try googling “add http headers <your programming language or framework>”.
  • If your host runs Apache and lets you edit the .htaccess configuration file, you can add the headers there.
  • You can also add the headers to the markup inside the <head> element using an http-equiv meta element: <meta http-equiv="Accept-CH" content="DPR, Width, Viewport-Width">

The first header is the Accept-CH header. It tells the browser to start sending client hints:

Accept-CH: viewport-width, width, dpr

At the time of writing, the mechanism for delegating Client Hints to 3rd parties is named Feature Policies. However, it’s about to be renamed to Permission Policies.

Then, to make sure the Client Hints are sent along with the image requests to the ImageEngine delivery address obtained in step 1, this feature policy header must be added to server responses as well.

A Feature / Permission policy is an HTTP header specifying which origins (domains) have access to which browser features.

Feature-Policy: ch-viewport-width;ch-width;ch-dpr;ch-device-memory;ch-rtt;ch-ect;ch-downlink

The origin listed for each feature must be the actual address referring to ImageEngine, whether it’s the generic or your customized delivery address.

Pitfall 1: Note the ch- prefix. The notation is ch- + the client hint name.

Pitfall 2: Use lowercase! Even if docs and examples say, for example, Accept-CH: DPR, make sure to use ch-dpr in the policy header! 

Once the accept-ch and feature-policy header are set, the response from the server will look something like the screen capture above.

3) Set Sizes Attribute

Last, but not least, the <img> elements in the markup must be updated. 

Most importantly, the src of the <img> element must point to the ImageEngine delivery address. Make sure this is the same address used in step 1 and mentioned in the feature-policy header in step 2.

Next, add the sizes attribute to the <img> elements. sizes is part of the responsive images syntax that enables the browser to calculate the specific pixel size an image is displayed at. This size is sent to the image CDN in the width client hint.

<img src="" sizes="200px" width="200" alt="image">

If the width set in CSS or width attribute is known, one can “retrofit” responsive images by copying that value into sizes.
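To make the arithmetic concrete, the width hint the browser sends is (roughly) the layout width from sizes scaled up to physical pixels by the device pixel ratio. A tiny, hypothetical helper illustrates the relationship:

```javascript
// The "width" client hint is the image's CSS layout width (from
// the sizes attribute) converted to physical pixels using the
// device pixel ratio (the "dpr" hint). Hypothetical helper for
// illustration only; the browser does this calculation itself.
function widthHint(sizesPx, dpr) {
  return Math.ceil(sizesPx * dpr);
}
```

For example, a sizes="200px" slot on a 2x screen yields a width hint of 400, so the CDN can serve a 400-pixel-wide image.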

Once these small changes have been made to the <img> element, requests to ImageEngine for images will contain the client hints, as illustrated in the screen capture above. The width header tells ImageEngine the exact size the image needs to be to fit perfectly on the web page.

Enjoy Pixel-Perfect Images

Now, if tested in a supporting browser, like Chrome version 84 or newer, the client hints should be flowing through to ImageEngine.

The <img> element is short and concise, and is rigged to provide even better-adapted responsive images than a classic client-side implementation without client hints would. Less code, no need to produce multiple sizes of the images on your web server, and the resource selection is still made by the browser but served by the image CDN. The best of both worlds!

You can see the plumbing in action in this reference implementation. Make sure to test it in Chrome version 84 or newer!

By using an image CDN like ImageEngine that supports client hints, sites will never serve bigger images than necessary when the steps above are followed. As a bonus, ImageEngine will also optimize and convert images to formats like WebP, JPEG 2000 and MP4 in addition to the more common image formats.

The examples above also include a few network- and connectivity-related Client Hints, which ImageEngine can likewise use to guide its optimizations.

What about browsers not supporting Client Hints? ImageEngine will still optimize and resize images thanks to advanced device detection at the CDN edge. This way, all devices and browsers will always get appropriately sized images.

ImageEngine offers a free trial, and anyone can sign up here to start implementing client hints on their website.



Web Performance Calendar » A font-display setting for slow c…

In this post I’m going to jot down a few thoughts on a question that has been intriguing me for a while now: “What font-display setting should be used to improve the experience for all users?” Before I get into that, let’s go over a few of the basics.

Typography on the web

The world wide web is possibly the greatest infrastructure mankind has ever created. Its ability to communicate and convey ideas to a global audience is unprecedented. As the saying goes on the web, “content is king” and the written word makes up a huge percentage of that content. Even non-decorative images should have words in the markup to describe them (alt attributes). The web would be a pretty boring place if it were full of pages filled with paragraph after paragraph of similar text.

Thankfully, the web is a visual medium with many opportunities to convey ideas through the use of design and typography. For years designers were desperate to bring custom typography to the web and break out from the limitation of ‘web-safe’ fonts. First there was the ‘image replacement’ technique, then there was Scalable Inman Flash Replacement (sIFR), next came cufón. But all these techniques had flaws.

Eventually a browser native font loading technique called @font-face started to land in browsers around 2008 (although it first appeared in the CSS2 specification as far back as 1998). This was a great addition to the web, solving many of the issues related to accessibility and maintainability in the older techniques listed above. But @font-face came with its own set of challenges. Primarily the different font formats, and the way they are loaded.

Font formats

The web wouldn’t be the web if there weren’t competing standards between browsers. The same is true for fonts: EOT for Internet Explorer, SVG for older versions of Android, WOFF/WOFF2 for modern browsers. That’s not to mention TTF and OTF fonts which also had varying browser support. In the end the @font-face rule was very flexible. List all the versions available and the browser will choose the first version it supports.

Font loading

A real problem came when the rendering of these fonts was considered. How do you display text on a web page using a custom font, when the font has yet to be downloaded by the browser? For many years there was no consistency in how browsers would render text styled with a custom web font. There’s a whole article by Zach Leatherman – ‘A Historical look at FOUT and FOIT‘ dedicated to the subject that is well worth a read. FOIT (Flash of Invisible Text) acts much like a placeholder for the text. It worked in that it allowed the page to be rendered, but should a user be on a slow connection (or should the font files be large), they could be looking at the empty shell of a webpage for many seconds, unable to complete the primary reason for visiting the website (reading the content). This leads to a poor experience. FOUT (Flash of Unstyled Text), on the other hand, is when text is rendered with the default system fonts before the webfont has loaded. Developers really had no control over this process. Then in January 2016 a new @font-face descriptor was introduced to browsers (behind an experimental flag): font-display.

What is font-display?

So what is the font-display descriptor?

Before I explain, I just want to mention that font-display has no effect on the way a font is downloaded from the server. No matter what this setting is, the browser will request the fonts and the server will send them.

But what font-display does control is how a font is presented (or even displayed 🙂 ) to a user during the webfont download phase. By setting a specific value on the @font-face at-rule, the browser will respond differently. It comes with 5 possible values which are explained below:


auto

This is where the browser will choose the font display strategy itself.


block

The browser will render invisible placeholder text for a short time. Depending on the browser used, this can be up to 3 seconds. Beyond 3 seconds, if the font hasn’t downloaded, the next font in the CSS font stack is used. The browser then has an infinite amount of time in which the fallback font can be swapped out for the webfont (once downloaded).


swap

The browser will render the fallback font almost immediately (within 100ms or less, as recommended in the spec). Just like block, the browser has an infinite amount of time in which to swap in the webfont once it has downloaded.


fallback

The browser will render invisible placeholder text for a very short period of time (~100ms), then (if the webfont hasn’t loaded) it will render the next font in the CSS font stack. The key difference here is that the swap period is only 3 seconds: if the webfont downloads after the 3-second cut-off, the text will never be re-rendered and the fallback font in the CSS font stack will continue to be used.


optional

optional is very aggressive in terms of rendering rules. The browser first displays invisible placeholder text and gives the webfont 100ms to render. Beyond the first 100ms, the next font in the CSS font stack is used. With optional, once the fallback font has been rendered, the webfont won’t ever be rendered; there is no swap period.
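The block and swap periods described above can be summarised in a toy model. To be clear, this is purely an illustration of the timelines, not a browser API, and it assumes the spec’s recommended 3 s / 100 ms timings:

```javascript
// Toy model of font-display timelines. Given a display value, the
// time (ms) at which the webfont finishes downloading, and a point
// in time t, return what the user sees at t: invisible placeholder
// text, the fallback font, or the webfont.
function rendered(display, fontLoadedAt, t) {
  // Block and swap period lengths per value, using the spec's
  // recommended timings (assumption: real browsers may differ).
  const periods = {
    block:    { block: 3000, swap: Infinity },
    swap:     { block: 0,    swap: Infinity },
    fallback: { block: 100,  swap: 3000 },
    optional: { block: 100,  swap: 0 },
  };
  const { block, swap } = periods[display];
  // The webfont shows once loaded, but only if it arrived inside
  // the block + swap window.
  if (t >= fontLoadedAt && fontLoadedAt <= block + swap) return "webfont";
  // During the block period, text styled with an unloaded font is
  // rendered with an invisible placeholder face.
  if (t < block && t < fontLoadedAt) return "invisible";
  return "fallback";
}
```

For example, a webfont arriving after 5 seconds is still swapped in under swap, but never under fallback or optional, which is exactly the trade-off discussed in the rest of this post.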

So which font-display setting is best?

So now that I’ve gone over some background, this question is what I want to discuss for the rest of this post. And as with many questions of this type, it really all depends on how you define ‘best’.

Fast connections

If you’re purely talking about perceived performance and connection speed isn’t an issue, then I’d say font-display: swap is best. When used, this setting renders the fallback font almost instantly, allowing a user to start reading page content while the webfont continues to load in the background. Once loaded, the font is swapped in accordingly. On a fast, stable connection this usually happens very quickly.

‘Not-fast’ connections

But it’s important to remember that not all users are on fast and stable connections; some are on slow and unstable ones. So what changes when we consider these users? I’d like to be upfront from the start and say I’m of the opinion that font-display: swap isn’t best for them. Let me go into the reasons why below.

The infinite swap period

The main reason I’m of the opinion that font-display: swap can be bad for users on slow connections is the ‘infinite swap period’. When used it’s basically saying to the browser: “no matter how long it takes for the webfont to load, swap the rendered fallback font for it when it does”.

Not all fonts are created equally

This is an issue because each font’s fundamental metrics may differ. Each font will have a slightly different baseline, x-height, median, and cap height. There’s also a difference in how letters are aligned with each other, since the tracking, leading, and kerning will differ slightly between fonts.

The basic anatomy of a font including an explanation of terms for laying out a typeface.

You also have to factor in human error when fonts are made. Glyphs within a font can sit slightly differently in the font’s ‘bounding box’ compared to other fonts.

Essentially what I’m trying to say is that when two fonts get swapped, you aren’t always going to get a 1-to-1 replacement in terms of size and position for each glyph. What could happen is the content could shift in any direction:

Font shift on The Telegraph

In the example above you see an emulated Moto G4 on The Telegraph. The font is swapping from ‘Georgia’ (its fallback font), to their own custom webfont. As you can see, because of the differences in the font metrics the whole content shifts. This process is often called Flash of Unstyled Text (FOUT). Note It is also sometimes called flash of unstyled content (FOUC).

This issue was predicted and well documented to an incredible amount of detail in the CSS Fonts Module Level 4 specification. Reading through it, the browser goes through 5-6 sets of font matching algorithms to try to minimise this problem. But even with all the automated matching involved, it’s still going to happen with some fonts at certain device widths.

The delayed content shift

Now let’s put ourselves in the shoes of a user on this Moto G4 device. Maybe they are in a rural area trying to catch up on the news, and they have loaded the Telegraph’s website. News websites with lots of third-party scripts can be slow at times, so let’s imagine the site is using font-display: swap. The fallback font ‘Georgia’ has rendered and they are already reading the article. swap has now given the browser permission to swap out the fallback font for the webfont, no matter when it loads. On a very slow connection that could be 10–20 seconds later. By this point our user is well into reading the article. The font downloads, the font is swapped, the content shifts, and they lose where they were in the article. That doesn’t sound like a great experience; in fact it sounds quite frustrating! Especially if it happens on most websites you visit.

Can this shift be detected?

So other than visually, can this shift be detected programmatically, e.g. via Cumulative Layout Shift (CLS)? Let’s run a performance audit in the Chrome ‘Performance’ panel and see:

Using the Chrome 'Performance' panel to run an audit, the shift from a font load can be seen in DevTools.

This is quite a busy screenshot so I’ll go through it here. Once the performance audit has run, Chrome takes a few seconds to collate and display the results. The width of the image shows the browser tasks that happened across part of the page load. In the screenshot I’ve zoomed into two particular frames that we’re interested in: the frames where the browser swaps from the fallback font to the webfont (red arrows). You can also see that at this point in time Chrome has highlighted in the ‘Experience’ row that a layout shift has occurred. Clicking the red shift rectangle (green arrow) brings up a full summary panel giving you detailed information about this particular shift (all described in the image above).

It’s important to remember that you can dig into a lot more detail about what is happening in the browser at this point in time (e.g. browser main thread) by investigating the other rows in the performance tab. For example: if you are seeing a huge shift at a particular point in time, you could examine exactly what JavaScript is firing and causing it. There’s a whole post dedicated to this tab ‘Get Started With Analyzing Runtime Performance’ if you are interested in knowing more.

Does this actually happen ‘in the wild’?

I know the above user scenario all sounds very contrived, so let’s see if it happens to real users. To find out we’ll pull some data together and do some testing.

The folks over at MLab regularly publish a global dashboard containing heaps of interesting data about connection speeds in different countries. If you drill down into the data it also gives you information about individual towns and villages. So let’s pick out a village from the United Kingdom (where I live): Auchencairn.

Auchencairn is a village in the historical county of Kirkcudbrightshire in the Dumfries and Galloway region of Scotland. The reported median download speed for this village is 0.28 Mbit/s, and the median upload speed is 0.28 Mbit/s (from a sample size of 26 downloads and 23 uploads). As you can probably tell, this isn’t a quick connection! It’s actually slightly slower than a 2G connection. Unfortunately for the village, the mobile coverage is also quite spotty, with only 2 of the 4 providers able to offer even basic data coverage, up to 3G speeds (according to Ofcom’s mobile coverage map). This observation is also confirmed by examining the nperf connection data.

In this case our user from Auchencairn is checking the latest news. So let’s test the Telegraph on this connection speed and see what happens. By using WebPageTest and a Cloudflare Worker we can see how an article page renders under our user’s connection conditions while varying the value of the font-display property for the Telegraph’s webfonts. For more information on how this is done, check out ‘Exploring Site Speed Optimisations With WebPageTest and Cloudflare Workers‘ by Andy Davies.

Note: I couldn’t actually get the article page to load in WebPageTest using the connection settings listed above. After 120 seconds the test timed out! I even tried forcing the test to run longer, but after many minutes I aborted. In the end I decided to increase the connection speed to 0.6 Mbit/s download and 0.6 Mbit/s upload, after which it loaded. So the connection the tests below ran on is 114% quicker than the one our fictitious user from Auchencairn has!


Telegraph article with font-display: block.

Above we see the filmstrip of how the article loads with font-display: block. Note: Cropped filmstrip starts at 6 seconds.

Important timings:

  • First Paint: 7.902s
  • Page layout with invisible placeholder text: 8.102s
  • Largest Contentful Paint (LCP): 11.002s
  • Webfont swapped in: 15.103s
  • Diff (first text to swap): 4.1s

So let’s examine what is happening here. Pixels are painted to the screen 7.9 seconds into the page load. The page looks structurally complete 200ms later, but we have no text; as a user, we can’t read anything yet. It’s not until 3 seconds later that text is rendered. Then, finally, 4.1 seconds after that, the fallback font is swapped out for the webfont. 15.1 seconds into the page navigation, the page is finally completely stable for our user.


Telegraph article with font-display: swap.

Above we see the filmstrip of how the article loads with font-display: swap. Note: Cropped filmstrip starts at 8 seconds.

Important timings:

  • First Paint: 8.102s
  • Page layout with fallback text (LCP): 8.302s
  • Webfont swapped in: 12.102s
  • Diff (first text to swap): 3.8s

For swap we see the first pixels painted at 8.1 seconds. 200ms later the page structure is complete and the fallback font is rendered. The webfont is finally swapped in 3.8 seconds later (12.1 seconds after first navigation). At this point the page is stable and ready to be read.


Telegraph article with font-display: optional.

Above we see the filmstrip of how the article loads with font-display: optional. Note: Cropped filmstrip starts at 7 seconds.

Important timings:

  • First Paint: 7.702s
  • Page layout with invisible text: never
  • Largest Contentful Paint: 7.802s
  • Webfont swapped in: never
  • Diff (first text to swap): 0s

Optional is one of the simplest results to examine. The first pixels are rendered at 7.7 seconds. 100ms later the page structure is completed and the fallback font is rendered. That’s it! Even though the webfont is downloading in the background it will never be shown during this page’s lifecycle. If the user were to navigate to another page on the site that uses the same font, that’s when they will see the font (since it now exists in the browser cache), but for the current page the font won’t be used.


Telegraph article with font-display: fallback.

Above we see the filmstrip of how the article loads with font-display: fallback. Note: Cropped filmstrip starts at 8 seconds.

Important timings:

  • First Paint: 8.102s
  • Page layout with invisible text: 8.235s
  • Largest Contentful Paint: 8.402s
  • Font swapped in: never
  • Diff (first text to swap): 0s

Examining the above timings, we have the first pixels painted to the screen at 8.1 seconds. Around 100ms later the page structure is completed, but with no text rendered (this state isn’t seen in the filmstrip due to the filmstrip timings used). Approximately 200ms later the fallback font is rendered. The webfont takes another 3+ seconds to load, so the swap never happens: it falls beyond the swap cut-off point. The page is now stable (from a content point of view) for the user to read.


Below you will see all of the results from above tests together. Listed are the render time points (in seconds) for each of the font-display values.

font-display   Fallback font (s)   Webfont (s)   Diff (s)
block          11                  15.1          4.1
swap           8.3                 12.1          3.8
optional       7.8                 n/a           0
fallback       8.4                 n/a           0

The difference between the fallback font rendering and the webfont swapping in is important, and worth weighing when considering webfont usage for people on a poor connection and/or older devices. Now, 4 seconds may not sound like much, but remember the results above are from a connection that is 114% quicker than our fictional user’s in Auchencairn! The true result is likely to be a lot worse on an even slower connection and device. Also, both block and swap are outside the 3-second timeout that most browser vendors have adopted (as seen in the specification), so 3 seconds seems like a reasonable maximum to aim for when swapping fonts.

Warning: It can get so much worse!

It’s worth mentioning that results can get much worse for users on a very slow connection when Cross-Origin Resource Sharing (CORS) rears its ugly head. In setting up the tests to modify the font-display setting, I happened to point the modified CSS at incorrect URLs for the font files. So instead of the fonts being served from the same domain as the HTML, they were now considered ‘cross-origin’, but the server wasn’t set up to send an Access-Control-Allow-Origin header (either with * or the HTML’s domain).

These missing headers trigger the browser to send preflight requests for each of the fonts. Preflight requests are very small OPTIONS method requests of approximately 2 KB in size. The preflights allow the server to see details of the font request before it decides if the browser should send the request for the actual fonts. In our case the browser is essentially asking the server permission to be allowed to send the actual font requests. These preflight requests can be seen in the waterfall below:

Explanation as to what is happening with the preflights in the waterfall chart.

There’s a lot going on in this waterfall so let’s step through it. Requests 15-18 are the preflights sent from the browser to the server. Due to the limited and maxed out bandwidth these take 8 seconds to complete. What makes matters worse is there’s a whole TCP connection negotiation setup in there too. Only when the browser has received permission back from the server will it send the actual font requests. These then take another 2-3 seconds to completely download. During this time the browser is almost idle, waiting for something to do.

This is understandable, but what I didn’t expect is the impact this has on font rendering. If we again refer to the CSS Fonts specification for font-display: fallback:

Gives the font face an extremely small block period (100ms or less is recommended in most cases) and a short swap period (3s is recommended in most cases).

So I’d expect a short 100ms delay before the fallback font is rendered and the user can start reading the content. But in reality this doesn’t happen (in Chrome at least) as you will see below:

A preflight request adds seconds onto the font render time for fallback

Note: Compressed filmstrip starts at 7.5 seconds.

Important timings:

  • First Paint: 7.802s
  • Page layout with invisible text: 7.902s
  • Largest Contentful Paint: 15.903s
  • Font swapped in: never

Here we have 8 seconds between the first pixels being rendered to the first text actually showing up on the page. That’s a pretty huge chunk of time for a user to wait. If you compare the CORS broken vs CORS fixed filmstrips the difference is quite obvious:

With the correct CORS settings a user can start reading the article at 9 seconds. When broken it adds another 7 seconds before the font is rendered.

Note: Cropped filmstrip starts at 7.5 seconds.

Interestingly this only happens with font-display: fallback and font-display: block. font-display: swap and font-display: optional under the same conditions render the fallback font at ~8 seconds. I believe this is because both these settings have a ‘block’ period in their loading timeline, and the specification says:

The first period is the font block period. During this period, if the font face is not loaded, any element attempting to use it must instead render with an invisible fallback font face.

But I may be completely wrong. If anyone has any thoughts or information on this please do let me know!

So the key takeaway from all this is: make sure you set the correct Access-Control-Allow-Origin headers if you are serving fonts cross-origin. If you don’t, the CORS preflights will essentially add a large TTFB (Time to First Byte) to your font requests.

What can be done to minimise the shift?

So what can you do to minimise the shifts seen in these extreme cases? Well, the obvious solution (that designers will hate) is to ask yourself whether you need the webfont at all. Is there an alternative system font that closely matches the design? Not going to fly with the design team? Okay then, read on:

Font matching

You could consider having a play around with Font style matcher, a tool designed to help you match a webfont’s x-height and widths with those of the fallback font. With this tool it should be easier to find a set of closely matching fonts, so consider changing the CSS font stack if your current fallback and webfont are vastly different in terms of style and metrics. This is even advised in the specification:

Authors are advised to use fallback fonts in their font lists that closely match the metrics of the downloadable fonts to avoid large page reflows where possible.

In the future there should be a whole new set of font metric settings to play with in the @font-face rule (ascent-override, descent-override and line-gap-override). These now exist in the CSS Fonts Module Level 4 specification, but from what I can find no browser supports them yet.

Font load optimisation

Font matching may only get you so far depending on the fonts used. Other tactics you could employ are to reduce the font size so it downloads quicker, and optimise how your fonts are being served. I won’t go into the details here as there have been many (many!) blog posts written all about these subjects. But here are a few links to get you started:


So in this blog post we’ve covered a lot! From some of the basics of typography, through font formats, all the way to font shifting, the font-display setting, and the impact it can have on users with very slow connections.

If there’s one thing I’d like readers to take away from this post it’s that font-display: swap is a very good option for users with a fast internet connection. But its infinite swap period could be frustrating for users on very slow and unstable connections. If you have users viewing your site under these conditions (I’m pretty certain you will at some point in time), then it may be worth considering font-display: fallback or even font-display: optional. Both have a short swap period (or no swap period), meaning once the fallback font is rendered and the 3 second timeout is exceeded, the font won’t change for the rest of the page lifecycle.

The CSS Fonts Module Level 4 specification actually points to this use case in its description of fallback:

This value should be used for body text, or any other text where the use of the chosen font is useful and desired, but it’s acceptable for the user to see the text in a fallback font. This value is appropriate to use for large pieces of text.

Finally, I’ve been involved in improving the web performance of GOV.UK for a number of years now. This work has included improving the font loading strategy, in which we currently use font-display: fallback. We have data to suggest many users are on fast connections (4G+), but there’s also data to show that we have users on very slow connections. So I’d personally prefer to take a hit of 100ms on our LCP, where an invisible font is rendered before showing our fallback font (i.e. font-display: fallback), than potentially causing users on very slow connections a delayed font shift (i.e. font-display: swap). So you could consider this post a decision record as to why we use fallback over swap.

Thanks for reading, I hope you found this interesting.

Many thanks to Barry Pollard, Paul Calvano, Ben Schwarz, and Oliver Byford for their technical proofreading of this article.


r/webdev - What are some of the best user/role permission (ACL) management systems you have seen?


I’m currently working on permissions for a new project where there can be multiple users and roles, and multiple pages or items that have different permissions. Throughout the years I have seen many different implementations, but I’m not sure I can think of one that is really great.

Starting with one of the worst: some just handle permissions by editing the database directly, which is terrible in my opinion.

[Image: table showing user id, project id, and then several columns that give permission to different areas of the application]

A better implementation is a grid like this where you have roles on one axis and the different areas of the application on the other axis.

[Image: roles at the top going across, with features in a left column; the other cells are filled with green checkmarks or red x’s depending on whether a permission is allowed]

The above grid is sometimes further extended to also show individual users, or it may be possible to give individual users permissions that differ from their role. However, this table can get huge if you have hundreds of roles and features, and I want whatever I do to be future-proof.

You could also keep roles in their own table, where clicking a role lets you edit the features that role has permission for, but that lacks the overview the grid above provides.
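To make the grid idea concrete, here is a minimal sketch of the row-based schema that usually backs it, using SQLite from Python’s standard library (all table and column names are hypothetical). Storing each role/feature pair as a row means adding a new role or feature never requires a schema change, which helps with the future-proofing concern:

```python
import sqlite3

# Hypothetical role/feature permission grid stored as rows.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE roles (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE features (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE role_permissions (
    role_id INTEGER REFERENCES roles(id),
    feature_id INTEGER REFERENCES features(id),
    allowed INTEGER NOT NULL DEFAULT 0,
    PRIMARY KEY (role_id, feature_id)
);
""")
conn.executemany("INSERT INTO roles (name) VALUES (?)", [("admin",), ("editor",)])
conn.executemany("INSERT INTO features (name) VALUES (?)", [("billing",), ("reports",)])

def set_permission(role, feature, allowed):
    # Upsert one cell of the grid.
    conn.execute(
        """INSERT OR REPLACE INTO role_permissions (role_id, feature_id, allowed)
           SELECT r.id, f.id, ? FROM roles r, features f
           WHERE r.name = ? AND f.name = ?""",
        (int(allowed), role, feature),
    )

def is_allowed(role, feature):
    # Deny by default: a missing row means no permission.
    row = conn.execute(
        """SELECT p.allowed FROM role_permissions p
           JOIN roles r ON r.id = p.role_id
           JOIN features f ON f.id = p.feature_id
           WHERE r.name = ? AND f.name = ?""",
        (role, feature),
    ).fetchone()
    return bool(row and row[0])

set_permission("admin", "billing", True)
```

A user-specific override table with the same shape (user_id, feature_id, allowed) can then shadow the role defaults, which covers the “users that differ from the role” case without exploding the grid.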

Do you have any good examples of applications that handle roles and permissions really well?


r/webdev - Fdownl - an innovative file sharing platform


Some time ago, we had the idea of creating a file sharing platform that didn’t require visitors to download common uploaded files like images, audio, and video. Fdownl was born from that idea. It has a customizable upload form (which currently only supports changing how long your files stay on our servers) and a share page that not only lets you preview the requested file but also download it, making it much easier to share your files across different devices.

Current Fdownl website link:

A simple preview page:

This web application was made using C# (ASP.NET MVC)

[Image: upload form]

[Image: file preview]


r/webdev - Review My Portfolio [UI/UX]


I posted here a lot asking about different issues I faced creating my portfolio, and now that it’s kinda done I want some honest reviews. I’m happy with the mobile version but not really happy with how the PC version turned out. Also, you might see some weird human back photo xD I will replace that with a png of mine, but I’m too lazy to take photos rn. Anyways, here’s what I have:



Personal thoughts: I think placing the socials on the home page is totally not a wise choice, so I thought I could put a picture there instead, add some cool hover animation to it, and move the socials to the about page, but I think that’d leave a lot of blank space.

Edit: I removed the socials from the home page and added them to the contact me page. It looks like this now:

[Image: updated contact page]



Headless Browsers: A Stepping Stone Towards Developing Smart…

Web development has grown at a tremendous pace with lots of automation testing frameworks coming in for both front-end and backend development. Websites have become smarter and so have the underlying tools and frameworks. With a significant surge in the web development area, browsers have also become smarter. Nowadays, you can find headless browsers, where users can interact with the browser without a GUI. You can even scrape websites in headless browsers using packages like Puppeteer and Node.js.

Efficient web development relies hugely on a testing mechanism for quality assessment before we can push code to production environments. Headless browsers can perform end-to-end testing, smoke testing, etc. at a faster speed, as they are free from the memory overhead required for the UI. Moreover, studies have shown that headless browsers generate more traffic than non-automated ones. Popular browsers like Chrome can even help debug web pages in real time, analyze performance, notify devs of memory consumption, and enable developers to tweak their code on the fly.

Is this evolution of browsers heading towards a smarter web development process? In this post, we will give an overview of headless browsers and understand how they help create a smarter and faster website development process.

What Is a Headless Browser?

A headless browser is simply a browser without a GUI. It has all the capabilities of rendering a website, like a normal browser. Since no GUI is available, we need to use the command line to interact with the browser. Headless browsers are designed for tasks like automation testing, JavaScript library testing, and JavaScript simulation and interactions.

One of the biggest reasons for using a headless browser, or headless browser testing, is that it lets you run tests quicker and in a real environment. For example, the combination of Chrome DevTools and headless Chrome lets you edit pages on the fly, which helps you diagnose problems quickly, ultimately helping you develop better websites faster. Headless browsers are faster, more flexible, and optimized for tasks like web-based automation testing. Like a normal browser, a headless browser can parse JavaScript, click links, and download files; we simply drive it from the command line. So it provides a real browser context without the memory cost of running a full-fledged browser with a GUI.

The Need for a Headless Browser

With advancements in website development technologies, website testing has taken center stage and emerged as the most essential step in developing high-performing websites. Even browsers are becoming smarter, as they can load JavaScript libraries for performing automation testing. This is a significant leap forward in website testing. Let’s take a look at some of the major functions performed by headless browsers.

Enables Faster Web Testing Using Command Line Interface

With headless cross-browser testing, we avoid the memory overhead of the GUI, which enables faster website testing, with the command line as the primary means of interaction. Headless browsers are designed to execute crucial test cases like end-to-end tests, which ensure that the flow of an application performs as designed from start to finish.

Scraping Websites

A headless browser saves the overhead of opening a GUI, thus enabling faster scraping of websites. In headless browsers we can automate the scraping mechanism and extract data in a much more optimized manner.

Taking Web Screenshots

Though headless browsers do not provide a GUI, they do allow users to take snapshots of the websites they render. This is very useful when a tester needs to visualize the effect of code changes and save the results as screenshots. In a headless browser you can easily take a large number of screenshots without any actual UI.

Mapping User Journey Across the Websites

Headless browsers allow you to programmatically run test cases that map the customer journey, helping you optimize the user experience throughout the decision-making journey on your website.

Now that we understand headless browsers and their numerous features, along with their key quality of being lightweight browsers that accelerate testing, let’s look at the most popular headless browser, Headless Chrome, and what it unlocks for developers.

Diving Into Headless Chrome and Chrome DevTools

There are a number of headless browsers out there, such as Firefox (55 and 56), PhantomJS, HtmlUnit, Splinter, jBrowserDriver, etc. Headless mode, introduced in Chrome 59, is Chrome’s answer to headless browsing. Headless Chrome and Chrome DevTools are quite a powerful combination that gives users great out-of-the-box features. So let’s have a look at Headless Chrome and Chrome DevTools.

What Is Headless Chrome?


Headless Chrome is basically just a regular Chrome browser running in a headless environment, without a GUI. This lightweight, memory-sparing, fast-running browser brings all the features provided by the Chromium and Blink rendering engines to the command line.

Automated browsers have always generated more traffic than non-automated ones. In a recent survey, it was discovered that headless Chrome generated more traffic than the previous leader in the headless space, PhantomJS, within a year of its release.

Apart from this, there are several reasons why Chrome is the most popular headless browser. One is that its out-of-the-box features are always being updated, constantly introducing new trends in web development. It also ships with a rendering engine called Blink, which is constantly updated as the web evolves. Headless Chrome gives developers the ability to:

  • Test the latest web platform features like ES6 modules, service workers, and streams.
  • Programmatically tap into the Chrome DevTools and make use of awesome features like network throttling, device emulating, website performance analysis, etc.
  • Test multiple levels of navigation.
  • Gather page information.
  • Take screenshots.
  • Create PDFs.

Now let’s have a look at the most common flags you need to know to start working with headless Chrome.

Starting Headless

To start a headless instance, you need Chrome 59+ and the ability to open the Chrome binary from the command line. If you have Chrome 59+ installed, start Chrome with the --headless flag.

To print the DOM, create a PDF, or take screenshots, we can use the following flags:

  • Printing the DOM: The --dump-dom flag prints document.body.innerHTML to stdout.
  • Creating a PDF: The --print-to-pdf flag creates a PDF of the page.
  • Taking screenshots: To capture a screenshot of a page, use the --screenshot flag.
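Put together, the invocations look something like this (a sketch: the binary name varies by platform, e.g. google-chrome on Linux or chrome.exe on Windows, and the URL is illustrative):

```shell
# Print the rendered DOM to stdout
google-chrome --headless --disable-gpu --dump-dom https://www.example.com/

# Render the page to output.pdf in the current directory
google-chrome --headless --disable-gpu --print-to-pdf https://www.example.com/

# Capture a screenshot to screenshot.png
google-chrome --headless --disable-gpu --screenshot https://www.example.com/
```

The --disable-gpu flag was historically recommended alongside --headless on some platforms and is included here out of caution.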

Debugging a Code Without the Browser UI

If you want to debug your code in a headless browser using Chrome’s DevTools, then make note of the following flag: --remote-debugging-port=9222. This flag opens headless Chrome in a special mode, wherein Chrome DevTools can interact with the headless browser to edit the web page at run-time. We will dig deeper into Chrome DevTools in a later section of this post.


What Are Chrome DevTools?

Chrome DevTools is a set of web developer tools built directly into Google Chrome. It helps you debug web pages on the fly and detect bugs quickly, which ultimately helps you develop websites faster.

The simplest way of opening DevTools is to right-click on your webpage and click Inspect. From there, depending on why you’re using DevTools, you can open various panels: to work with the DOM or CSS, click the Elements panel; to see logged messages or run JavaScript, use the Console panel; to debug JavaScript, click the Sources panel; to view network activity, click the Network panel; to analyze the performance of the webpage, click the Performance panel; and to fix memory problems, click the Memory panel.

As we can see, Chrome DevTools is a package of diverse functionalities that helps with debugging a web page in the Chrome browser. But what about a headless browser with no UI? How can we debug a web page there? Can Chrome DevTools help debug a headless browser? Let’s demystify the ways to debug a headless browser with Chrome DevTools, and discuss what Puppeteer is, in the following sections of this post.


As discussed earlier, one of the ways to debug a web page in a headless browser is to use the flag --remote-debugging-port=9222 on the command line, which helps you tap into Chrome DevTools programmatically. But there is another way to use headless Chrome to perform numerous out-of-the-box tasks and make use of headless mode more efficiently. This is where Puppeteer comes into the picture.

Puppeteer Architecture

Puppeteer is a Node.js library that provides a high-level API to control Chrome over the DevTools Protocol. Puppeteer runs headless by default, but it can also be configured to run full, non-headless Chrome. It provides full access to all the features of headless Chrome and can also run Chrome on a remote server, which is very beneficial for automation teams. It would be quite accurate to call Puppeteer the Chrome team’s official library for driving headless Chrome.

One of the greatest advantages of using Puppeteer as an automation framework for testing is that, unlike other frameworks, it is very simple and easy to install.

As Puppeteer is a Node.js library, all you need to get started is Node.js installed on your system. Node.js comes with npm (the Node package manager), which we will use to install the Puppeteer package.

Installing Node.js and Puppeteer

Once you have Node.js installed on your machine (installers are available from nodejs.org), run the following command to install Puppeteer:

npm i puppeteer 

With this, you have completed the installation process for Puppeteer, which will also, by default, download a recent version of Chromium.

Why Is Puppeteer So Useful?

Puppeteer provides full access to all the out-of-the-box features provided by headless Chrome and its constantly updated rendering engine, Blink. Unlike other automation testing frameworks for web applications, like Selenium WebDriver, Puppeteer is popular because it automates a lightweight (UI-less) headless browser, which helps tests run faster. There are multiple functionalities provided by Puppeteer. Let’s have a look at them. Puppeteer can:

  • Help generate screenshots and PDFs of pages.
  • Crawl a single page application and generate pre-rendered content.
  • Automate form submissions, UI testing, end-to-end testing, smoke testing, keyboard input, etc.
  • Create an up-to-date, automated testing environment. This means you can run your tests directly in the latest version of Chrome using the latest JavaScript and browser features.
  • Capture a timeline trace of your site and analyze any performance issues.
  • Test Chrome extensions.
  • Allow you to perform web scraping.

Now that we’ve gone over what Puppeteer can do, let’s have a look at the code for taking a screenshot with Puppeteer.
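The original snippet was an image; the following is a minimal sketch of the standard Puppeteer screenshot flow (the URL and output path are placeholders):

```javascript
const puppeteer = require('puppeteer');

(async () => {
  // Launch a headless Chromium instance
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Navigate to the page we want to capture (placeholder URL)
  await page.goto('https://www.example.com/');

  // Save a screenshot to the given path (placeholder path)
  await page.screenshot({ path: 'example.png' });

  await browser.close();
})();
```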

Once this code is executed, a screenshot will be saved to your system at the path mentioned in the code snippet.


Faster and better web development has always been, and will always be, the top priority of QA and development teams. Headless browsers (without a GUI), being lightweight and easy on memory, can run automated tests at high speed. They cater to the need for smarter web development. Moreover, they help in testing all modern web platform features, as well as enabling debugging and performance analysis in real time. They can drive heavy automated traffic against web applications and support website scraping with the help of npm packages like Puppeteer. Furthermore, installing a headless browser is easier than installing other web automation frameworks.


How to Handle Internationalization in Selenium WebDriver


There are many software products that are built for a global audience. In my tenure as a developer, I have worked on multiple web (website or web app) projects that supported different languages. Though the Selenium framework was used for automation testing, handling Internationalization in Selenium WebDriver posed a huge challenge.

The major challenge was to develop automated tests that could be run on all localized UIs. When performing Internationalization in Selenium WebDriver, it is essential to create an Internationalization testing checklist to ensure that the interface of the application is displayed in the language corresponding to the client locale.

Google is a classic example of a product that is available across multiple regions – shown below are screenshots of the search page in Japanese and Hebrew:

Many developers use the terms Internationalization testing and Localization testing interchangeably, but there is a massive difference between the two. In this article, we take a look at how Internationalization in Selenium WebDriver can be used to perform Selenium test automation for software products that cater to a global audience. We also look at some of the essential points that should be a part of your Internationalization testing checklist for ensuring efficient Internationalization tests.

What Is Internationalization Testing?

Internationalization is the technique of designing and preparing a software product (or application) so that it is usable in different languages, regions, and locales across the globe. Internationalization is also termed i18n, where 18 is the number of letters between the first ‘i’ and the last ‘n’ in the word Internationalization.

The locale is the data that identifies the region in which the customer (or consumer) uses that particular language (or its variant). The formatting (and parsing) of data such as dates, currencies, numbers, etc., and the translated names of countries, languages, etc., are determined by the locale.

A software product with a global audience should be designed so that it adapts to different languages and regions without many changes. Internationalization testing is the process of testing the software for international support (i.e. different locales) while ensuring that there is no breakage in the functionality or loss of data or compromise of data security (and integrity).

Internationalization Testing Checklist

When the website (or web app) is built to handle different languages, it becomes essential to test all the features against those languages. Internationalization in Selenium WebDriver can be performed using Selenium test automation against the intended locales (or languages).

Before you proceed with the test execution, you should make sure all the requirements in the Internationalization testing checklist are met. Here are some of the necessary things that should comprise your Internationalization testing checklist:

  • The software should be developed in a manner that the deployment of Internationalization features can be done with ease. Testing should make sure that the rendering of the page using the particular locale performs as expected.

  • If there are some custom installations for catering to certain locales, the settings should be verified as a part of Internationalization in Selenium WebDriver activity.

  • Selenium test automation should be used for testing whether the interface of the application is displayed in native language strings corresponding to that of the client locale. This includes date time formats, data presentation, numeric formats, etc., in accordance with the specified language and other cultural preferences.

  • When developing the ‘localized features’ in the product, developers ensure that localizable elements are not tightly coupled with the source code. For example, strings (or other localized content) in different languages can be set in resource files so that those can be loaded whenever required (depending on the client locale).

  • Internationalization testing should check whether specific language property files are a part of the resource bundles. Depending on the primary market, a default language should be set for the entire application. For example, if the application is built for the ‘French’ market, the primary language can be set to French (instead of English).

  • Internationalization testing (or i18n testing) should verify if the interface is displayed in the ‘default language’ when accessed from an environment which is different from the client locale.

  • The display order of address differs from one language to another. For example, the order in English is name, city, state, and postal code. Whereas the order in Japanese is postal code, state, city, and name. Hence, Internationalization testing should verify whether the display order (with respect to important elements like address, etc.) is maintained as per the client locale.

These are some of the pivotal points that should be a part of your Internationalization testing checklist. Apart from these checks, you could also add specific rules to be followed when performing Internationalization in Selenium WebDriver. To help you finalize those rules, let’s see how to handle Internationalization in Selenium WebDriver.

Internationalization in Selenium WebDriver

There are two commonly used mechanisms for requesting the language on a localized website (or app). One option is to use a ‘country-specific URL’, as done by web products like Google. The more commonly used option is choosing the language based on the Accept-Language header sent by the browser.

When performing Selenium-based test automation, I came across the Locale Switcher add-on for Chrome, which lets you switch browser locales for testing the localization of your website. It supports 500 locales from across the world. Internationalization in Selenium WebDriver is demonstrated below using popular languages supported by the Selenium framework (i.e. Python, Java, and C#).

Internationalization in Selenium WebDriver, for Chrome and Firefox in this case, is done by setting the intl.accept_languages preference to the necessary BCP 47 tag in the profile. The profile should be set in DriverOptions so that the updated locale is reflected in the browser session.

Alternatively, the addArguments option for DriverOptions can also be used for setting the necessary locale in Chrome. Apart from the difference in syntax, the fundamentals of using DriverOptions across Python, C#, and Java remains the same. I made use of Locale Switcher for determining the locale that has to be used in the implementation.

Internationalization in ChromeDriver

Let’s take a look at how to understand Internationalization in Selenium WebDriver. First, I will see how we can achieve Internationalization testing in Selenium ChromeDriver.

Here is the code snippet for achieving Internationalization for Selenium Python:
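The original snippet was an image; here is a minimal sketch of the same idea in Selenium 4 syntax (the grid URL credentials and capability values are placeholders):

```python
from selenium import webdriver

# Chrome accepts the BCP 47 tag via the --lang command-line argument
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument("--lang=he-IL")

# Illustrative cloud-grid capabilities (values are assumptions)
chrome_options.set_capability("platformName", "Windows 10")
chrome_options.set_capability("browserVersion", "80.0")

# USERNAME and ACCESS_KEY are placeholders for your LambdaTest credentials
driver = webdriver.Remote(
    command_executor="https://USERNAME:ACCESS_KEY@hub.lambdatest.com/wd/hub",
    options=chrome_options,
)
driver.get("https://www.google.com/")
driver.quit()
```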

Here is the code snippet for achieving Internationalization for Selenium Java:

Here is the code snippet for achieving Internationalization for Selenium C#:

Internationalization in FirefoxDriver

In this section, we will implement Internationalization in Selenium FirefoxDriver.

Here is the code snippet for achieving Internationalization for Selenium Python:
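The original snippet was an image; here is a minimal sketch of the Firefox variant, which sets the intl.accept_languages preference in a custom profile (the grid URL credentials are placeholders):

```python
from selenium import webdriver

# Firefox reads the locale from the intl.accept_languages preference
profile = webdriver.FirefoxProfile()
profile.set_preference("intl.accept_languages", "ja-JP")
profile.update_preferences()

firefox_options = webdriver.FirefoxOptions()
firefox_options.profile = profile

# USERNAME and ACCESS_KEY are placeholders for your LambdaTest credentials
driver = webdriver.Remote(
    command_executor="https://USERNAME:ACCESS_KEY@hub.lambdatest.com/wd/hub",
    options=firefox_options,
)
driver.get("https://www.google.com/")
driver.quit()
```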

Here is the code snippet for achieving Internationalization for Selenium Java:

Here is the code snippet for achieving Internationalization for Selenium C#:

Demonstrating Internationalization in Selenium WebDriver

For demonstrating Internationalization in Selenium WebDriver for Chrome and Firefox, the following test scenarios are used:

Test Combination (Browser – Chrome 80.0, Platform – Windows 10, Locale – he-IL)

  1. Set the Chrome browser locale to ‘he-IL’.

  2. Navigate to the following URL:

  3. Assert if the locale is not set properly.

Test Combination (Browser – Firefox 78.0, Platform – macOS Mojave, Locale – ja-JP)

  1. Set the Firefox browser locale to ‘ja-JP’.

  2. Navigate to the following URL:

  3. Assert if the locale is not set properly.

The test scenarios are executed on the cloud-based Selenium Grid in LambdaTest. Once you’ve created an account with the LambdaTest platform, make a note of the user-name and access-key from the profile page. The combination of user-name and access-key is used for accessing the Grid on LambdaTest. The browser capabilities for the corresponding browser and platform combination are generated using the LambdaTest capabilities generator.

WebDriver Internationalization With Selenium Python

When performing Internationalization in Selenium WebDriver, the implementation that uses Firefox and Chrome only differs in the way we set the browser locale – the rest of the implementation remains unchanged.

Code Walk Through

Step 1: The capabilities for the two test scenarios are generated using the LambdaTest capabilities generator. Shown below are the capabilities for Test Scenario 1:
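The generated capabilities were shown as an image in the original; the dictionary below is an illustrative reconstruction for Test Scenario 1 (Chrome 80.0 on Windows 10), with the build and name values being assumptions:

```python
# Illustrative capabilities for Test Scenario 1; exact keys come from the
# LambdaTest capabilities generator, and these values are assumptions.
ch_capabilities = {
    "build": "i18n demo",                 # hypothetical build label
    "name": "Chrome locale he-IL test",   # hypothetical test name
    "platform": "Windows 10",
    "browserName": "Chrome",
    "version": "80.0",
}
```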

Step 2: The functions driver_chrome_init() and driver_ff_init() are used for the initialization and de-initialization of the Chrome and Firefox browsers, respectively. These functions are used with the @pytest.mark.usefixtures in the test code.

The scope is set to ‘class’ so that a fresh browser instance is used for every test scenario.

Step 3: The required locale, i.e. ‘he-IL’, is set using the instance of ChromeOptions (i.e. chrome_options). The browser capabilities are also set using chrome_options instead of DesiredCapabilities, since ChromeOptions offers convenient methods for setting ChromeDriver-specific capabilities. This is an important step in implementing Internationalization with Selenium WebDriver.

Step 4: The add_argument method in ChromeOptions is used for setting the necessary locale.

Step 5: The combination of user-name and access-key is used for accessing the LambdaTest Grid URL.

Step 6: The Selenium WebDriver API uses the LambdaTest Grid URL and browser capabilities (i.e. ch_capabilities) that were generated using the LambdaTest Capabilities Generator.

Step 7: A new Firefox profile is created using the FirefoxProfile() method offered by the webdriver package.

Step 8: The set_preference() method in FirefoxProfile is used with intl.accept_languages set to ‘ja-JP’ (i.e. the locale under test). The update_preferences() method is called to update the preferences set using set_preference().

Code Walk Through

The implementation of Internationalization in Selenium WebDriver is self-explanatory; hence, I will not go deeper into the aspects of the implementation.

To verify whether the browser locale is set to the expected locale (i.e. he-IL for Test Scenario 1 and ja-JP for Test Scenario 2), the driver.execute_script() method is used for executing JavaScript within the browser window.

The JS code returns the browser language, which is compared with the expected locale. An assert is raised if the language does not match the expected locale.
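That check can be factored into a small helper (a sketch; driver is any active Selenium WebDriver session, and the helper name is hypothetical):

```python
def check_browser_locale(driver, expected_locale):
    """Assert that the locale reported by the browser matches expectations.

    `driver` is an active Selenium WebDriver session; `expected_locale`
    is a BCP 47 tag such as "he-IL" or "ja-JP".
    """
    browser_language = driver.execute_script(
        "return window.navigator.userLanguage || window.navigator.language;"
    )
    assert browser_language == expected_locale, (
        "Expected locale %s, got %s" % (expected_locale, browser_language)
    )
```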


The execution is performed by invoking the following command on the terminal:

Here is the screenshot of the execution which indicates the tests passed (i.e. the browser locale was set without any issues):

WebDriver Internationalization With Selenium C#

Like Python, the implementation of Internationalization in Selenium WebDriver that uses Firefox and Chrome only differs in the handling of the browser locale.

Code Walk Through

Step 1: The FluentAssertions package provides an extensive set of extension methods for checking the expected outcome (w.r.t. the locale). Hence, the FluentAssertions package is installed by executing the following command in the Package Manager Console.

The package is imported before it is used in the code:

using FluentAssertions;

Step 2: An instance of ChromeOptions is created. The AddArgument() method of ChromeOptions is used for setting the language to Hebrew (i.e. he-IL).

Step 3: The chromeOptions.ToCapabilities() method is used for passing the required browser capabilities to the RemoteWebDriver interface.

The combination of user-name and access-key is used for accessing the Selenium Grid cloud on LambdaTest.

Step 4: Like with Python, I use the ExecuteScript method for executing JavaScript code that returns the language of the browser window.

Step 5: The Should() method offered by FluentAssertions is used for checking whether the browser language is set to Hebrew (i.e. he-IL).


Code Walk Through

The implementation for Test Scenario 2 (that uses Firefox) only differs in the way the locale is set for the browser.

An instance of FirefoxOptions is created and the SetPreference() method is used for setting the browser locale to Japanese (i.e. ‘ja-JP’). intl.accept_languages is the preferences key for manipulating the language for the web page to ja-JP.

For clarity, I have marked the code changes for running Test Scenario 1 and Test Scenario 2 using C#:


We used Visual Studio 2019 (Community Edition) for creating the project and running the tests. Shown below is the execution snapshot of Internationalization in Selenium WebDriver from LambdaTest, which indicates that the browser locale was set as expected:

WebDriver Internationalization With Selenium Java

The TestNG framework is used in the implementation for Selenium test automation. For a quick recap about the TestNG framework, you can refer to this detailed coverage on TestNG annotations.

Code Walk Through

Step 1: The TestNG framework is used for Selenium test automation; hence, the necessary packages are included before I start with the implementation of Internationalization in Selenium WebDriver.

Step 2: The implementation under the @BeforeClass annotation will be executed before the test case is triggered. Like the implementations in Python and C#, an instance of ChromeOptions is used and the addArguments() method is used for setting the language to ‘he-IL’.

Step 3: The test case implementation is included under the @Test annotation. JavaScript code is triggered using the interface offered by JavascriptExecutor. The executeScript method is used for getting the browser locale (i.e. window.navigator.userLanguage or window.navigator.language).

Code Walk Through

The implementation for Test Scenario 2 differs in the way the locale is set. An instance of FirefoxOptions() is created so that the browser capabilities can be set.

An instance of FirefoxProfile() is created for a customized profile that suits our needs. The setPreference() method modifies the intl.accept_languages key to ‘ja-JP’.

The modified profile is set using the setProfile() method of FirefoxOptions.

The rest of the implementation is the same as the one used for Test Scenario 1; the code changes are shown below for better understanding:


I used the IntelliJ IDEA IDE for creating a project and running the tests. Shown below is the execution snapshot from LambdaTest which indicates that the browser locale was set as expected:

Wrap Up

Internationalization testing is extremely important for companies that develop products for a global audience. The resources (i.e. strings, images, etc.) that are specific to a particular locale should be loosely coupled from the resources that are used for the main target market.

Before using Internationalization in Selenium WebDriver for automation testing, it is necessary to follow the i18n rules mentioned in the Internationalization testing checklist. Depending on the browser share in your primary target market, you should lay down a plan to use Selenium test automation for Internationalization testing of your web product. The options used for Internationalization in Selenium WebDriver depend on the browser on which the locale is set; hence, it is important to prioritize testing on the browsers that matter most.

Happy testing!
