Una Kravets

content-visibility: the new CSS property that boosts your rendering performance

Improve initial load time by skipping the rendering of offscreen content.

The content-visibility property, launching in Chromium 85, might be one of the most impactful new CSS
properties for improving page load performance. content-visibility enables the
user agent to skip an element’s rendering work, including layout and painting,
until it is needed. Because rendering is skipped, if a large portion of your
content is off-screen, leveraging the content-visibility property makes the
initial user load much faster. It also allows for faster interactions with the
on-screen content. Pretty neat.

In our article demo, applying content-visibility: auto to chunked content areas gives a 7x rendering performance boost on initial load. Read on to learn more.

Browser support

content-visibility relies on primitives within the CSS Containment Spec. While
content-visibility is only supported in Chromium 85 for now (and deemed “worth
prototyping” by Firefox), the Containment Spec is supported in most modern
browsers.

CSS Containment

The key and overarching goal of CSS containment is to enable rendering
performance improvements of web content by providing predictable isolation of
a DOM subtree
from the rest of the page.

Basically, a developer can tell a browser what parts of the page are encapsulated
as a set of content, allowing the browser to reason about the content without
needing to consider state outside of the subtree. Knowing which bits of content
(subtrees) contain isolated content means the browser can make optimization
decisions for page rendering.

There are four types of CSS containment,
each a potential value for the contain CSS property, which can be combined
together in a space-separated list of values:

  • size: Size containment on an element ensures that the element’s box can be
    laid out without needing to examine its descendants. This means we can
    potentially skip layout of the descendants if all we need is the size of the
    element.
  • layout: Layout containment means that the descendants do not affect the
    external layout of other boxes on the page. This allows us to potentially skip
    layout of the descendants if all we want to do is lay out other boxes.
  • style: Style containment ensures that properties which can have effects on
    more than just its descendants don’t escape the element (e.g. counters). This
    allows us to potentially skip style computation for the descendants if all we
    want is to compute styles on other elements.
  • paint: Paint containment ensures that the descendants of the containing box
    don’t display outside its bounds. Nothing can visibly overflow the element,
    and if an element is off-screen or otherwise not visible, its descendants will
    also not be visible. This allows us to potentially skip painting the
    descendants if the element is offscreen.
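As a sketch, these values can be combined on the contain property (the class names below are hypothetical):

```css
/* Hypothetical widget: isolate its internal layout and painting
   from the rest of the page */
.widget {
  contain: layout paint;
}

/* The strict keyword is shorthand for size layout paint */
.card {
  contain: strict;
}
```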

Skipping rendering work with content-visibility

It may be hard to figure out which containment values to use, since browser
optimizations may only kick in when an appropriate set is specified. You can
play around with the values to see what works, or you can use another CSS
property called content-visibility to apply the needed containment
automatically. content-visibility ensures that you get the largest performance
gains the browser can provide with minimal effort from you as a developer.
The content-visibility property accepts several values, but auto is the one
that provides immediate performance improvements. An element that has
content-visibility: auto gains layout, style and paint containment. If
the element is off-screen (and not otherwise relevant to the user—relevant
elements would be the ones that have focus or selection in their subtree), it
also gains size containment (and it stops painting and hit-testing
its contents).

What does this mean? In short, if the element is off-screen its descendants are
not rendered. The browser determines the size of the element without considering
any of its contents, and it stops there. Most of the rendering, such as the
styling and layout of the element’s subtree, is skipped.

As the element approaches the viewport, the browser no longer adds the size
containment and starts painting and hit-testing the element’s content. This
enables the rendering work to be done just in time to be seen by the user.

Example: a travel blog

In this example, we baseline our travel blog on the right, and apply content-visibility: auto to chunked areas on the left. The results show rendering times going from 232ms to 30ms on initial page load.

A travel blog typically contains a set of stories with a few pictures, and some
descriptive text. Here is what happens in a typical browser when it navigates to
a travel blog:

  1. A part of the page is downloaded from the network, along with any needed
    resources.
  2. The browser styles and lays out all of the contents of the page, without
    considering if the content is visible to the user.
  3. The browser goes back to step 1 until all of the page and resources are
    downloaded.
In step 2, the browser processes all of the contents looking for things that may
have changed. It updates the style and layout of any new elements, along with
the elements that may have shifted as a result of new updates. This is rendering
work. This takes time.

An example of a travel blog. See Demo on Codepen

Now consider what happens if you put content-visibility: auto on each of the
individual stories in the blog. The general loop is the same: the browser
downloads and renders chunks of the page. However, the difference is in the
amount of work that it does in step 2.

With content-visibility, it will style and lay out all of the contents that are
currently visible to the user (they are on-screen). However, when processing the
story that is fully off-screen, the browser will skip the rendering work and
only style and layout the element box itself.

The performance of loading this page would be as if it contained full on-screen
stories and empty boxes for each of the off-screen stories. This performs much
better, with expected reduction of 50% or more from the rendering cost of
loading. In our example, we see a boost from a 232ms rendering time to a
30ms rendering time. That’s a 7x performance boost.

What is the work that you need to do in order to reap these benefits? First, we
chunk the content into sections:

Example of chunking content into sections with the story class applied, to receive content-visibility: auto. See Demo on Codepen

Then, we apply the following style rule to the sections:

.story {
  content-visibility: auto;
  contain-intrinsic-size: 1000px;
}
Note that as content moves in and out of visibility, it will start
and stop being rendered as needed. However, this does not mean that the browser
will have to render and re-render the same content over and over again, since
the rendering work is saved when possible.

Specifying the natural size of an element with contain-intrinsic-size

In order to realize the potential benefits of content-visibility, the browser
needs to apply size containment to ensure that the rendering results of contents
do not affect the size of the element in any way. This means that the element
will lay out as if it was empty. If the element does not have a height specified
in a regular block layout, then it will be of 0 height.

This might not be ideal, since the size of the scrollbar will shift, being
reliant on each story having a non-zero height.

Thankfully, CSS provides another property, contain-intrinsic-size, which
effectively specifies the natural size of the element if the element is
affected by size containment
. In our example, we are setting it to 1000px as
an estimate for the height and width of the sections.

This means it will lay out as if it had a single child of “intrinsic-size”
dimensions, ensuring that your unsized divs still occupy space.
contain-intrinsic-size acts as a placeholder size in lieu of rendered content.

Hiding content with content-visibility: hidden

What if you want to keep the content unrendered regardless of whether or not it
is on-screen, while leveraging the benefits of cached rendering state? Enter:
content-visibility: hidden.

The content-visibility: hidden property gives you all of the same benefits of
unrendered content and cached rendering state as content-visibility: auto does
off-screen. However, unlike with auto, it does not automatically start to
render on-screen.

This gives you more control, allowing you to hide an element’s contents and
later unhide them quickly.

Compare it to other common ways of hiding element’s contents:

  • display: none: hides the element and destroys its rendering state. This
    means unhiding the element is as expensive as rendering a new element with the
    same contents.
  • visibility: hidden: hides the element and keeps its rendering state. This
    doesn’t truly remove the element from the document, as it (and its subtree)
    still takes up geometric space on the page and can still be clicked on. It
    also updates the rendering state any time it is needed, even when hidden.

content-visibility: hidden, on the other hand, hides the element while
preserving its rendering state, so, if there are any changes that need to
happen, they only happen when the element is shown again (i.e. the
content-visibility: hidden property is removed).

Some great use cases for content-visibility: hidden are when implementing
advanced virtual scrollers, and measuring layout.
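As a rough sketch of the virtual-scroller use case (all names here are hypothetical, and the fixed chunk height mirrors the 1000px estimate used for contain-intrinsic-size): compute which chunks are near the viewport, and hide the rest with content-visibility: hidden.

```javascript
// Hypothetical virtual-scroller sketch: given the scroll position, work out
// which fixed-height chunks should stay rendered (with one chunk of
// overscan on each side); everything else gets content-visibility: hidden.
function visibleRange(scrollTop, viewportHeight, chunkHeight, totalChunks) {
  const first = Math.max(0, Math.floor(scrollTop / chunkHeight) - 1);
  const last = Math.min(
    totalChunks - 1,
    Math.ceil((scrollTop + viewportHeight) / chunkHeight) + 1
  );
  return [first, last];
}

// In the browser (not run here), apply the result on scroll:
// container.addEventListener("scroll", () => {
//   const [first, last] = visibleRange(
//     container.scrollTop, container.clientHeight, 1000, chunks.length);
//   chunks.forEach((el, i) => {
//     el.style.contentVisibility =
//       i >= first && i <= last ? "visible" : "hidden";
//   });
// });
```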


content-visibility and the CSS Containment Spec mean some exciting performance
boosts are coming right to your CSS file. For more information on these
properties, check out the CSS Containment Spec and the content-visibility
documentation.


How to Build a Pokédex React App with a Slash GraphQL Backend

Frontend developers want interactions with the backends of their web applications to be as painless as possible. Requesting data from the database or making updates to records stored in the database should be simple so that frontend developers can focus on what they do best: creating beautiful and intuitive user interfaces.

GraphQL makes working with databases easy. Rather than relying on backend developers to create specific API endpoints that return pre-selected data fields when querying the database, frontend developers can make simple requests to the backend and retrieve the exact data that they need—no more, no less. This level of flexibility is one reason why GraphQL is so appealing.

Even better, you can use a hosted GraphQL backend—Slash GraphQL (by Dgraph). This service is brand new and was publicly released on September 10, 2020. With Slash GraphQL, I can create a new backend endpoint, specify the schema I want for my graph database, and—voila!—be up and running in just a few steps.

The beauty of a hosted backend is that you don’t need to manage your own backend infrastructure, create and manage your own database, or create API endpoints. All of that is taken care of for you.

In this article, we’re going to walk through some of the basic setup for Slash GraphQL and then take a look at how I built a Pokémon Pokédex app with React and Slash GraphQL in just a few hours!

You can view all of the code here on GitHub.

Overview of the Demo App

Pokémon Pokédex app

What 90s child (or adult, for that matter) didn’t dream of catching all 150 original Pokémon? Our demo app will help us keep track of our progress in becoming Pokémon masters.

As we build out our app, we’ll cover all the CRUD operations for working with an API: create, read, update, and delete.

We’ll start by adding all our Pokémon to the database online in Slash GraphQL’s API Explorer. Then, in the Pokédex app UI, we’ll display all 151 Pokémon queried from the database. (Hey, I couldn’t leave out Mew, could I?) At the top of the screen, we’ll show two dropdown menus that will allow us to filter the shown results by Pokémon type and by whether or not the Pokémon has been captured. Each Pokémon will also have a toggle switch next to it that will allow us to mark the Pokémon as captured or not. We won’t be deleting any Pokémon from our database via the app’s UI, but I’ll walk you through how that could be done in the event that you need to clean up some data.

Ready to begin our journey?

Getting Started with Slash GraphQL

Creating a New Backend

Once you’ve created your Slash GraphQL account, you can have your GraphQL backend up and running in just a few steps:

  1. Click the “Create a Backend” button.
  2. Give it a name. (For example, I chose “pokedex”.)
  3. Optionally, give the API endpoint URL a subdomain name. (Again, I chose “pokedex”.)
  4. Optionally, choose a provider and a zone. (This defaults to using AWS in the US West region.)
  5. Click the “Create New Backend” button to confirm your choices.
  6. Get your backend endpoint. (Mine looks like this: https://pokedex.us-west-2.aws.cloud.dgraph.io/graphql.)
  7. Click the “Create your Schema” button.

That’s it! After creating a new backend, you’ll have a live GraphQL database and API endpoint ready to go.

Creating a New Backend

Creating a Schema

Now that we have our backend up and running, we need to create the schema for the type of data we’ll have in our database. For the Pokédex app, we’ll have a Pokémon type and a PokémonType enum.
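A sketch of what this schema might look like (the name of the type-list field is an assumption; the other fields follow the description in this section):

```graphql
# The set of all Pokémon types (abbreviated here)
enum PokemonType {
  Fire
  Water
  Grass
  Electric
  # ...and the rest
}

# The shape of the data stored for each Pokémon
type Pokemon {
  id: Int! @search
  name: String! @search(by: [term])
  imgUrl: String!
  pokemonTypes: [PokemonType!]! @search  # assumed field name
  captured: Boolean! @search
}
```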

There’s a lot to unpack in that small amount of code! The PokémonType enum is straightforward enough—it’s a set of all the Pokémon types, including Fire, Water, Grass, and Electric. The Pokémon type describes the shape of our data that we’ll have for each Pokémon. Each Pokémon will have an ID, a name, an image URL for displaying the Pokémon’s picture, the types of Pokémon it is, and a status indicating whether or not the Pokémon is captured.

You can see that each field has a data type associated with it. For example, id is an Int (integer), name and imgUrl are String types, and captured is a Boolean. The presence of an exclamation point ! means the field is required. Finally, adding the @search keyword makes the field searchable in your queries and mutations.

To test out working with our database and newly created schema, we can use the API Explorer, which is a neat feature that allows us to run queries and mutations against our database right from within the Slash GraphQL web console. 

Populating Our Database

Let’s use the API Explorer to insert all of our Pokémon into the Pokédex database. We’ll use the following mutation:
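A sketch of such a mutation, assuming Dgraph’s auto-generated addPokemon mutation (field names and image URLs here are placeholders):

```graphql
mutation AddAllPokemon {
  addPokemon(input: [
    { id: 1, name: "Bulbasaur", imgUrl: "https://...", pokemonTypes: [Grass, Poison], captured: false },
    { id: 2, name: "Ivysaur", imgUrl: "https://...", pokemonTypes: [Grass, Poison], captured: false },
    { id: 3, name: "Venusaur", imgUrl: "https://...", pokemonTypes: [Grass, Poison], captured: false }
    # ...and so on for the remaining Pokémon
  ]) {
    pokemon {
      id
      name
    }
  }
}
```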

For brevity I’ve only shown the first nine Pokémon in the snippet above. Feel free to check out the full code snippet for adding all the Pokémon.

Adding all the Pokémon via the API Explorer


Now, for a quick sanity check, we can query our database to make sure that all our Pokémon have been added correctly. We’ll request the data for all our Pokémon like so:
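A sketch of the query, assuming Dgraph’s auto-generated queryPokemon query (field names are assumptions):

```graphql
query GetAllPokemon {
  queryPokemon(order: { asc: id }) {
    id
    name
    imgUrl
    pokemonTypes
    captured
  }
}
```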

Here’s what it looks like in the API Explorer:

Querying for all Pokémon in the API Explorer

We could also write a similar query that only returns the Pokémon names if that’s all the data we need. Behold, the beauty of GraphQL!
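The names-only version of that query might look like this:

```graphql
query GetAllPokemonNames {
  queryPokemon {
    name
  }
}
```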

Querying for All Pokémon Names in the API Explorer

Fetching Data in the App

Now that we’ve added our Pokémon to the Pokédex and verified the data is in fact there, let’s get it to show up in our app. Our app was built with React and Material UI for the frontend and was bootstrapped using create-react-app. We won’t be going through step-by-step how to build the app, but we’ll highlight some of the key parts. Again, the full code is available on GitHub if you’d like to clone the repo or just take a look.

When using Slash GraphQL in our frontend code, we essentially just make a POST request to our single API endpoint that we were provided when creating the backend. In the body of the request, we provide our GraphQL code as the query, we write a descriptive name for the query or mutation as the operationName, and then we optionally provide an object of any variables we reference in our GraphQL code.

Here’s a simplified version of how we follow this pattern to fetch our Pokémon in the app:
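A minimal sketch of that pattern (the query and field names are assumptions; the endpoint is the one created earlier). Node 18+ or the browser provides the global fetch used here:

```javascript
const ENDPOINT = "https://pokedex.us-west-2.aws.cloud.dgraph.io/graphql";

// Build the POST body the GraphQL endpoint expects: the query text,
// a descriptive operation name, and any variables.
function buildRequestBody(query, operationName, variables = {}) {
  return JSON.stringify({ query, operationName, variables });
}

const GET_ALL_POKEMON = `
  query GetAllPokemon {
    queryPokemon(order: { asc: id }) {
      id
      name
      imgUrl
      pokemonTypes
      captured
    }
  }`;

// POST the query to the single endpoint and return the list of Pokémon.
async function fetchAllPokemon() {
  const response = await fetch(ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: buildRequestBody(GET_ALL_POKEMON, "GetAllPokemon"),
  });
  const { data } = await response.json();
  return data.queryPokemon;
}
```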

We then take that data and loop over it using the Array map helper function to display each Pokémon in the UI.

The filters at the top of the page are hooked up to our API as well. When the filter values change, a new API request kicks off, but this time with a narrower set of search results. For example, here are all the Fire type Pokémon that we’ve captured:

Captured Fire-type Pokémon

The JavaScript for making an API request for Pokémon filtered by type and captured status looks a little like this:
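A sketch of that request; the filter syntax assumes the filter argument Dgraph generates for @search fields, and the names are assumptions:

```javascript
const GET_FILTERED_POKEMON = `
  query GetFilteredPokemon($pokemonType: PokemonType, $captured: Boolean) {
    queryPokemon(filter: { pokemonTypes: { eq: $pokemonType }, captured: $captured }) {
      id
      name
      imgUrl
      captured
    }
  }`;

// Collect only the filters the user actually selected in the dropdowns.
function buildFilterVariables(pokemonType, captured) {
  const variables = {};
  if (pokemonType) variables.pokemonType = pokemonType;
  if (captured !== null && captured !== undefined) variables.captured = captured;
  return variables;
}

async function fetchFilteredPokemon(pokemonType, captured) {
  const response = await fetch("https://pokedex.us-west-2.aws.cloud.dgraph.io/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      query: GET_FILTERED_POKEMON,
      operationName: "GetFilteredPokemon",
      variables: buildFilterVariables(pokemonType, captured),
    }),
  });
  const { data } = await response.json();
  return data.queryPokemon;
}
```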


Updating Data in the App

At this point we’ve sufficiently covered creating Pokémon from the API Explorer and fetching Pokémon within our Pokédex app via JavaScript. But what about updating Pokémon? Each Pokémon has a toggle switch that controls the Pokémon’s captured status. Clicking on the toggle updates the Pokémon’s captured status in the database and then updates the UI accordingly.

Here is our JavaScript to update a Pokémon:
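A sketch of updatePokemonCapturedStatus; the mutation shape assumes Dgraph’s generated updatePokemon mutation, and the id filter syntax is an assumption:

```javascript
const UPDATE_POKEMON = `
  mutation UpdatePokemon($id: Int!, $captured: Boolean!) {
    updatePokemon(input: {
      filter: { id: { eq: $id } },
      set: { captured: $captured }
    }) {
      pokemon {
        id
        name
        captured
      }
    }
  }`;

// Variables for the mutation: which Pokémon, and its new captured status.
function buildUpdateVariables(id, captured) {
  return { id, captured };
}

async function updatePokemonCapturedStatus(id, captured) {
  const response = await fetch("https://pokedex.us-west-2.aws.cloud.dgraph.io/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      query: UPDATE_POKEMON,
      operationName: "UpdatePokemon",
      variables: buildUpdateVariables(id, captured),
    }),
  });
  const { data } = await response.json();
  return data.updatePokemon.pokemon[0];
}
```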

We then call the updatePokemonCapturedStatus function when the toggle value changes. This kicks off the API request to update the value in the database. Then, we can either optimistically update the UI without waiting for a response from the backend, or we can wait for a response and merge the result for the single Pokémon into our frontend’s larger dataset of all Pokémon. We could also simply request all the Pokémon again and replace our frontend’s stored Pokémon info with the new result, which is what I chose to do.

Deleting Data from the Database

The last of the CRUD operations is “delete”. We won’t allow users to delete Pokémon from within the app’s UI; however, as the app admin, we may need to delete any mistakes or unwanted data from our database. To do so, we can use the API Explorer again.

For example, if we found that we have an extra Bulbasaur in our Pokédex, we could delete all the Bulbasaurs:
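A sketch of that delete, assuming Dgraph’s generated deletePokemon mutation and a term @search index on name:

```graphql
mutation DeleteBulbasaur {
  deletePokemon(filter: { name: { allofterms: "Bulbasaur" } }) {
    msg
    numUids
  }
}
```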

Deleting All Bulbasaur Pokémon Via the API Explorer

Then, we could add one Bulbasaur back:
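A sketch of re-adding a single Bulbasaur (the image URL is a placeholder):

```graphql
mutation AddBulbasaur {
  addPokemon(input: [{
    id: 1,
    name: "Bulbasaur",
    imgUrl: "https://...",
    pokemonTypes: [Grass, Poison],
    captured: false
  }]) {
    pokemon {
      id
      name
    }
  }
}
```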


Wrapping Up

So, what did we learn? By now we should understand how to work with Slash GraphQL in the context of a React app. We’ve covered all the CRUD operations to make a pretty sweet Pokédex app. We may have even caught a few Pokémon along the way.

Hopefully we didn’t… hurt ourselves in confusion… [cue audible groans from the readers].

We haven’t yet covered how to add authentication to secure our app or how to use the Apollo client when making our GraphQL requests, but those are important topics for another article!

As an experienced frontend developer but without much experience using GraphQL, working with Slash GraphQL was refreshingly easy. Getting set up was a breeze, and the API Explorer along with the documentation played a crucial role in helping me explore the various queries and mutations I could make with my data.

Slash GraphQL, I choose you! [more audible groans from the readers]


Interaction Media Features and Their Potential (for Incorrect Assumptions)

This is an updated and greatly expanded version of the article originally published on dev.opera back in 2015. That article referenced the Editor’s Draft, 24 March 2015 of the specification Media Queries Level 4, and contained a fairly big misunderstanding about how any-hover:none would end up being evaluated by browsers in practice.

The spec has since been updated (including clarifications and examples that I submitted following the publication of the original article), so this updated version removes the incorrect information of the original and brings the explanations in line with the most recent working draft. It also covers additional aspects relating to JavaScript touch/input detection.

The Media Queries Level 4 Interaction Media Features (pointer, hover, any-pointer and any-hover) are meant to allow sites to implement different styles and functionality (either CSS-specific interactivity like :hover, or JavaScript behaviors, when queried using window.matchMedia), depending on the particular characteristics of a user’s input devices.

Although the specification is still in working draft, interaction media features are generally well supported, though, to date, there are still some issues and inconsistencies in the various browser implementations — see the recent pointer/hover/any-pointer/any-hover test results, with references to relevant browser bugs.

Common use cases cited for interaction media features are often “make controls bigger/smaller depending on whether the user has a touchscreen device or is using a mouse/stylus” and “only use a CSS dropdown menu if the user has an input that allows hover-based interactions.”

@media (pointer: fine) {
  /* using a mouse or stylus - ok to use small buttons/controls */
}

@media (pointer: coarse) {
  /* using touch - make buttons and other "touch targets" bigger */
}

@media (hover: hover) {
  /* ok to use :hover-based menus */
}

@media (hover: none) {
  /* don't use :hover-based menus */
}

There are also examples of developers using these new interaction media features as a way of achieving standards-based “touch detection,” often just for listening to touch events when the device is identified as having a coarse pointer.

if (window.matchMedia && window.matchMedia("(pointer:coarse)").matches) {
  /* if the pointer is coarse, listen to touch events */
  target.addEventListener("touchstart", ...);
  // ...
} else {
  /* otherwise, listen to mouse and keyboard events */
  // ...
}
However, these approaches are slightly naive, and stem from a misunderstanding of what these interaction media queries are designed to tell us.

What’s the primary input?

One of the limitations of pointer and hover is that, by design, they only expose the characteristics of what a browser deems to be the primary pointer input. What the browser thinks, and what a user is actually using as their primary input, may differ — particularly now that the lines between devices, and the types of inputs they support, are becoming more and more blurry.

Microsoft Surface with a keyboard, trackpad, external bluetooth mouse, touchscreen.
Which one’s the “primary” input? The answer may depend on the activity.

Right out of the gate, it’s worth noting that interaction media features only cover pointer inputs (mouse, stylus, touchscreen). They don’t provide any way of detecting if a user’s primary input is a keyboard or keyboard-like interface, such as a switch control. In theory, for a keyboard user, a browser could report pointer: none, signaling that the user’s primary input is not a pointer at all. However, in practice, no browser offers a way for users to specify that they are in fact keyboard users. So keep in mind that, regardless of what the interaction media feature queries may return, it’s worth making sure that your site or app also works for keyboard users.

Traditionally, we could say that a phone or tablet’s primary input is the touchscreen. However, even on these devices, a user may have an additional input, like a paired bluetooth mouse (a feature that has been available for years on Android, is now supported in iPadOS, and is sure to land in iOS), that they are using as their primary input.

An Android phone with a paired bluetooth keyboard and mouse, with the screen showing an actual mouse pointer and right-click context menu in Chrome
An iPad with a paired bluetooth keyboard, mouse, and Apple Pencil, with the screen showing the mouse “dot” and right-click context menu in Safari

In this case, while the device nominally has pointer: coarse and hover: none, users may actually be using a fine pointer device that is capable of hovers. Similarly, if a user has a stylus (like the Apple Pencil), their primary input may still be reported as the touchscreen, but rather than pointer: coarse, they now have an input that can provide fine pointer accuracy.

In these particular scenarios, if all the site is doing is making buttons and controls bigger and avoiding hover-based interactions, that would not be a major problem for the user: despite using a fine and hover-capable mouse, or a fine but still not hover-capable stylus, they will get styling and functionality aimed at the coarse, non-hover-capable touchscreen.

If the site is using the cues from pointer: coarse for more drastic changes, such as then only listening to touch events, then that will be problematic for users — see the section about incorrect assumptions that can completely break the experience.

However, consider the opposite: a “regular” desktop or laptop with a touchscreen, like Microsoft’s Surface. In most cases, the primary input will be the trackpad/mouse — with pointer:fine and hover:hover — but the user may well be using the touchscreen, which has coarse pointer accuracy and does not have hover capability. If styling and functionality are then tailored specifically to rely on the characteristics of the trackpad/mouse, the user may find it problematic or impossible to use the coarse, non-hover-capable touchscreen.

Feature Touchscreen Touchscreen + Mouse Desktop/Laptop Desktop/Laptop + Touchscreen
pointer:coarse true true false false
pointer:fine false false true true
hover:none true true false false
hover:hover false false true true

For a similar take on this problem, see ”The Good & Bad of Level 4 Media Queries” by Stu Cox. Note that it refers to an even earlier iteration of the spec that only contained pointer and hover and a requirement for these features to report the least capable, rather than the primary, input device.

The problem with the original pointer and hover on their own is that they don’t account for multi-input scenarios, and they rely on the browser to be able to correctly pick a single primary input. That’s where any-pointer and any-hover come into play.

Testing the capabilities of all inputs

Instead of focusing purely on the primary pointer input, any-pointer and any-hover report the combined capabilities of all available pointer inputs.

In order to support multi-input scenarios, where different (pointer-based) inputs may have different characteristics, more than one of the values for any-pointer (and, theoretically, any-hover, but this aspect is useless as we’ll see later) can match if different input devices have different characteristics (in contrast to pointer and hover, which only ever report on the primary pointer input). In current implementations, these media features generally evaluate as follows:

Feature Touchscreen Touchscreen + Mouse Desktop/Laptop Desktop/Laptop + Touchscreen
any-pointer:coarse true true false true
any-pointer:fine false true true true
any-hover:none true false false false
any-hover:hover false true true true
Comparison of Firefox on Android’s media query results with just the touchscreen, and when adding a bluetooth mouse. Note how pointer and hover remain the same, but any-pointer and any-hover change to cover the new hover-capable fine input.

Going back to the original use cases for the interaction media features, instead of basing our decision to provide larger or smaller inputs or to enable hover-based functionality only on the characteristics of the primary pointer input, we can make that decision based on the characteristics of any available pointer inputs. Roughly translated, instead of saying “make all controls bigger if the primary input has pointer: coarse” or “only offer a CSS menu if the primary input has hover: hover,” we can build media queries that equate to saying, “if any of the pointer inputs is coarse, make the controls bigger” and “only offer a hover-based menu if at least one of the pointer inputs available to the user is hover-capable.”

@media (any-pointer: coarse) {
  /* at least one of the pointer inputs
     is coarse, best to make buttons and
     other "touch targets" bigger (using
     the query "defensively" to target
     the least capable input) */
}

@media (any-hover: hover) {
  /* at least one of the inputs is
     hover-capable, so it's at least
     possible for users to trigger
     hover-based menus */
}

Due to the way that any-pointer and any-hover are currently defined (as “the union of capabilities of all pointing devices available to the user”), any-pointer: none will only ever evaluate to true if there are no pointer inputs available, and, more crucially, any-hover: none will only ever be true if none of the pointer inputs present are hover-capable. Particularly for the latter, it’s therefore not possible to use the any-hover: none query to determine if only one or more of the pointer inputs present is not hover-capable — we can only use this media feature query to determine whether or not all inputs are not hover-capable, which is something that can just as well be achieved by checking if any-hover: hover evaluates to false. This makes the any-hover: none query essentially redundant.

We could work around this by inferring that if any-pointer: coarse is true, it’s likely a touchscreen, and generally those inputs are not hover-capable, but conceptually, we’re making assumptions here, and the moment there’s a coarse pointer that is also hover-capable, that logic falls apart. (And for those doubting that we may ever see a touchscreen with hover, remember that some devices, like the Samsung Galaxy Note and Microsoft’s Surface, have a hover-capable stylus that is detected even when it’s not touching the digitizer/screen, so some form of “hovering touch” detection may not be out of the question in the future.)

Combining queries for more educated guesses

The information provided by any-pointer and any-hover can of course be combined with pointer and hover, as well as the browser’s determination of what the primary input is capable of, for some slightly more nuanced assessments.

@media (pointer: coarse) and (any-pointer: fine) {
  /* the primary input is a touchscreen, but
     there is also a fine input (a mouse or
     perhaps stylus) present. Make the design
     touch-first; mouse/stylus users can
     still use this just fine (though it may
     feel a bit clunky for them?) */
}

@media (pointer: fine) and (any-pointer: coarse) {
  /* the primary input is a mouse/stylus,
     but there is also a touchscreen
     present. May be safest to make
     controls big, just in case users do
     actually use the touchscreen? */
}

@media (hover: none) and (any-hover: hover) {
  /* the primary input can't hover, but
     the user has at least one other
     input available that would let them
     hover. Do you trust that the primary
     input is in fact what the user is
     more likely to use, and omit hover-
     based interactions? Or treat hover
     as something optional — can be
     used (e.g. to provide shortcuts) for
     users that do use the mouse, but
     don't rely on it? */
}

Dynamic changes

Per the specification, browsers should re-evaluate media queries in response to changes in the user environment. This means that pointer, hover, any-pointer, and any-hover interaction media features can change dynamically at any point. For instance, adding/removing a bluetooth mouse on a mobile/tablet device will trigger a change in any-pointer / any-hover. A more drastic example would be a Surface tablet, where adding/removing the device’s “type cover” (which includes a keyboard and trackpad) will result in changes to the primary input itself (going from pointer: fine / hover: hover when the cover is present, to pointer: coarse / hover: none when the Surface is in “tablet mode”).

Screenshots of Firefox on a Surface tablet. With the cover attached, pointer:fine, hover:hover, any-pointer:coarse, any-pointer:fine, and any-hover:hover are true; once the cover is removed (and Windows asks if the user wants to switch to “tablet mode”), touch becomes the primary input with pointer:coarse and hover:none, and only any-pointer:coarse and any-hover:none are true.

If you’re modifying your site’s layout/functionality based on these media features, be aware that the site may suddenly change “under the user’s feet” whenever the inputs change — not just when the page/site is first loaded.
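Scripts can observe these dynamic changes too. As a minimal sketch (the `layoutFor` helper and the `data-layout` attribute are stand-ins, not part of any article code), a site could listen for a media query's `change` event and re-apply its layout whenever the set of inputs changes:

```javascript
// Sketch: react when inputs change after page load (e.g. a bluetooth mouse
// is paired, or a Surface's type cover is removed). The returned label is a
// stand-in for whatever your site actually adjusts.
function layoutFor(coarseMatches) {
  return coarseMatches ? "touch-friendly" : "default";
}

if (typeof window !== "undefined" && window.matchMedia) {
  const coarse = window.matchMedia("(any-pointer: coarse)");
  document.body.dataset.layout = layoutFor(coarse.matches);
  // Fires whenever a change in available inputs flips the query's result.
  coarse.addEventListener("change", (event) => {
    document.body.dataset.layout = layoutFor(event.matches);
  });
}
```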

Media queries may not be enough — roll on scripting

The fundamental shortcoming of the interaction media features is that they won’t necessarily tell us anything about the input devices that are in use right now. For that, we may need to dig deeper into solutions, like What Input?, that keep track of the specific JavaScript events fired. But of course, those solutions can only give us information about the user’s input after they have already started interacting with the site — at which point it may be too late to make drastic changes to your layout or functionality.

Keep in mind that even these JavaScript-based approaches can just as easily lead to incorrect results. That’s especially true on mobile/tablet platforms, or in situations where assistive technologies are involved, where it is common to see “faked” events being generated. For instance, if we look over the series of events fired when activating a control on desktop using a keyboard and screen reader, we can see that fake mouse events are triggered. Assistive technologies do this because, historically, a lot of web content has been coded to work for mouse users, but not necessarily for keyboard users, making a simulation of those interactions necessary for some functionalities.

Similarly, when activating “Full Keyboard Support” in iOS’s Settings → Accessibility → Keyboard, it’s possible for users to navigate web content using an external bluetooth keyboard, just as they would on desktop. But if we look at the event sequence for mobile/tablet devices and paired keyboard/mouse, that situation produces pointer events, touch events, and fallback mouse events — the same sequence we’d get for a touchscreen interaction.

Showing iOS settings with Full Keyboard Access enabled on the left and an iPhone browser window open to the right with the What Input tool.
When enabled, iOS’s “Full Keyboard Access” setting results in pointer, touch, and mouse events. What Input? identifies this as a touch input

In all these situations, scripts like What Input? will — understandably, and through no fault of their own — misidentify the current input type.

Incorrect assumptions that can completely break the experience

Having outlined the complexity of multi-input devices, it should be clear by now that approaches that only listen to specific types of events, like the form of “touch detection” we see commonly in use, quickly fall apart.

if (window.matchMedia && window.matchMedia("(pointer: coarse)").matches) {
  /* if the pointer is coarse, listen to touch events */
  target.addEventListener("touchstart", ...);
  // ...
} else {
  /* otherwise, listen to mouse and keyboard events */
  target.addEventListener("click", ...);
  // ...
}
In the case of a “touch” device with additional inputs — such as a mobile or tablet with an external mouse — this code will essentially prevent the user from being able to use anything other than their touchscreen. And on devices that are primarily mouse-driven but do have a secondary touchscreen interface — like a Microsoft Surface — the user will be unable to use their touchscreen.

Instead of thinking about this as “touch or mouse/keyboard,” realize that it’s often a case of “touch and mouse/keyboard.” If we only want to register touch events when there’s an actual touchscreen device for performance reasons, we can try detecting any-pointer: coarse. But we should also keep other regular event listeners for mouse and keyboard.

/* always, as a matter of course, listen to mouse and keyboard events */
target.addEventListener("click", ...);
// ...

if (window.matchMedia && window.matchMedia("(any-pointer: coarse)").matches) {
  /* if there's a coarse pointer, *also* listen to touch events */
  target.addEventListener("touchstart", ...);
  // ...
}
Alternatively, we could avoid this entire conundrum about different types of events by using pointer events, which cover all types of pointer inputs in a single, unified event model, and are fairly well supported.
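A rough sketch of that unified model (the `#target` selector and the `describeInput` helper are placeholders for illustration): one `pointerdown` listener handles mouse, touch, and pen alike, and `event.pointerType` still tells us which one fired.

```javascript
// Sketch: a single Pointer Events listener covers mouse, touch, and pen,
// instead of separate mouse/touch listeners.
function describeInput(pointerType) {
  switch (pointerType) {
    case "touch": return "coarse (finger)";
    case "pen":   return "fine (stylus)";
    case "mouse": return "fine (mouse)";
    default:      return "unknown";
  }
}

if (typeof document !== "undefined") {
  const target = document.querySelector("#target"); // placeholder selector
  target?.addEventListener("pointerdown", (event) => {
    console.log(describeInput(event.pointerType));
  });
}
```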

Give users an explicit choice

One potential solution for neatly circumventing our inability to make absolute determinations about which type of input the user is using is to treat the information provided by media queries and tools like What Input? purely as signals. Rather than using them to immediately switch between different layouts/functionalities (or worse, to only listen to particular types of events, potentially locking out any additional input types), use them only to decide when to provide users with an explicit way to switch modes.

For instance, see the way Microsoft Office lets you change between “Touch” and “Mouse” mode. On touch devices, this option is shown by default in the application’s toolbar, while on non-touch devices, it’s initially hidden (though it can be enabled, regardless of whether or not a touchscreen is present).

Screenshot of Microsoft Office's 'Touch/Mouse mode' dropdown, and a comparison of (part of) the toolbar as it's presented in each mode

A site or web application could take the same approach, and even set the default based on what the primary input is — but still allow users to explicitly change modes. And, using an approach similar to What Input?, the site could detect the first appearance of a touch-based input, and alert/prompt the user if they want to switch to a touch-friendly mode.
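That last idea can be sketched as follows; the `offerTouchMode()` function is hypothetical (standing in for whatever prompt or toggle your site would show), and the one-time detection mirrors the event-tracking spirit of What Input?:

```javascript
// Sketch: watch for the first touch-based pointer event, then offer to
// switch to a touch-friendly mode.
function isFirstTouch(pointerType, alreadySeen) {
  return pointerType === "touch" && !alreadySeen;
}

if (typeof window !== "undefined") {
  let seenTouch = false;
  window.addEventListener("pointerdown", (event) => {
    if (isFirstTouch(event.pointerType, seenTouch)) {
      seenTouch = true;
      offerTouchMode(); // hypothetical: prompt the user to switch modes
    }
  }, { capture: true });
}
```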

Potential for incorrect assumptions — query responsibly

Using Media Queries Level 4 Interaction Media Features and adapting our sites based on the characteristics of the available primary or additional pointer input is a great idea — but beware false assumptions about what these media features actually say. As with similar feature detection methods, developers need to be aware of what exactly they’re trying to detect, the limitations of that particular detection, and most importantly, consider why they are doing it — in a similar way to the problem I outlined in my article on detecting touch.

pointer and hover tell us about the capabilities of whatever the browser determines to be the primary input. any-pointer and any-hover tell us about the capabilities of all connected inputs, and combined with information about the primary input, they allow us to make educated guesses about a user’s particular device/scenario. We can use these features to inform our layout, or the type of interaction/functionality we want to offer; but don’t discount the possibility that those assumptions may be incorrect. The media queries themselves are not necessarily flawed (though the fact that most browsers still seem to have quirks and bugs adds to the potential problems); it just depends on how they’re used.

With that, I want to conclude by offering suggestions to “defend” yourself from the pitfalls of input detections.


Don’t assume a single input type. It’s not “touch or mouse/keyboard” these days, but “touch and mouse/keyboard” — and the available input types may change at any moment, even after the initial page load.

Don’t just go by pointer and hover. The “primary” pointer input is not necessarily the one that your users are using.

Don’t rely on hover in general. Regardless of what hover or any-hover suggest, your users may currently be using a pointer input that is not hover-capable, and you can’t detect this unless it’s the primary input (hover: none is true if the primary input lacks hover, but any-hover: none will only ever be true if none of the inputs are hover-capable). And remember that hover-based interfaces generally don’t work for keyboard users.


Make your interfaces “touch-friendly.” If you detect that there’s an any-pointer: coarse input (most likely a touchscreen), consider providing large touch targets and sufficient spacing between them. Even if the user is using another input, like a mouse, at that moment, no harm done.

Give users a choice. If all else fails, consider giving the user an option/toggle to switch between touch or mouse layouts. Feel free to use any information you can glean from the media queries (such as any-pointer: coarse being true) to make an educated guess about the toggle’s initial setting.

Remember about keyboard users. Regardless of any pointer inputs that the user may or may not be using, don’t forget about keyboard accessibility — it can’t be conclusively detected, so just make sure your stuff works for keyboard users as a matter of course.
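The “give users a choice” suggestion above can be sketched as follows. The `resolveMode` helper and the "input-mode" localStorage key are assumptions for illustration; the point is that an explicit, persisted user choice always wins over the media-query guess:

```javascript
// Sketch: default the mode from the media query, but let an explicit,
// previously saved user choice override it.
function resolveMode(savedMode, coarseMatches) {
  if (savedMode === "touch" || savedMode === "mouse") return savedMode;
  return coarseMatches ? "touch" : "mouse";
}

if (typeof window !== "undefined" && window.matchMedia) {
  const coarse = window.matchMedia("(any-pointer: coarse)").matches;
  const mode = resolveMode(localStorage.getItem("input-mode"), coarse);
  document.documentElement.dataset.inputMode = mode; // styled via CSS
}
```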
