Comparing Browsers for Responsive Design

There are a number of desktop apps whose goal is to show your site at different dimensions all at the same time. So you can, for example, write CSS and make sure it’s working across all the viewports at a single glance.

They are all very similar. For example, they do “event mirroring” meaning if you scroll in one window or device, then all the others do too, along with clicks, typing, etc. You can also zoom in and out to see many devices at once, just scaled down. Let’s see if we can root out any differences.


  • Windows, Mac, and Linux
  • “Solo” plan starts at $5/month and they have plans up from there

There are loads of little cool developer-focused features like:

  • Kill a port just by typing in the port number.
  • There’s a universal inspect mode. While you can’t apply a change in DevTools that affects all windows and devices at the same time, you can at least inspect across all of them, and when you click, it activates the correct DevTools session.
  • Throttle or go offline in a click.
  • Turn off JavaScript with a click.
  • Turn on Design Mode with a click (i.e. every element becomes contenteditable).
  • Toggles for hiding images, turning off all styles, outlining all elements, etc.
  • Override fonts with Google Font choices.

Responsively App

  • Universal inspect mode that selects the correct DevTools context
  • The option to “Disable SSL Validation” is clever, should you run into issues with local HTTPS.
  • One-click dark mode toggle


  • Windows and Mac
  • Free, with premium upgrades ($10/month). Some of the features, like scroll syncing and auto refreshing, are listed as premium features, which makes me think that the free version limits them in some way.
  • Autorefresh is a neat idea. You set up a “watcher” for certain file types in certain folders, and if they change, it refreshes the page. I imagine most dev environments have some kind of style injection or hot module reloading, but having it available anyway is useful for ones that don’t.
  • There is no universal DevTools inspector, but you can open the DevTools individually and they do have a custom universal inspection tool for showing the box model dimensions of elements.
  • There’s a custom error report screen.
  • You can enable “Browsing Mode” to turn off all the fancy device stuff and just use it as a semi-regular browser.


  • Windows, Mac, and Linux
  • Free, with premium plans starting at $10/month. Signing up is going to get you a good handful of onboarding emails over a week (with the option to opt out).
  • It has browser extensions for other browsers to pop your current tab over to Polypane
  • The universal inspect mode seems the most seamless of the bunch to me, but it doesn’t go so far as to propagate changes across windows and devices. Someone needs to do this! It does have a “Live CSS” pane that will inject additional CSS into all the open devices though, which is cool.
  • It can open devices based on breakpoints in your own CSS — and it actually works!


  • It’s on the Mac App Store for $5, but its website is offline, which makes it seem kinda dead.
  • It has zero fancy features. As the name implies, it simply shows the same site side-by-side in two columns that can be resized.


  • It’s not a separate browser app, but a browser extension. I kind of like this as I can stay in a canonical browser that I’m already comfortable with that’s getting regular updates.
  • The “breakpoints” view is a clever idea. I believe it should show your site at the breakpoints in your CSS, but it seems broken to me. I’m not sure if this is an actively developed project. (My guess is that it is not.)


What, you want me to pick a winner?

While I was turned off a little by Polypane’s hoop jumping and onboarding, I think it has the most well-considered feature set. Sizzy is close, but the interface is more cluttered in a way that doesn’t seem necessary. I admit I like how Blisk is really focused on “just look at the mobile view and then we’ll fill the rest of the space with a larger view” because that’s closer to how I actually work. (I rarely need to see a “device wall” of trivially different mobile screens.)

The fact that Responsively is free and open source is very cool, but is that sustainable? I think I feel safer digging into apps that are run as a business. The fact that I can just stay in my normal browser with Re:View means I have the highest chance of actually using it, but it feels like a dead project at the moment, so I probably won’t. So, for now, I guess I’ll have to crown Polypane.


A Newbie's Guide To The Best CSS Libraries

The advancement of the development world has simplified the life of many developers. CSS allows you to create stunning designs without breaking a sweat. It reduces extra effort, allowing you to focus on boosting productivity.

For newcomers, it is very important to understand the libraries that can simplify their tasks. So here’s a list of a few CSS libraries that can help you get more control.

1. Destyle.CSS

  • This opinionated library delivers a clean slate for HTML styling.
  • It ensures consistency across browsers.
  • One can easily reset custom margins and spacing.
  • Works well with multiple styling approaches.
  • It allows you to implement projects across multiple browsers.  
  • One can return line height and font size to their original state.
  • There is no need to reset web projects for different user agent styles.
  • Separate presentation and semantics.
  • For the main webpage, one can take advantage of style sheets.
  • Targets what is necessary and prevents style inspector bloat.  
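Because it is a reset, destyle.css is typically loaded before your own styles so your rules win the cascade. A minimal sketch — the CDN URL and version number here are assumptions, so check the project’s README for the current one:

```html
<!-- Load the reset first so your own styles override it -->
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/destyle.css@4.0.0/destyle.min.css">
<!-- Your project styles come after the reset -->
<link rel="stylesheet" href="styles.css">
```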

2. Animate.css

  • This library adds animation to generate better impressions and interests.
  • It is customizable.
  • Used for cross-browser animations.
  • Ready to use in web projects.
  • Best for sliders, emphasis, attention guiding hints, home pages.  
  • Comes with a few utility classes in order to simplify use.
  • Has commands that are easy to understand and implement.  
  • It can smoothly specify interactions, length, and delay time of an animation.
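Using it is a matter of adding classes to an element. A sketch of typical usage — recent versions of Animate.css prefix class names with animate__ (older versions used unprefixed names like animated bounce), so verify against the version you install:

```html
<!-- Heading bounces on load, after a two-second delay -->
<h1 class="animate__animated animate__bounce animate__delay-2s">Hello!</h1>

<style>
  /* Animation length can be tuned with the library's custom property */
  h1 {
    --animate-duration: 1.5s;
  }
</style>
```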

3. Raisin.css

  • It supports the CSS grid.
  • RaisinCSS is completely open to customization.  
  • It is lightweight and very simple to use.  
  • Supports Flexbox.
  • Utility and skeleton-based tool.
  • This CSS utility library features pre-built modules.
  • Very easy to deploy.
  • This library delivers a complete suite of building tools and blocks to customize the CSS.
  • Offers extensive utilities for a wide array of properties like overflow, display, visibility, position, etc.

4. CSS Wand

  • Sometimes we just need to add a simple animation, which shouldn’t require writing code from scratch.
  • This library allows you to add simple animations like grow, shrink, rotate, etc.
  • For implementation, this library’s snippets just need to be copied and pasted into your code.
  • One can smoothly copy and paste beautiful CSS animations.
  • They can be easily customized as per one’s choice.
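The snippets you copy are ordinarily plain CSS transitions. A hypothetical “grow on hover” effect of the kind the site offers might look like this — the class name and values are illustrative, not CSS Wand’s exact output:

```css
/* Grow the element slightly on hover; tweak scale() and duration to taste */
.grow {
  transition: transform 0.3s ease;
}
.grow:hover {
  transform: scale(1.1);
}
```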

5. Water.css

  • It is a just-add-CSS collection of styles.
  • Makes simple websites look better and nicer.  
  • One can write a simple static site with nice semantic HTML, and this library will keep the styling in check.
  • It is an amazing tool to add simple CSS functions to the website.
  • Simplifies the implementation of web development elements.
  • It is classless and extremely lightweight.
  • Since there is no class, one can implement it universally.
  • It has good browser support.
  • With good quality of code, it is also quite responsive.
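Because it’s classless, installation is a single stylesheet link and no markup changes are needed. A sketch, using the CDN path from the project’s docs (verify the version before shipping):

```html
<!-- One classless stylesheet; semantic HTML gets styled automatically -->
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/water.css@2/out/water.css">
```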

6. Font Awesome

  • Font Awesome is light and easy to install and use.  
  • It is a CSS library for vector icons and logos.
  • One can customize them for designs.
  • Icons come in multiple variants.
  • Provides up to 1000 free web fonts. 
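Once the library is loaded, an icon is just an element with the right classes. Note that class naming differs between Font Awesome versions (v4 used fa fa-heart; newer versions use style prefixes like fa-solid), so match the syntax to the version you install:

```html
<!-- Decorative icon rendered purely from CSS classes; hidden from screen readers -->
<i class="fa-solid fa-heart" aria-hidden="true"></i>
```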

7. Semantic UI

  • Has 3000+ CSS variables.
  • Has 50+ UI elements.
  • Streamlines the development process in a variety of ways.
  • Shares UI for front end development.
  • Treats classes and words as exchangeable concepts.
  • Delivers similar vocab to designers and developers and keeps the progress in sync.   
  • Developers with lesser experience can easily work with it.
  • Additional JavaScript implementation is not required.
  • Equipped with intuitive inheritance.
  • Utilizes simple phrases known as behaviors, in order to trigger functionality.   
  • Its classes are easy to understand.
  • Has high-level theming variables enabling complete designing freedom.
  • It can also be integrated with Angular and React.
  • Since it is open-source, it is one of the most popular libraries.
  • Uses human-friendly HTML to create responsive layouts.
  • Flexbox friendly.
  • For responsive design, it is created with EM values. 
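The “human-friendly HTML” point is easiest to see in the class names, which read like English phrases. A small sketch of typical Semantic UI markup:

```html
<!-- Classes read as a phrase: "ui primary button" -->
<button class="ui primary button">Save</button>
<button class="ui button">Cancel</button>
```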

Final Word!

Creating web designs is not that easy, especially when you are new and getting used to the concept. But these libraries can surely make your life a lot easier. All the seven mentioned libraries will help you focus on things that actually matter. On using them you will gradually improve your productivity and efficiency.


a11y is web accessibility | Eric Bailey

For better or worse, I spend a decent amount of time on social media.

When you read it regularly, you start to notice that there’s an ebb and flow to the kinds of things that get brought up. People post ideas and observations, followed by reactions, counter-reactions, meta-reactions, subtweets, side and backchannel conversations, etc.

One observation, “The term ‘a11y’ isn’t very accessible.” seems to pop up like clockwork. Most of the time, I bite my tongue when I see this surface-level remark and move on.

However, it seems like I stumble across popular web personalities making this observation with increasing frequency. Maybe it’s due to the increased attention accessibility is getting in the design and development spaces. Or maybe it’s due to the filter bubble of who I follow on social media. Regardless, it does seem to be compounding to the point where it compelled me to take action.

What is a11y?

First off, we need to establish what a11y is. It is a numeronym that stands for “accessibility.” This isn’t that difficult a thing to figure out—a quick Google search gives us the answers we need. You might not even have to leave your search results page to learn its meaning:

Google search results for, “what does a11y mean”. The first result a structured data reply, taken from Wikipedia and the article, “What Does A11Y Even Mean?” by Matt D. Smith. It’s description reads, “Accessibility is often abbreviated as the numeronym a11y, where the number 11 refers to the number of letters omitted. This parallels the abbreviations of internationalization and localization as i18n and l10n respectively.” The image comes from Matt D. Smith’s article and shows how the term “a11y” is formed from the word “accessibility.”.

Numeronyms aren’t new, nor are they something foreign to the industry. To name a few:

  • 911
  • 3D
  • d11n
  • P2P
  • Y2K
  • K-9
  • l10n
  • 411
  • i18n
  • .45
  • WWII
  • G8
  • 401(k)
  • 101
  • MST3K
  • W3C
  • S3

How many of these terms are you already familiar with? I’m willing to bet a decent amount.


I don’t know the first person to use the phrase “a11y”, but Twitter’s original character limit can be credited with popularizing it.

Originally used by accessibility practitioners to save on character count when talking shop, it was further codified as a hashtag when Twitter decided to natively implement a feature that was formed from emergent user behavior—a surprisingly uncharacteristic move for the platform.

As the tweet length on Twitter was expanded to a luxurious 280 characters, the hashtag stayed on, serving either as a way to continue to save characters, or as a categorical marker to flag tweet content for others.


Thanks to the English language’s adaptable nature, different words can have different connotations depending on the overall context of the sentence they’re placed in. In the context of the categorical marker, a11y serves as disambiguation, the process of making something more clear.

If I search Twitter for a11y, my results are more focused than if I search for accessibility. I’m not getting results for courses that make foreign languages more approachable, or for how access to public libraries makes research results richer. I’m also not seeing as much content that deals with accessibility at too high or generalized a level.

a11y neatly sidesteps this issue, which is why we can see examples of its usage outside of Twitter:

We don’t use the red dot of light amplification by stimulated emission of radiation to harass our cats. Nor do we check what radio direction and ranging has to say about the day’s weather forecast. When I’m on vacation, I don’t look at tropical fish with the help of a self-contained underwater breathing apparatus. Those tools are lasers, radar, and scuba, three acronyms whose use has become so ubiquitous that they’ve become common nouns.

I never see people asking why WWI is written out the way it is, either. Won’t people confuse that with the first Wonder Woman movie? Or the first season of Westworld? Or some new Weight Watchers product? I also never see people questioning technical numeronyms like P2P, S3, or W3C.

So why do people focus on a11y?


The easy answer is a quick joke about a perceived irony. A more complicated one is internalized ableism.

To quote the Center for Disability Rights, ableism is “a set of beliefs or practices that devalue and discriminate against people with physical, intellectual, or psychiatric disabilities and often rests on the assumption that disabled people need to be ‘fixed’ in one form or the other.”

It’s a pervasive problem, one that’s been normalized in our discourse to the point where it’s almost invisible. Think about the last time you called something or someone, “lame,” “stupid,” “crazy,” “retarded,” or “idiotic.”

Employing more effective adjectives is great. However, removing ableist language, practices, and thoughts involves listening to individuals who are willing to spend the physical and emotional labor to tell you what needs to be reevaluated.

Thanks to the internet, that kind of information can be discovered quickly and easily, provided you know the issue exists (oh, if only there was a hashtag to help us).


I’m nearsighted. I get migraines from time to time, as well as deal with bouts of anxiety and depression. But all things considered, I am an abled, straight, white, cisgender man. So why am I the one telling you all this?

As alluded to earlier, it is not the responsibility of a minoritized group to explain itself to you. Even if these types of questions and observations are asked in entirely good faith, it’s still a Sisyphean task that places a disproportionate burden on the person responding. This isn’t great for any group, especially one that created Spoon theory.

It’s also a common abuse tactic online, where the aggressor asks someone to explain themselves in bad faith, betting on them expending time and effort. Thanks to the problem of other minds, you can never know the intentions on the other end of the conversation.

But I’m a good person!

You may be thinking this after reading the previous section. And you know what? You probably are. I do think most people strive to do their best.

If reading about ableism made you feel uncomfortable, that’s probably a signal that your worldview, and therefore your sense of self, are being challenged. And if you made it to this part of the post, great! Sit with your discomfort for a bit and question why you feel this way.

I’m willing to bet that a non-trivial amount of readers skipped down to the next section, or more realistically, closed the tab when their cognitive dissonance flared as their mental models were challenged.

When you say, “The term a11y isn’t very accessible.” are you actually saying that a term that discusses disability needs fixing?

Subconsciously or consciously, the mental models that construct our worldviews can, and do, affect and infringe on others. Setting aside situational attribution for a moment, we need to unpack what “good” is, in the scope of perceiving yourself to be as close to perfect as possible for what society demands of you.

The thing is: perfect is static, but the world is dynamic. Participating in culture means constantly reevaluating your position in it, and how it affects and is affected by others (see the emergence of identity-first language). This takes work. But consider the alternative.


Allyship is the “active, consistent, and arduous practice of unlearning and re-evaluating, in which a person in a position of privilege and power seeks to operate in solidarity with a marginalized group.”

You can’t be “the best ally.” It’s not a state to achieve, a switch that’s flipped, a system to rank up in, or a line item on your résumé. Importantly, it’s not a status you can lord over someone else.

Allyship is an ongoing process where trust, education, and accountability are built with minoritized groups. It takes hard work, but it is well worth the time spent.

It’s also worth saying that I’m still working at combating my own conscious and unconscious biases and prejudices, especially ones concerning disabilities. I am constantly learning and re-learning about this space, and I am fortunate to be welcomed into it.

That being said, a11y is a term created by the web worker community and organically adopted by, and rallied behind by its practitioners—both disabled and abled.

There’s a part of me that feels incredibly uncomfortable writing this post. I write and speak a lot about disability and inclusion, but I do not speak for the group or try to be its savior. To that point, I’ve been trying to incorporate more disabled voices in my work. I don’t want to dominate the conversation, nor force others out who should be present.

I also want to avoid tone policing. This one is easy: Some disabled people don’t like the term. In this case, you don’t push them to explain their position. Modify your language and behavior to accommodate their stated preference, and feel grateful that they cared enough to tell you. Easy.


I should also do my due diligence pointing out some other issues with the term a11y. Most of them have been brought up by others, but they’re worth resurfacing here.


The term “a11y” looks a lot like the word “ally.” Intentional? Perhaps. Beneficial? I think so.

If you are concerned there will be confusion over how the term will be interpreted when read, that’s a problem with your typography. If you are empowered to, take the time to update it to be more accessible.

Spell checkers

Some dictionaries will flag a11y as a misspelled word. They’ll also probably flag many other terms, technical and otherwise. That doesn’t mean they’re not words.

This topic is a whole post unto itself, but it may be worth thinking about where the words for these dictionaries are sourced, and how they contribute to how we communicate.

It’s awkward to type

If you’re using a software keyboard, switching between alphabetical and numerical keyboards takes effort. This could be especially problematic if you have a motor control disability. However, it’s still fewer keypresses than typing “accessibility” out in full (6 versus 13 on iOS). Also, when typed enough, many software keyboards will add it to their dictionaries for autocompletion purposes.

Furthermore, “accessibility” is a long, multisyllabic word. That can make it problematic to read or spell out, especially for people with cognitive concerns.

Screen readers

Some people mention that “a11y” may not be pronounced as expected in a screen reader or narrator. To which I say: that’s exactly the point.

A screen reader reads the screen the way the person using it has instructed it to. You don’t want to try and overwrite its verbosity and pronunciation settings, as you’d be disrupting someone’s explicit preferences and expectations. Doing so also indirectly communicates that you think that the way they interact with technology is invalid.

How do I use it?

Like a lot of accessibility work, knowing how, when, and why to use the term largely depends on being conscientious of context. It’s also good writing practice.

Many professions require good communication, and therefore good writing. Copywriters, user researchers, designers, developers, project managers, translators and localizers, marketers, etc, should all be conscious of their audience; their level of experience, their reading level, their areas of expertise, etc.

For me, a11y is largely a categorical marker. Like a signal flare, I’ll append it to tweets when I want to increase the chances of the content being noticed outside of my immediate followers.

Like any other abbreviation, I observe the Web Content Accessibility Guideline’s (WCAG) Success Criterion 3.1.4. Like any other acronym or industry jargon, I spell out the term in full the first time it appears in my writing, then follow it up with the acronym it represents:

Accessibility (<abbr>a11y</abbr>)
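In full markup, pairing the abbreviation with a title attribute (the standard technique behind that success criterion) might look something like this — the sentence itself is just an illustration:

```html
<p>Accessibility (<abbr title="accessibility">a11y</abbr>) is a practice, not a checkbox.</p>
```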

As it is industry jargon, I try to be aware of the context and known level of cognition my writing will ultimately wind up in. If it is for peers, the term might be used casually, alongside other jargon like AT, DOM, AA, JSON, etc.

How do I not use it?

If it’s for an audience who is new to working on the web, including learning about accessibility, I will probably not use the term until they feel more acquainted with the space.

I also don’t try to slap it in to replace the term “accessibility.” Sentences like, “Make the best a11y on your website with these 5 tips” feel forced and artificial.

a11y is here to stay

The genie is out of the bottle. The ship has sailed. The egg is scrambled.

If you’re a popular personality on social media, it’s worth some self-reflection about the ripple effect of making this observation about a11y’s perceived obtuseness.

By leaping to score some quick clever points, you’re also signaling that some negative behaviors are acceptable to model: namely gatekeeping, scapegoating, and most importantly, denying the self-disclosed identity and viewpoints of a minoritized group.

I’m less concerned about this as your private opinion voiced publicly than I am about what your legions of followers will think and do after reading it.

There are already enough horrible misconceptions about disability out there; we don’t need any more. If your reaction here is to think, “Well, you’re the one gatekeeping in telling me what I can’t say!” I’d ask you to reread the “But I’m a good person!” section.

I’m not naïve enough to think this will close discussion on the topic, but I do hope this article is something you can send as a reply to the next person who gets all fired up to make this kind of tiresomely pithy observation.

If you’ve been sent this post, why not turn your energies towards something more constructive? Auditing your site for accessibility issues is a good place to start. Even better: use that time to read, and to listen.

Further reading


Brand New Flutter Apple Store Publishing and TestFlight Proc…

In this article, we will learn how to publish a new Flutter project to the App Store using TestFlight.

Before we start, I assume that you have already coded and successfully tested your Flutter app and are ready to share it with other iOS users.

First of all, you need to create an iOS developer account via this link: 

It is necessary to become a paid member of the Apple Developer Program with your Apple Developer Account; the estimated price is $99 per year.

Now we are ready to start.

1) I assume that you already have the Xcode IDE and Flutter tooling on your macOS device. If you don’t, please click here to download it for free. ( )

2) Now open your project in Xcode as below.

Opening XCode

3) From the Runner options, select a generic device.

Selecting runner

4) Go back to Android Studio and set the Flutter SDK path in the terminal

Setting Flutter SDK path
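Setting the path might look like the following sketch; the install location is an assumption, so point it at wherever you actually unpacked the Flutter SDK:

```shell
# Add the Flutter SDK's bin directory to PATH for this terminal session.
# The directory below is an assumed install location — adjust it to yours.
export PATH="$PATH:$HOME/development/flutter/bin"

# Confirm the SDK is now discoverable (prints the flutter binary's path)
which flutter
```

To make this permanent, the same export line can be added to your shell profile (e.g. ~/.zshrc on recent macOS).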

5) Type the following command in the terminal to build the iOS release, and wait until it finishes

   flutter build ios --release

6) After the build completes successfully from Android Studio, go to Xcode and select the option below.

Selecting Archive in XCode

7) After the archive is built, click on the Distribute App option

Clicking Distribute App

8) Select a method of distribution. I will select App Store Connect because I will release the app via the store

Selecting a distribution method

9) Click next to proceed

Uploading app

10) Click next to proceed

App Store Connection distribution options

11) Now you will be able to see the contents of your package

Contents of package

12) Click on the Upload button; it will display the screen below if successful

Clicking on upload

13) Go to TestFlight, where the application should be visible. Provide all the information related to the application and click on the Save button

Testing app

14) After it gets approved, to send a testing invitation to a user, click on the application, add an individual tester, and fill in the required information as shown below

Adding tester's apple account

15) After the beta is approved for submission, fill in all the information as shown below and click on the Submit for Review button

Clicking submit


How to Make a Media Query-less Card Component

Fun fact: it’s possible to create responsive components without any media queries at all. Certainly, if we had container queries, those would be very useful for responsive design at the component level. But we don’t. Still, with or without container queries, we can do things to make our components surprisingly responsive. We’ll use concepts from Intrinsic Web Design, brought to us by Jen Simmons.

Let’s dive together into the use case described below, the solutions regarding the actual state of CSS, and some other tricks I’ll give you.

A responsive “Cooking Recipe” card

I recently tweeted a video and Pen of a responsive card demo I built using a recipe for pizza as an example. (It’s not important to the technology here, but I dropped the recipe at the end because it’s delicious and gluten free.)

The demo here was a first attempt based on a concept from one of Stéphanie Walter’s talks. Here is a video to show you how the card will behave:

And if you want to play with it right now, here’s the Pen.

Let’s define the responsive layout

A key to planning is knowing the actual content you are working with, and the importance of those details. Not that we should be hiding content at any point, but for layout and design reasons, it’s good to know what needs to be communicated first, and so forth. We’ll be displaying the same content no matter the size or shape of the layout.

Let’s imagine the content with a mobile-first mindset to help us focus on what’s most important. Then when the screen is larger, like on a desktop, we can use the additional space for things like glorious whitespace and larger typography. Usually, a little prioritization like this is enough to be sure of what content is needed for the cards at any and all viewport sizes.

Let’s take the example of a cooking recipe teaser:

In her talk, Stéphanie had already done the job and prioritized the content for our cards. Here’s what she outlined, in order of importance:

  1. Image: because it’s a recipe, you eat with your eyes!
  2. Title: to be sure what you’re going to cook.
  3. Keywords: to catch key info at the first glance.
  4. Rating info: for social proof.
  5. Short description: for the people who read.
  6. Call to action: what you expect the user to do on this card.

This may seem like a lot, but we can get all of that into a single smart card layout!

Non-scalable typography

One of the constraints with the technique I’m going to show you is that you won’t be able to get scalable typography based on container width. Scalable typography (e.g. “fluid type”) is commonly done with the viewport width (vw) unit, which is based on the viewport, not the parent element.

So, while we might be tempted to reach for fluid type as a non-media query solution for the content in our cards, we won’t be able to use fluid type based on some percentage of the container width nor element width itself, unfortunately. That won’t stop us from our goal, however!

A quick note on “pixel perfection”

Let’s talk to both sides here…

Designers: Pixel perfect is super ideal, and we can certainly be precise at a component level. But there has to be some trade-off at the layout level. Meaning you will have to provide some variations, but allow the in-betweens to be flexible. Things shift in responsive layouts and precision at every possible screen width is a tough ask. We can still make things look great at every scale though!

Developers: You’ll have to be able to fill the gaps between the layouts that have prescribed designs to allow content to be readable and consistent between those states. As a good practice, I also recommend trying to keep as much of a natural flow as possible.

You can also read Ahmad’s excellent article on the state of pixel perfection.

A recipe for zero media queries

Remember, what we’re striving for is not just a responsive card, but one that doesn’t rely on any media queries. It’s not that media queries should be avoided; it’s more about CSS being powerful and flexible enough for us to have other options available.

To build our responsive card, I was wondering if flexbox would be enough or if I would need to do it with CSS grid instead. Turns out flexbox is indeed enough for us this time, using the behavior and magic of the flex-wrap and flex-basis properties in CSS.

The gist of flex-wrap is that it allows elements to break onto a new line when the space for content gets too tight. You can see the difference between flex with a no-wrap value and with wrapping in this demo:
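As a sketch of that difference, the only change between the two behaviors is the flex-wrap value (the class names and sizes here are illustrative, not the demo’s exact code):

```css
/* Items stay on one line and shrink or overflow when space runs out */
.container--nowrap {
  display: flex;
  flex-wrap: nowrap;
}

/* Items drop onto a new line when the container gets too tight */
.container--wrap {
  display: flex;
  flex-wrap: wrap;
}

/* Each item asks for an ideal width; wrapping kicks in below it */
.container--nowrap > *,
.container--wrap > * {
  flex-basis: 200px;
  margin: 0 10px;
}
```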

The flex-basis value of 200px is more of an instruction than a suggestion for the browser, but if the container doesn’t offer enough space for it, the elements move down onto a new line. The margins between columns even force the initial wrapping.

I used this wrapping logic to create the base of my card. Adam Argyle also used it on the following demo features four form layouts with a mere 10 lines of CSS:

In his example, Adam uses flex-basis and flex-grow (used together in the flex shorthand property) to allow the email input to take three times the space occupied by the name input or the button. When the browser estimates there is not enough room to display everything on the same row, the layout breaks into multiple lines on its own, without us having to manage the changes in media queries.

I also used the clamp() function to add even more flexibility. This function is kind of magical. It allows us to resolve a min() and a max() calculation in a single function. The syntax goes like this:

clamp(MIN, VALUE, MAX)

It’s like resolving a combination of the max() and min() functions:

max(MIN, min(VAL, MAX))

You can use it for all kinds of properties that accept: <length>, <frequency>, <angle>, <time>, <percentage>, <number>, or <integer>.

The “No-Media Query Responsive Card” demo

With all of these new-fangled CSS powers, I created a flexible responsive card without any media queries. It might be best to view this demo in a new tab, or with a 0.5x option in the embed below.

Something you want to note right away is that the HTML code for the two cards is exactly the same; the only difference is that the first card is inside a 65% wide container and the second one inside a 35% wide container. You can also play with the dimensions of your window to test the card's responsiveness.

The important part of the code in that demo is on these selectors:

  • .recipe is the parent flex container.
  • .pizza-box is a flex item that is the container for the card image.
  • .recipe-content is a second flex item and is the container for the card content. 

Now that we know how flex-wrap works, and how flex-basis and flex-grow influence element sizing, we just need to quickly explain the clamp() function, because I used it for responsive font sizing in place of where we may have normally reached for fluid type.

I wanted to use calc() and custom properties to calculate font sizes based on the width of the parent container, but I couldn’t find a way, as a 100% value has a different interpretation depending on the context. I kept it for the middle value of my clamp() function, but the end result was over-engineered and didn’t wind up working as I’d hoped or expected.

/* No need, really */
font-size: clamp(1.4em, calc(.5em * 2.1vw), 2.1em);

Here’s where I landed instead:

font-size: clamp(1.4em, 2.1vw, 2.1em);

That's what I did to make the card title's size adjust with the screen size but, like we discussed much earlier when talking about fluid type, we won't be able to size the text by the parent container's width.

Instead, we’re basically saying this with that one line of CSS:

I want the font-size to equal to 2.1vw (2.1% of the viewport width), but please don’t let it go below 1.4em or above 2.1em.

This maintains the title's prioritized importance by allowing it to stay larger than the rest of the content, while keeping it readable. And, hey, it still grows and shrinks with the screen size!

And let's not forget about responsive images. The content requirements say the image is the most important piece of the bunch, so we definitely need to account for it and make sure it looks great at all screen sizes. Now, you may want to do something like this and call it a day:

max-width: 100%;
height: auto;

But that doesn't always result in the best rendering of an image. Instead, we have the object-fit property, which not only responds to the height and width of the image's content box, but, when used with the object-position property, allows us to crop the image and control how it stretches inside the box.

img {
  max-width: 100%;
  min-height: 100%;
  width: auto;
  height: auto;
  object-fit: cover;
  object-position: 50% 50%;
}

As you can see, that is a lot of properties to write down. They're necessary because of the explicit width and height attributes in the HTML <img> code. If you remove those HTML attributes (which I don't recommend for performance reasons), you can keep the object-* properties in CSS and remove the others.

An alternative recipe for no media queries

Another technique is to use flex-grow as a unit-based growing value, with an absurdly enormous value for flex-basis. The idea is borrowed straight from Heydon Pickering's great "Holy Albatross" demo.

The interesting part of the code is this:

/* Container */
.recipe {
  --modifier: calc(70ch - 100%);

  display: flex;
  flex-wrap: wrap;
}

/* Image dimension */
.pizza-box {
  flex-grow: 3;
  flex-shrink: 1;
  flex-basis: calc(var(--modifier) * 999);
}

/* Text content dimension */
.recipe-content {
  flex-grow: 4;
  flex-shrink: 1;
  flex-basis: calc(var(--modifier) * 999);
}

Proportional dimensions are created by flex-grow, while the flex-basis value ends up either invalid (negative) or extremely high. It gets extremely high when calc(70ch - 100%), the value of --modifier, resolves to a positive number. When the values are extremely high, each item fills the available space, creating a column layout; when the values are invalid, the items lay out inline.

The value of 70ch acts like the breakpoint in the recipe component (almost like a container query). Change it depending on your needs.

Let’s break down the ingredients once again

Here are the CSS ingredients we used for a media-query-less card component:

  • The clamp() function helps resolve a “preferred” vs. “minimum” vs. “maximum” value.
  • The flex-basis property with a negative value decides when the layout breaks into multiple lines.
  • The flex-grow property is used as a unit value for proportional growth.
  • The vw unit helps with responsive typography.
  • The  object-fit property provides finer responsiveness for the card image, as it allows us to alter the dimensions of the image without distorting it.

Going further with quantity queries

I've got another trick for you: we can adjust the layout depending on the number of items in the container. That's not responsiveness driven by the dimensions of a container, but by the context where the content lies.

There is no actual media query for number of items. It’s a little CSS trick to reverse-count the number of items and apply style modifications accordingly.

The demo uses the following selector:

.container > :nth-last-child(n+3),
.container > :nth-last-child(n+3) ~ * {
  flex-direction: column;
}

Looks tricky, right? This selector allows us to apply styles to an element counted from the end, along with all of its following siblings. Neat!

Una Kravets explains this concept really well. We can translate this specific usage like this:

  • .container > :nth-last-child(n+3): Matches any child of .container that is third or greater counting from the last child. In other words, it only matches when the container holds at least three items.
  • .container > :nth-last-child(n+3) ~ *: Selects every sibling that follows those matched elements. This helps account for any other cards we add.
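To make the counting concrete, here is the matching logic of :nth-last-child(n+3) modeled in Python (illustrative only, not how a browser engine implements it):

```python
def nth_last_child_n_plus_3(items):
    # CSS :nth-last-child(n+3) matches every element that is at least
    # third from the end, so it only matches when there are 3+ items.
    return [item for i, item in enumerate(items) if len(items) - i >= 3]

print(nth_last_child_n_plus_3(["a", "b"]))            # too few items: no match
print(nth_last_child_n_plus_3(["a", "b", "c", "d"]))  # matches "a" and "b"
```

With fewer than three children nothing matches, so the column styles never apply; add a third card and the rule kicks in.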

Hugo Giraudel’s “Selectors Explained” tool really helps translate complex selectors into plain English, if you’d like another translation of how these selectors work.

Another way to get “quantity” containers in CSS is to use binary conditions. But the syntax is not easy and seems a bit hacky. You can reach me on Twitter if you need to talk about that — or any other tricks and tips about CSS or design.

Is this future proof?

All the techniques I presented you here can be used today in a production environment. They’re well supported and offer opportunities for graceful degradation.

Worst case scenario? Some unsupported browser, say Internet Explorer 9, won’t change the layout based on the conditions we specify, but the content will still be readable. So, it’s supported, but might not be “optimized” for the ideal experience.

Maybe one day we will finally get to see the holy grail of container queries in the wild. Hopefully the Intrinsic Web Design patterns we’ve used here resonate with you and help you build flexible and “intrinsically-responsive” components in the meantime.

Let’s get to the real reason for this post… the pizza!

Gluten free pan pizza recipe

You can pick the toppings. The important part is the dough, and here is that:


  • 3¼ cups (455g) gluten free flour
  • 1 tablespoon, plus 1 teaspoon (29g) brown sugar
  • 2 teaspoons of kosher salt
  • 1/2 cube of yeast
  • 2½ cups (400 ml) whole almond milk
  • 4 tablespoons of melted margarine
  • 1 tablespoon of cornstarch (Maizena)


  1. Mix all the dry ingredients together.
  2. Add the liquids.
  3. Let it double in size for 2 hours. I’d recommend putting a wet dish towel over the bowl with the dough, and placing the bowl close to a warm area (but not too hot, because we don’t want it to start cooking right this second).
  4. Put it in an oiled pan. Let it double in size again for approximately 1 hour.
  5. Cook in the oven at 250°C (about 480°F) for 20 minutes.

Thanks, Stéphanie, for the recipe!


The A-Z of Web Scraping in 2020 [A How-To Guide]


Many websites like Twitter, YouTube, or Facebook provide an easy way to access their data through a public API. All the information you obtain through an API is both well structured and normalized. For example, it can be in JSON, CSV, or XML format.

3 Ways to Extract Data From Any Website

Web Scraping vs API

#1 Official API.

First of all, you should always check whether there’s an official API that you can use to get the desired data.

Sometimes, though, the official API is not updated promptly, or some of the data is missing from it.

#2 “Hidden API”.

The backend might generate data in JSON or XML format, consumed by the frontend.

Investigating XMLHttpRequest (XHR) calls with a web browser inspector gives us another way to access the data. It provides the data the same way an official API would.

How do we get this data? Let’s hunt for the API endpoint!

For example, let’s look at a resource showing local COVID-19 cases to website visitors.

  1. Open Chrome DevTools by pressing Ctrl+Shift+I.
  2. Once the console appears, go to the “Network” tab.
  3. Select the XHR filter to catch an API endpoint as an “XHR” request, if one is available.
  4. Make sure the “recording” button is enabled.
  5. Refresh the webpage.
  6. Click stop “recording” when you see that the data-related content has appeared on the webpage.

Learn how to find an API endpoint using Chrome DevTools

Now you can see a list of requests on the left. Investigate them. The preview tab shows an array of values for the item named "v1."

Press the “Headers” tab to see the details of the request. The most important thing for us is the request URL for “v1”.
Now, let’s just open that URL in another browser tab to see what happens.

Cool! That’s what we’re looking for.

Taking data either directly from an API or using the technique described above is the easiest way to download datasets from websites. But what do you do if the owners of a website don’t grant their users access through an API?

Of course, these approaches are not going to work for every website, and that is why web scraping libraries are still necessary.

#3 Website scraping.

What Is Web Scraping?

According to Wikipedia: “Web scraping, web harvesting, or web data extraction is data scraping used for extracting data from websites.”

Web data extraction, or web scraping, is the only way to get the desired data if the owners of a website don’t grant their users access through an API. Web scraping is a data extraction technique that substitutes for manual, repetitive typing or copy-pasting.

Know the Rules!

What should you check before scraping a website?

☑️ Robots.txt is the first thing to check when you plan to scrape website data. The robots.txt file lists the rules for how you or a bot should interact with the site. You should always respect and follow all the rules listed in robots.txt.
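Python's standard library can check those rules for you. A minimal sketch, assuming a hypothetical robots.txt (normally you'd fetch it from the site's root, e.g. https://example.com/robots.txt):

```python
from urllib import robotparser

# Hypothetical robots.txt content for illustration.
rules = """\
User-agent: *
Disallow: /private/
Crawl-delay: 10
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("MyBot", "https://example.com/private/page"))  # disallowed
print(rp.can_fetch("MyBot", "https://example.com/public/page"))   # allowed
print(rp.crawl_delay("MyBot"))  # seconds to wait between requests
```

Checking can_fetch() before every request, and honoring the crawl delay, keeps your bot on the right side of the site's stated rules.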

☑️ Make sure you also look at a site’s Terms of Use. If the terms do not say that they limit access for bots and spiders and do not prohibit rapid requests to the server, crawling is fine.

☑️ To be compliant with the EU General Data Protection Regulation, or GDPR, you should first evaluate your web scraping project.

If you don’t scrape personal data, then GDPR does not apply. In this case, you can skip this section and move to the next step.

☑️ Be careful about how you use the extracted data, as you may sometimes violate copyrights. If the terms of use do not limit a particular use of the data, anything goes so long as the crawler does not violate copyright.

Find more information: Is web scraping legal or not?


Typical websites have sitemap files containing a list of links belonging to the site. They make it easier for search engines to crawl websites and index their pages. Getting URLs to crawl from sitemaps is always much faster than gathering them sequentially with a web scraper.
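Pulling URLs out of a sitemap takes nothing more than an XML parser. A minimal sketch with a hypothetical sitemap payload (real sitemaps live at paths like /sitemap.xml and can also be sitemap index files pointing at further sitemaps):

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical sitemap.xml payload.
sitemap_xml = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/</loc></url>
  <url><loc>https://example.com/products</loc></url>
</urlset>"""

# Sitemap elements are namespaced, so register the namespace for findall().
ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
root = ET.fromstring(sitemap_xml)
urls = [loc.text for loc in root.findall("sm:url/sm:loc", ns)]
print(urls)
```

The resulting list feeds straight into a crawl queue instead of having the scraper discover links page by page.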

Render JavaScript-driven web sites

JavaScript frameworks like Angular, React, and Vue.js are widely used for building modern web applications. In short, a typical web application frontend consists of HTML + JS code + CSS styles. Usually, the source HTML initially does not contain all the actual content. During a page load, HTML DOM elements are loaded dynamically as the JavaScript code renders. As a result, we get the rendered static HTML.

⚠️ You can do web scraping with Selenium, but it is not a good idea. Many tutorials teach how to use Selenium for scraping data from websites, yet its home page clearly states that Selenium is “for automating web applications for testing purposes.”


PhantomJS was suitable for such tasks earlier, but its development has been suspended since 2018.

⚠️ Alternatively, Scrapinghub’s Splash was an option for Python programmers before Headless Chrome.

☑️ Your browser is a website scraper by its nature. The best way nowadays is to use Headless Chrome as it renders web pages “natively.”

Puppeteer Node library is the best choice for Javascript developers to control Chrome over DevTools Protocol.

Go developers have an option to choose from either chromedp or cdp to access Chrome via DevTools protocol.

Check out online HTML scraper that renders Javascript dynamic content in the cloud.

Be smart. Don’t let them block you.

Some websites use anti-scraping techniques to prevent web scraper tools from harvesting online data. Web scraping is always a “cat and mouse” game. So, when building a web scraper, consider the following ways to avoid getting blocked, or you risk not receiving the desired results.

Tip #1: Make random delays between requests.

When a human visits a website, the speed of accessing different pages is many times slower than a web crawler’s. A web scraper, by contrast, can extract several pages simultaneously in no time. Huge traffic hitting a site in a short period of time looks suspicious.

You should find out the ideal crawling speed that is individual for each website. To mimic human user behavior, you can add random delays between requests.

Don’t create excessive load for the site. Be polite to the site that you extract data from so that you can keep scraping it without getting blocked.
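A random delay is a one-liner in most languages. A minimal Python sketch (the 1 to 5 second bounds are an arbitrary example, not a universal rule; tune them per site):

```python
import random
import time


def polite_sleep(min_s=1.0, max_s=5.0):
    # Sleep a random amount between requests to mimic human pacing.
    delay = random.uniform(min_s, max_s)
    time.sleep(delay)
    return delay
```

Calling polite_sleep() between page fetches smooths the traffic pattern so that requests no longer arrive in machine-gun bursts.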

Tip #2: Change User-agents.

When a browser connects to a website, it passes the User-Agent (UA) string in the HTTP header. This field identifies the browser, its version number, and the host operating system.

A typical user agent string looks like this: "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.90 Safari/537.36".

  • If multiple requests to the same domain carry the same user-agent, the website can detect and block you very quickly.
  • Some websites block specific requests if they contain a User-Agent that differs from a common browser’s.
  • If the “user-agent” value is missing, many websites won’t allow access to their content.

What is the solution?

You have to build a list of user-agents and rotate them randomly.
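A simple rotation helper might look like this (the UA strings below are illustrative examples; keep your own list current):

```python
import random

# A small, hypothetical pool of desktop user-agent strings.
USER_AGENTS = [
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) "
    "Chrome/77.0.3865.90 Safari/537.36",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) "
    "Chrome/78.0.3904.70 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_1) AppleWebKit/605.1.15 "
    "(KHTML, like Gecko) Version/13.0 Safari/605.1.15",
]


def random_headers():
    # Attach a randomly chosen UA to each outgoing request's headers.
    return {"User-Agent": random.choice(USER_AGENTS)}
```

Pass random_headers() to whatever HTTP client you use so consecutive requests don't all advertise the same browser.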

Tip #3: Rotate IP addresses. Use Proxy servers.

If you send multiple requests from the same IP address while scraping, the website considers it suspicious behavior and blocks you.

For the most straightforward cases, it is enough to use the cheapest Datacenter proxies.  But some websites have advanced bot detection algorithms, so you have to use either residential or mobile proxies to scrape them.

For example, say someone in Europe wants to extract data from a website that limits access to US users only. The obvious move is to make requests through a proxy server located in the USA, since the traffic then appears to come from a local US IP address.

To obtain country-specific versions of target websites, just specify any arbitrary country in request parameters in Dataflow Kit HTML scraping service.


Tip #4: Avoid scraping patterns. Imitate human behavior.

Humans are not consistent while navigating a website. They do different random actions like clicks on the page and mouse movements.

In contrast, web scraping bots follow specified patterns when crawling a website.

Teach your scraper to imitate human behavior. This way, website bot-detection algorithms have no reason to block you from automating your scraping tasks.

Tip #5: Keep an eye on anti-scraping tools.

One of the most frequently used tools for detecting hacking or web scraping attempts is the “honey pot.” Honey pots are not visible to the human eye but can be seen by bots or web scrapers. Right after your scraper clicks such a hidden link, the site blocks you quite easily.

Find out whether a link has the "display: none" or "visibility: hidden" CSS property set; if it does, just stop following that link. Otherwise, the site will immediately identify you as a bot or scraper, fingerprint the properties of your requests, and ban you.
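A naive inline-style check could look like this (treat it as a sketch, not a complete defense: real pages often hide honeypot links via external stylesheets, so inspecting the rendered DOM in a headless browser is more reliable):

```python
def looks_like_honeypot(style: str) -> bool:
    # Flag a link whose inline style hides it from human visitors.
    s = style.replace(" ", "").lower()
    return "display:none" in s or "visibility:hidden" in s


print(looks_like_honeypot("display: none"))       # hidden: skip this link
print(looks_like_honeypot("color: red"))          # visible: safe to follow
```

Run every candidate link through a check like this before adding it to the crawl queue.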

Tip #6: Solve online CAPTCHAs.

While scraping a website on a large scale, there is a chance you’ll be blocked. Then you start seeing CAPTCHA pages instead of web pages.

CAPTCHA is a test used by websites to battle back against bots and crawlers, asking website visitors to prove they’re human before proceeding.

Many websites use reCAPTCHA from Google. Its older v2 asks users to tick an “I’m not a robot” box, while the latest version, v3, analyzes human behavior in the background and returns a score without any user interaction.

CAPTCHA solving services use two methods for solving CAPTCHAs:

☑️ Human-based CAPTCHA Solving Services

When you send your CAPTCHA to such a service, human workers solve it and send the answer back.

☑️ OCR (Optical Character Recognition) Solutions

In this case, OCR technology is used to solve CAPTCHAs automatically.

Point-and-Click Visual Selector

Of course, we don’t intend only to download and render JavaScript-driven web pages but to extract structured data from them.

Before starting data extraction, let’s specify the patterns of the data. Look at the sample screenshot taken from a web store selling smartphones. We want to scrape the image, the title of an item, and its price.

Patterns on web

The Google Chrome Inspect tool does a great job of investigating the DOM structure of HTML web pages.

Inspect button in Google Chrome

Click the Inspect icon in the top-left corner of DevTools.

Chrome Inspector tool

With the Chrome Inspect tool, you can easily find and copy either CSS Selector or XPath of specified DOM elements on the web page.

Usually, when scraping a web page, you have more than one similar block of data to extract. Often you crawl several pages during one scraping session.

Surely, you can use the Chrome Inspector to build a payload for scraping. In some complex cases, it is the only way to investigate particular element properties on a web page.

Still, modern online web scrapers, in most cases, offer a more comfortable way to specify patterns (CSS selectors or XPath) for data scraping, set up pagination rules, and define rules for processing detail pages along the way.

Look at this video to find out how it works.

Build rules for data extraction with “Point-and-Click” data selector.

— Try to build a web scraper yourself! —

Manage your Data Storage strategy.

The most well-known simple data formats for storing structured data today include CSV, Excel, and JSON (Lines). Extracted data may be encoded to the destination format right after parsing a web page. These formats are suitable for storing small volumes of data.

Crawling a few pages may be easy,  but millions of pages require different approaches.

How do you crawl several million pages and extract tens of millions of records?

What do you do if the output data is moderate to huge in size?

Choose the Right Format as Output Data

Format #1. Comma Separated Values (CSV) format

CSV is the simplest human-readable data exchange format. Each line of the file is a data record, and each record consists of the same list of fields, separated by commas.

Here is a list of families represented as CSV data:
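The original sample table isn't reproduced in the text, so here's a hypothetical families stand-in built with Python's csv module:

```python
import csv
import io

# Hypothetical "families" data (the article's original sample isn't included).
csv_text = """\
parent,number_of_children,city
Smith,2,London
Garcia,3,Madrid
"""

rows = list(csv.DictReader(io.StringIO(csv_text)))
print(rows[0]["number_of_children"])  # every value comes back as a flat string
```

Note that even the numeric column arrives as the string "2": plain CSV carries no type information and no nesting.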

CSV is limited to storing two-dimensional, untyped data. There is no way to specify nested structures or types of values, like the names of children, in plain CSV.

Format #2. JSON

Representing nested structures in JSON files is easy, however.

Nowadays, JavaScript Object Notation (JSON) has become the de facto standard data exchange format, replacing XML in most cases.

One of our projects consists of 3 million parsed pages. As a result, the size of the final JSON is more than 700 MB.

The problem arises when you have to deal with JSON files of that size. To insert or read a record from a JSON array, you need to parse the whole file every time, which is far from ideal.

Format #3. JSON Lines

Let’s look into what the JSON Lines format is and how it compares to traditional JSON. It is already common in the industry; Logstash and Docker both store logs as JSON Lines.

The same list of families expressed in the JSON Lines format looks like this:
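Using the same hypothetical families data, one valid JSON document per line, parsed with Python's json module:

```python
import json

# Hypothetical families, one JSON object per line (JSON Lines).
jsonl = (
    '{"parent": "Smith", "children": ["Anna", "Ben"], "city": "London"}\n'
    '{"parent": "Garcia", "children": ["Luis", "Eva", "Mia"], "city": "Madrid"}\n'
)

# Each line parses on its own, with no need to load the whole file.
records = [json.loads(line) for line in jsonl.splitlines()]
print(records[1]["children"])  # nested values survive, unlike in CSV
```

Because every line stands alone, you can stream a file of any size record by record, which is exactly what makes the format practical for multi-gigabyte scrape outputs.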

JSON Lines consists of several lines in which each line is a valid JSON object, separated by the newline character \n.

Since every entry in JSON Lines is a valid JSON document, you can parse each line as a standalone JSON document. For example, you can seek within the file or split a 10 GB file into smaller files without parsing the entire thing. You can read as many lines as needed to get the same number of records.

A good scraping platform should:

☑️ Fetch and extract data from web pages concurrently.

We use the concurrency features of Golang and find them fantastic.

☑️ Persist extracted blocks of scraped data in the central database regularly.

This way, you don’t have to store much data in RAM while scraping many pages. Besides, it is easy to export the data to different formats several times later. We use MongoDB as our central storage.

☑️  Be web-based.

An online website scraper is accessible anywhere from any device that can connect to the internet. Different operating systems aren’t an issue anymore; it’s all about the browser.

☑️  Be cloud-friendly.

It should provide a way to quickly scale up or down cloud capacity according to the current requirement of a web data extraction project.


In this post, I tried to explain how to scrape web pages in 2020. But before considering scraping, try to find out whether an official API exists, or hunt for some “hidden” API endpoints.

I would appreciate it if you could take a minute to tell me which one of the web scraping methods you use the most in 2020. Just leave me a comment below.

Happy scraping!
