Thank You (2019 Edition) | CSS-Tricks


One of our yearly traditions here is to thank all y’all CSS-Tricks readers at the passing of a new year. It means a lot to me that people come here and read the words I write, and the words of all our staff and guest authors that contribute here as well.

Thank you!

Plus, we dig into the numbers this time of year. I’ve always tried to be open about the analytics on this site. Looking at them year after year always serves up a good reminder: niche blogging is a slow game. There’s no hockey-stick growth around here. Never has been, never will be. The trick is to build slowly over time, taking care of the site, investing in it, working hard, and with some luck, numbers trend upward. This year, overall traffic didn’t even do that. Sometimes you gotta fight for what you’ve got! Growth came in other areas though. Let’s take a gander.

It was January 1st, 2019 that the current design of this site (v17) debuted, so this entire year overlaps perfectly with that. I’ll certainly be tempted to release major iterations with that same timing in the future for comparison’s sake.

Overall numbers

Google Analytics is showing me 90.3 million pageviews, which is a bit of a decline from 2018 at over 91 million. A 1% decline. Not a big problem, but of course I’d way rather see a 1% increase instead. We’ll take that as a kick in the butt to produce a stronger year of content to hopefully more than win it back.

Looks like we published 726 articles over the year, which includes link posts and sponsored links. A good leap from 636 last year and 595 the year before that. Clearly quantity isn’t the trick to traffic for us.

I don’t know that we’ll slow down necessarily. I like the fact that we’re publishing multiple times a day with noteworthy links because I like to think of us as a timely industry publication that you can read like a daily or weekly newspaper in addition to being an evergreen reference. I don’t think we’ll invest in increasing volume, though. Quality moves the needle far more than quantity for this gang.

There are a bunch of numbers I just don’t feel like looking at this year. We’ve traditionally done stuff like what countries people are from, what browsers they use (Chrome-dominant), mobile usage (weirdly low), and things like that. This year, I just don’t care. This is a website. It’s for everyone in the world who cares to read it, in whatever country they’re in and whatever browser they want to use. We still track those numbers (because Google Analytics automatically does), so we can visit them again in the future and look historically if it gets interesting again. Taking a quick peek, however, it’s not much different than any other year.

Performance numbers are always fascinating. Google Analytics tells me the average page load time is 5.32s. On my fast home internet (even faster at the office), the homepage loads for me in 970ms, but it’s more like 30 seconds when throttled to “Slow 3G.” “Fast 3G” is 8 seconds. Sorta makes sense that most visitors are on faster-than-3G connections since the traffic is largely skewed toward desktop. No cache, we’re talking 54 requests (including ads) and 770KB (fits on a floppy). It’s good enough that I’m not itching to dig into a performance sprint.

Top posts of the year

You’d think we would do a section like this every year, but because of our URL structure, I haven’t had easy access to figure this out. Fortunately, in March 2019, Jacob Worsøe helped us add some Custom Dimensions to our Google Analytics so we can track things like author and year with each pageview.

That means we can find things, like the most popular articles written in 2019, rather than just the most popular articles looked at in 2019, regardless of when they were written. Here’s a graph Jacob sent:

Here’s that list in text:

  1. The Great Divide
  2. Change Color of SVG on Hover
  3. New ES2018 Features Every JavaScript Developer Should Know
  4. An Introduction to Web Components
  5. Where Do You Learn HTML & CSS in 2019?
  6. The Many Ways to Change an SVG Fill on Hover (and When to Use Them)
  7. Look Ma, No Media Queries! Responsive Layouts Using CSS Grid
  8. How to Section Your HTML
  9. Prevent Page Scrolling When a Modal is Open
  10. CSS Animation Libraries

8.25% of traffic came from articles written this year. If you look at where these articles fall on the list of all URLs in 2019 (not just those published in 2019), the top article starts at #75! Hard to compete with older articles that have had time to gather SEO steam. This kind of thing makes me want to get re-focused on referential content even more.

Interesting that our top article was editorial, but everything else is referential. I like a little editorial here and there, but clearly our bread and butter is how-to technical stuff.


Search

There are two aspects of search that are interesting to me:

  1. What do people search for right here on the site itself?
  2. What search terms do people use on Google to find this site?

On-site search is handled by Jetpack’s Elasticsearch feature, which I’m still quite liking (they are a sponsor, but it’s very true). This also means we can track its usage pretty easily using the analytics on my dashboard. I also installed a Search Meter plugin to track search form entries. I can look at Google searches through the SiteKit plugin, which pulls from Google Search Console.

Here are all three, with duplicates removed.

| #  | Jetpack Search Data | Search Meter Search Data | Google Search Data |
|----|---------------------|--------------------------|--------------------|
| 1  | amazon (?!)         | flexbox                  | flexbox            |
| 2  | flexbox             | grid                     | css grid           |
| 3  | css tricks          | flex                     | css tricks         |
| 4  | flexbox guide       | animation                | css important      |
| 5  | css grid            | svg                      | css triangle       |
| 6  | css flex            | position                 | mailto link        |
| 7  | grid guide          | css grid                 | vertical align css |
| 8  | css important       | css                      | css comment        |
| 9  | the great divide    | border                   | css shapes         |
| 10 | css shapes          | background               | css background image opacity |

There is a bit of a fat head of traffic here with our top 10 pages doing about 10% of traffic, which syncs up with those big searches for stuff like flexbox and grid and people landing on our great guides. If you look at our top 100 pages, that goes out to about 38% of traffic, and articles past that are about 0.1% of traffic and go down from there. So I’d say our long tail is our most valuable asset. That mass of articles, videos, snippets, threads, etc. that make up 62% of all traffic.

Social media

It’s always this time of year I realize how little social media does for our traffic and feel stupid for spending so much time on it. We pretty much only do Twitter and it accounts for 1% of the traffic to this site. We still have a Facebook page but it’s largely neglected except for auto-posting our own article links to it. I find value in Twitter, through listening in on industry conversations and having fun, but I’m going to make a concerted effort to spend less time and energy on our outgoing social media work. If something is worth tweeting for us, it should be worth blogging; and if we blog it, it can be auto-tweeted.

But by way of numbers, we went from 380k followers on @css to 430k. Solid growth there, but the rate of growth is the same every year, to the point it’s weirdly consistent.

I also picked up an Instagram account this year. Haven’t done much there, but I still like it. For us, I think each post on Instagram can represent this little opportunity to clearly explain an idea, which could ultimately turn into a nice referential book or the like someday. A paltry 1,389 followers there.


Newsletter

I quite like our newsletter. It’s this unique piece of writing that goes out each week and gives us a chance to say what we wanna say. It’s often a conglomeration of things we’ve posted to the site, so it’s an opportunity to stay caught up with the site, but even those internal links are posted with new commentary. Plus, we link out to other things that we may not mention on the site. And best of all, it typically has some fresh editorial that’s unique to the newsletter. The bulk of it is done by Robin, but we all chip in.

All that to say: I think it’s got a lot of potential and we’re definitely going to keep at it.

We had the biggest leap in subscribership ever this year, starting the year at 40k subscribers and ending at 65k. That’s 2.5× the biggest leap in year-over-year subscribers so far. I’d like to think that it’s because it’s a good newsletter, but also because it’s integrated into the site much better this year than it ever has been.


Comments

Oh, bittersweet comments. The bad news is that I feel like they get a little worse every year. There is more spam. People get a little nastier. I’m always teetering on the edge of just shutting them off. But then someone posts something really nice or really helpful and I’m reminded that we’re a community of developers and I love them again.

4,710 approved comments. Up quite a bit from 3,788 last year, but still down from 5,040 in 2017. Note that these are approved comments, and it’s notable that this entire year we’ve been on a system of hand-approving all comments before they go out. Last year, I estimated about half of comments make it through that, and this year I’d estimate it at more like 30-40%. So, the straight-up number of comments isn’t particularly interesting as it’s subject to our attitude on approval. Next year, I plan to have us be more strict than we’ve ever been on only approving very high-quality comments.

I’m still waiting for WordPress to swoon me with a recommitment to making commenting good again. 😉


Forums

There were a couple of weeks just in December where I literally shut down the forums. They’ve been teetering on end-of-life for years. The problem is that I don’t have time to tend to them myself, nor do I think it’s worth paying someone to do so, at least not now. Brass tacks, they don’t have any business value and I don’t extract enough other value out of them to rationalize spending time on them.

If they just sat there and were happy little forums, I’d just leave them alone, but the problem is spam. It was mostly spam toward the end, which is incredibly tedious to clean up and requires extra human work.

I’ve kicked them back on for now because I was informed about a spam-blocking plugin that apparently can do incredible work specifically for bbPress spam. Worth a shot!

Interestingly, over the year, the forums generated 7m pageviews, which is 7.6% of all traffic to the site. Sorta makes sense as they are the bulk of the site URLs and they are user-generated threads. Long tail.

Goal review

Polish this new design. Mixed feelings. But I moved the site to a private GitHub repo halfway through the year, and there have been 195 commits since then, so obviously work is getting done. I’ll be leaving this design up for all of 2020, and I’d like to make a more concerted effort at polish.

Improve newsletter publishing and display. Nailed this one. In March, we moved authoring right here on the site using the new Gutenberg editor in WordPress. That means it’s easier to write while being much easier to display nicely on this site. Feels great.

☯️ Raise the bar on quality. I’m not marking it as a goal entirely met because I’m not sure we changed all that much. There was no obvious jump upward in quality, but I think we do pretty good in general and would like to see us continue to hold steady there.

Better guides. We didn’t do all that much with guides. Part of the problem is that it’s a little confusing. For one thing, we have “guides” (e.g. our guide to flexbox), which are obviously useful and doing well. Then there are “Guide Collections” (e.g. our Custom Properties Guide), which are hand-picked, hand-ordered selections of articles. I’m not entirely sure how useful those hand-curated guides are, especially considering we also have tag pages, which are more sortable. The guides with the biggest impact are the hand-written, article-on-steroids types, so those are worth the most investment.

New goals

100k on email list. That would be a jump of 35k which is more than we’ve ever done. Ambitious. Part of this is that I’m tempted to try some stuff like paid advertising to grow it, so I can get a taste for that world. Didn’t Twitter have a special card where people could subscribe right from a Tweet? Stuff like that.

Two guides. The blog-post-on-steroids kind. The flexbox one does great for us, traffic-wise, but I also really enjoy this kind of creative output. I’ll be really sad if we can’t at least get two really good ones done this year.

Have an obvious focus on how-to referential technical content. This is related to the last goal, but goes for everyday publishing. I wouldn’t be mad if every darn article we published started with “How To.”

Get on Gutenberg. The new WordPress block editor. This is our most ambitious goal. Or at least I think it is. It’s the most unknown because I literally don’t know what issues we’re going to face when turning it on for more than a decade’s worth of content that’s been authored in the classic editor. I don’t think it’s going to hurt anything. It’s more a matter of making sure:

  1. authoring posts has all the same functionality and conveniences as we have now,
  2. editing old posts doesn’t require any manual conversion work, and
  3. it feels worth doing.

But I haven’t even tried yet, so it’s a don’t-know-what-I-don’t-know situation.

Again, thanks so much!

I was thinking about how stage musicians do that thing where they thank their fans almost unfailingly. Across any genre. Even if they say hardly anything into a microphone during the performance, they will at least thank people for coming, if not absolutely gush appreciation at the crowd. It’s cliché, but it’s not disingenuous. I can imagine it’s genuinely touching to look out across a room of people that all choose to spend a slice of their lives listening to you do your thing.

I feel that way here. I can’t see you as easily as looking out over a room, but I feel it in the comments you post, the emails you send, the tweets you tagged us in, and all that. You’re spending some of your life with us and that makes me feel incredibly grateful. Cheers.



What is the best way to create this layout? : web_design


For a high school project, I am creating a website that utilizes SQL tables, PHP, CSS, and HTML. This diagram right here gets data from a SQL table and creates a timeline of how a fictional football match played out.

What is the best way to create something like this? I was looking up a lot of ideas and the only thing that came to my mind was using css tables and instead of the lines, I would use borders and crossing over empty text. What would you do?


Trey Huffine

A Recap of Frontend Development in 2019

A look back at the top events, news, and trends for frontend and web development

The world of frontend development once again evolved at a rapid pace over the past year, and this article recaps all the important events, news, and trends from 2019.

React once again claims the top library spot and is still growing, while jQuery is surprisingly holding at #2. Not far behind, Angular and Vue both have strong user bases of passionate developers. Svelte has received a lot of attention this past year, but it is still fighting to gain adoption.


After a rather quiet year, WebAssembly received some huge news in early December: it is officially recommended as a language of the web by the W3C. The World Wide Web Consortium (W3C) is the main international standards organization for the World Wide Web.

Since the announcement of WebAssembly in 2017, it has garnered heavy attention and rapid adoption. In previous years, we saw the 1.0 specification created and integration in all major browsers.

Another piece of news for WebAssembly in 2019 is the formation of the Bytecode Alliance which looks “to forge WebAssembly’s outside-the-browser future by collaborating on implementing standards and proposing new ones”.

We are still waiting for WebAssembly to truly take hold and gain mass adoption, and with each update, we get closer to that goal. There is no question that the W3C statement was a huge step to legitimize it for companies, and we need to continue to lower the barrier of entry for using WebAssembly to enable products to be more easily built with it.

2019 was the year of TypeScript. Not only has TypeScript become the de facto choice for adding data types to JS code, but many developers are frequently electing to use it over vanilla JavaScript for both personal projects and at work.

In the StackOverflow Survey released early in 2019, TypeScript was tied for 2nd with Python as the most loved language, falling only behind Rust. It wouldn’t be surprising to see TypeScript climb even higher in the new survey released in early 2020.

TypeScript has consumed the web development world — both for the frontend and backend. Some developers tried to dismiss TS as a fad and thought it would go the way of CoffeeScript, but TypeScript has proven to solve a core problem for JS developers and appears to only be growing in usage.

TypeScript provides web devs a better developer experience through integrations with all major text editors. JavaScript developers view TypeScript as a tool that results in fewer bugs and more readable code, with types and object interfaces offering self-documentation.

It’s worth noting just how popular TypeScript has become with it passing React in NPM downloads in 2019. It also has far more downloads than competitors such as Flow and Reason.

TypeScript and React solve entirely different problems, so this isn’t meant to be a direct comparison. It is only a demonstration of the popularity of TypeScript.

TypeScript v3.0 came out in late 2018, and through 2019 it released versions up to 3.7, which includes newer ECMAScript features such as optional chaining and nullish coalescing, as well as improvements to type checking.

Vue and Angular have passionate users, with Vue even surpassing React in GitHub stars, but when it comes to adoption for personal and professional projects, React continues to hold a strong lead.

In late 2018, the React team introduced hooks. In 2019, hooks consumed the React world with an overwhelming majority of developers adopting them as their preferred way to manage state and the component lifecycle. Throughout the year, countless articles were written about hooks, patterns began to solidify, and the most important React packages built custom hooks to expose their library’s functionality.

Hooks provide a way to manage a component’s state and lifecycle in functional components using a simple and concise syntax. In addition, React provides the ability to build custom hooks which allows us to create reusable code and shared logic without needing to create higher-order components or use render props.
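The call-order bookkeeping behind this can be illustrated with a toy `useState` (a deliberately simplified sketch, not React’s actual implementation):

```javascript
// Toy model of the hooks pattern (not React's real internals).
// State lives outside the component, indexed by call order, which is
// why hooks must be called in the same order on every render.
const hookState = [];
let cursor = 0;

function useState(initial) {
  const i = cursor++;
  if (!(i in hookState)) hookState[i] = initial;
  const setState = (value) => { hookState[i] = value; };
  return [hookState[i], setState];
}

// A "functional component" that uses the hook
function Counter() {
  const [count, setCount] = useState(0);
  return { text: `Count: ${count}`, increment: () => setCount(count + 1) };
}

// Each render resets the cursor, much like React does per component render
function render(component) {
  cursor = 0;
  return component();
}

const first = render(Counter);  // first.text === 'Count: 0'
first.increment();
const second = render(Counter); // second.text === 'Count: 1'
```

A custom hook is just a function that calls these primitives, which is why shared logic no longer needs higher-order components or render props.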

If we look at the State of JS Survey 2019, we see that React still holds the top spot:

After the huge addition of hooks in React v16.8, the changes that followed were relatively minor, with releases continuing up to version 16.12 in 2019.

After the huge hooks release, the React team then shifted their focus to making the lives of developers better by providing more tools. In fact, developer experience was noted as the main theme of React Conf 2019. The keynote speaker of React Conf and manager of the React team, Tom Occhino, stated that developer experience is rooted in 3 things: low barrier to entry, high productivity, and ability to scale. Let’s take a look at what the React team released or plans to release to support this:

  • A brand new version of React DevTools
  • Brand new React performance profiler tools
  • Create React App v3
  • Testing utility updates
  • Suspense
  • Concurrent mode (upcoming)
  • CSS-in-JS used at Facebook (upcoming)
  • Progressive/selective page hydration (upcoming)
  • Accessibility (a11y) improvements in React core (upcoming)

The belief is that a good developer experience will also lead to a good user experience, so this is a win for everyone. Watch the talk below by Yuzhi Zheng from React Conf 2019 about the upcoming React features or this link for all the talks.

Vue might not have the most adoption (yet), but it’s hard to deny it has the most passionate users. Vue has been noted to bring in the best parts of React and Angular while also being simpler. Another huge selling point is that it’s more open and not controlled by a large company, unlike React (Facebook) or Angular (Google).

The biggest news for Vue is its upcoming 3.0 release with alpha expected to land at the end of Q4. In 2019, Vue 2.x only received a few updates early in the year because most efforts are being put into the v3 release.

Just because there weren’t many releases this year doesn’t mean there wasn’t a lot happening. When Evan You released the RFC for v3, a huge debate was sparked in the community over the changes, as seen on Reddit and Hacker News.

The key issue that angered Vue developers is an overhaul of the framework’s API. However, after the backlash, it was noted that the API change will be entirely additive and backward compatible with Vue 2. Fearing Vue is trying to be too much like React, many developers claim they may consider Svelte depending on how the release goes. While there are still many in the community who are concerned, the noise seems to have quieted down while they wait for the release.

Aside from the debate, Vue 3 does have some other big changes coming:

  • The composition API
  • Global mounting/configuration API change
  • Fragments
  • Time Slicing Support (experimental)
  • Multiple v-models
  • Portals
  • New custom directives API
  • Improved reactivity
  • Virtual DOM rewrite
  • Static props hoisting
  • A hooks API (experimental)
  • Slots Generation optimization (separate rendering for parent & child components)
  • Better TypeScript support

The other notable Vue release this year is version 4 of the CLI which is mostly focused on updating the underlying tools.

Angular’s opinionated philosophy has helped it acquire a huge user base. Since Angular is a strongly opinionated framework, it requires devs to do things the Angular way, and it provides all the tooling necessary for its developers.

This removes many of the debates about which libraries and dependencies you should bring into the project, which is a potential issue you would find in teams building React apps. It also requires developers to write their applications in TypeScript. Since most of the choices are already determined, companies view it as a great option because it allows developers to focus on building products instead of spending their time thinking about packages.

In 2019, Angular released version 8, along with a new renderer/compilation pipeline known as Ivy. The biggest benefit of Ivy is smaller bundle sizes, but it provides many other great improvements. Currently, Ivy is an opt-in feature until Angular 9. This article details the features released in version 8; the notable updates are:

  • Differential Loading of Modern JavaScript
  • Opt-in Ivy Preview
  • Angular Router Backwards Compatibility
  • Improved Web Worker Bundling
  • Opt-In Usage Sharing
  • Dependency Updates

In December 2019, the Angular team prepared the release of version 9, which looks like it will officially come out very late in 2019 or early in 2020. The biggest change in Angular 9 is that Ivy becomes the standard renderer. Watch the YouTube video below for more details on Angular 9.

Huge thanks to Chris Coyier for the feedback in his article. I’m including the notable changes that he mentioned.

HTML and CSS are core components to the web, and although they haven’t evolved at the breakneck pace of JavaScript, they still saw some notable improvements in 2019 that will have a huge impact on how we build applications and improve the UX for our sites.

Two of the biggest changes for HTML are native lazy loading and no-jank fluid image loading. Large images have been a pain for web performance, and we have hacked around it to better handle how we load them. With native support for lazy loading and aspect ratio recognition, we can get seamless images without needing to implement any additional functionality in JS.

For native lazy loading, just include loading="lazy" on your image:

<img src="celebration.jpg" loading="lazy" alt="..." />

For no-jank images, we can use the intrinsicsize attribute, which is behind a flag, or we can use the width and height attributes together with an aspect-ratio rule in CSS.

<!-- intrinsicsize (behind a flag) -->
<img src="image.jpg" intrinsicsize="800x600" />

<!-- width and height attributes -->
<img src="image.jpg" width="800" height="600" />

/* CSS: derive the aspect ratio from the markup attributes */
img, video {
  aspect-ratio: attr(width) / attr(height);
}

For CSS, many changes that had been coming slowly finally landed in 2019. The two most exciting updates are prefers-reduced-motion and variable fonts.
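For example, a site can honor the user’s OS-level motion preference with a media query (a common pattern, shown here in its simplest form):

```css
/* Suppress animation for users who have requested reduced motion */
@media (prefers-reduced-motion: reduce) {
  *,
  *::before,
  *::after {
    animation-duration: 0.01ms !important;
    transition-duration: 0.01ms !important;
  }
}
```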

It’s very important to have a strong foundation in HTML/CSS and keep up with the newest improvements. Even if you’re building your apps in React and Vue, it ultimately is converted into pure HTML and CSS in the browser.

The web should be open and usable by everyone, and the frontend world has been making this a priority. After JavaScript and the web evolved so rapidly starting in 2015, patterns and frameworks are finally solidifying. Now that things are more stable, developers can focus more on tools to localize their apps and make them more accessible, which makes the web better for everyone. We should be proud of the progress we’ve made, but there is still a long way to go.

Accessibility: “The practice of making your websites usable by as many people as possible. We traditionally think of this as being about people with disabilities, but the practice of making sites accessible also benefits other groups such as those using mobile devices, or those with slow network connections.” — MDN

Internationalization: “Design/develop your content, application, specification, and so on, in a way that ensures it will work well for, or can be easily adapted for, users from any culture, region, or language.” — W3C

Continuing with its yearly update cycle, ECMAScript (the spec that JavaScript is based on) added new features for the ES2019 release.

  • Object.fromEntries()
  • String.trimStart() and String.trimEnd()
  • Better handling of unicode in JSON.stringify
  • Array.flat()
  • Array.flatMap()
  • Optional catch binding
  • Symbol.description
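A quick tour of these, runnable in any ES2019-capable engine:

```javascript
// Object.fromEntries: the inverse of Object.entries
const obj = Object.fromEntries([['a', 1], ['b', 2]]); // { a: 1, b: 2 }

// String.prototype.trimStart / trimEnd
const trimmed = '  hello  '.trimStart(); // 'hello  '

// Array.prototype.flat and flatMap
const flat = [1, [2, [3]]].flat();          // [1, 2, [3]] (depth defaults to 1)
const mapped = [1, 2].flatMap(n => [n, n]); // [1, 1, 2, 2]

// Optional catch binding: the (error) parameter can now be omitted
let caught = false;
try {
  JSON.parse('not json');
} catch {
  caught = true;
}

// Symbol.description
const desc = Symbol('label').description; // 'label'
```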

While ES2019 has some great updates, the upcoming ES2020 looks to have some of the most anticipated features since possibly ES6/ES2015:

  • Private class fields
  • Optional chaining — obj.field?.maybe?.exists
  • nullish coalescing — item ?? 'use this only if item is null/undefined'
  • BigInts
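Sketched briefly (these require a recent JavaScript engine):

```javascript
const user = { profile: { name: 'Ada' } };

// Optional chaining short-circuits to undefined instead of throwing
const name = user.profile?.name;       // 'Ada'
const missing = user.settings?.theme;  // undefined

// Nullish coalescing falls back only on null/undefined, unlike ||
const theme = missing ?? 'light'; // 'light'
const count = 0 ?? 42;            // 0 (whereas 0 || 42 would give 42)

// BigInt handles integers beyond Number.MAX_SAFE_INTEGER
const big = 2n ** 64n; // 18446744073709551616n

// Private class fields are inaccessible outside the class
class Clicks {
  #count = 0;
  increment() { return ++this.#count; }
}
const clicks = new Clicks();
clicks.increment(); // returns 1; reading clicks.#count here is a SyntaxError
```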

Flutter was released 2 years after React Native, but it has quickly been gaining ground. Flutter has nearly caught up to React Native in GitHub stars with 80.5k vs 83k and should pass it soon following the current trajectory.

Considering that Flutter doesn’t have the same developer community to leverage for growth like React Native did with React web developers, this is even more impressive. Flutter is making a case for itself to be the best cross-platform mobile framework.

To support the JavaScript ecosystem and accelerate the growth of the language, the Node.js Foundation and the JS Foundation merged to form the OpenJS Foundation. The message from the foundation is collaboration and growth under a neutral entity that now hosts 31 open source projects, including Node, jQuery, and Webpack. This move is viewed as a positive for the whole JS community and is supported by major tech companies such as Google, IBM, and Microsoft.

Node released version 12 this year which is in long term support (LTS) until April 2023. Node 12 offers a host of new features, security updates, and performance improvements. Some notable updates include native support for import/export statements, private class fields, compatibility with V8 Engine version 7.4, support for TLS 1.3, and additional diagnostic tools.

Svelte found a way to inject itself into the conversation in the already-crowded world of frontend frameworks. However, as we saw at the beginning of the article, this didn’t translate to a huge amount of practical adoption yet. The best way to summarize Svelte is “simple but powerful”. The 3 points noted on the Svelte website are:

  1. Write less code
  2. No virtual DOM
  3. Truly reactive

Svelte tries to shift the bulk of its work to the compilation step instead of doing it in the browser at runtime. Svelte has a component-based architecture that compiles down to pure HTML and vanilla JavaScript while also promising less boilerplate code. It uses reactive programming that makes direct updates to the DOM instead of diffing a virtual DOM.

Svelte offers something new and exciting to the frontend landscape… by offering less. In 2020, it will be fun to watch how Svelte grows and develops, and hopefully we’ll get some examples of it being used at scale to see how it compares to its bigger competitors in React, Vue, and Angular.

With the increased utilization of frameworks like Gatsby, the rapid growth of static website hosts like Netlify, and the countless number of headless CMS companies popping up, static sites prove they are going to be an integral part of the web.

Static sites combine the old web with the new tools, libraries, and updates to provide an unmatched experience. We are able to build our sites using modern libraries like React but then compile them into static HTML pages at build time. Since all the pages are now pre-built, there is no server time required to hydrate them with data on a request — the pages can be served immediately and take advantage of being cached in CDNs across the globe allowing the content to be as close as possible to your users.

A popular programming pattern utilized with static sites is the JAMStack (JavaScript, APIs, Markup). This is a hybrid static/SPA approach where pages are served statically, but once on the client, the page is treated more like a SPA, using APIs and user interaction to evolve the state of the UI.

Static sites are one of the ways to get incredibly fast products, but they aren’t suited for all apps — another excellent choice is PWAs (progressive web apps). PWAs allow you to cache resources in the browser to make pages respond immediately and also provide offline support. In addition, they allow for background workers to provide native functionality such as push notifications.

Some people have even made the claim that PWAs may replace native mobile apps. Wherever things end up, there is no doubt that PWAs will be a big part of how companies build products for a long time.

JavaScript fatigue has been a complaint from frontend developers for a few years now, but we’ve slowly seen it be alleviated with the incredible efforts of open source project maintainers.

Previously, if we wanted to build a SPA, we had to pull in our own dependencies with either Bower or NPM, figure out how to compile it using Browserify or Webpack, write an Express server from scratch, and maintain our apps through the onslaught of library updates.

We had a few years of pain, but now we’ve iterated our way to one of the most vibrant and developed package ecosystems. There are tools to help us abstract away the painful parts of building applications — Create React App, the Vue CLI, the Angular CLI, Gatsby for static sites, Expo for React Native mobile apps, Next/Nuxt for SSR applications, generators to create our servers, Hasura to remove the need to even write a server for GraphQL, automatically generated TypeScript types using GraphQL Code Generator, Webpack continuing to get more streamlined — there is a tool to handle the heavy lifting for almost any need we have.

Maybe now we have tooling fatigue?

GraphQL promises to solve many of the issues presented by traditional REST-based applications. Developers quickly fell in love with GraphQL, and tech companies are finally catching up in adopting it. GitHub wrote its newest API in GraphQL a few years ago, and many other organizations are making the change as well.

A GraphQL app is data-driven instead of endpoint-driven, allowing the client to declare the exact data they need and receive a corresponding JSON response from the server. GraphQL APIs provide a schema to document all the data and their types, giving the developer full visibility into the API.
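As a small illustration (the user type and its fields below are hypothetical, not from any real API), the JSON response mirrors the shape of the query exactly:

```javascript
// The client declares exactly the fields it needs...
const query = `
  query GetUser {
    user(id: "1") {
      name
      email
    }
  }
`;

// ...and a server implementing that schema responds with JSON of the same shape.
const response = {
  data: {
    user: {
      name: 'Ada Lovelace',
      email: 'ada@example.com',
    },
  },
};

console.log(response.data.user.name); // → 'Ada Lovelace'
```

There is no over- or under-fetching: fields not named in the query are simply never sent.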

Since GraphQL APIs provide a fully typed schema, they also integrate well with applications using TypeScript. With a tool like GraphQL Code Generator, we can read the queries in our client code, match them against the schema, and produce TypeScript types that flow through our entire application.

GraphQL downloads have more than doubled over the past year, and Apollo is starting to separate itself as the most utilized framework.

If we look at the State of JS Survey, we see that GraphQL is now the most-loved data layer tool to use, and Redux has dropped dramatically. GraphQL is on a clear trajectory for developer adoption.

The progression of web development feels like it has been on a path to unify everything under JavaScript, and this is shown in the adoption of CSS-in-JS where styles are created using JavaScript strings.

This allows us to share styles and dependencies using normal JavaScript syntax through import/export. It also simplifies dynamic styles, as a CSS-in-JS component can interpolate props into its styling string. As noted previously, Facebook may even view CSS-in-JS as the future of the frontend and will be releasing its own library.

Below is an example of classic CSS vs. CSS-in-JS. To handle dynamic styles with normal CSS, you have to manage class names in the component and update them based on state/props. You would also need a CSS class for each of the variations:

// Component JS file
const MyComp = ({ isActive }) => {
  const className = isActive ? 'active' : 'inactive';
  return <div className={className}>HI</div>;
};

// CSS file
.active { color: green; }
.inactive { color: red; }

With CSS-in-JS, you no longer manage CSS classes. You simply pass props to a styled component, and it handles the dynamic styling with a declarative syntax. The code is much cleaner, and we have a clearer separation of concerns for styles and React by allowing CSS to manage dynamic styling based on props. It all reads just like normal React and JavaScript code now:

const Header = styled.div`
  color: ${({ isActive }) => (isActive ? 'green' : 'red')};
`;

const MyComp = ({ isActive }) => (
  <Header isActive={isActive}>HI</Header>
);

The two leading libraries for CSS-in-JS are styled-components and emotion, with emotion passing styled-components in downloads in 2019. These two libraries have created quite a bit of separation from other CSS-in-JS options and look like they will continue to grow rapidly.

Developers are passionate about their IDEs/text editors and aren’t afraid to get into arguments about why their choice is the best. However, for the frontend, developers have almost unanimously chosen VS Code as their editor. VS Code is an open source editor that offers plugins to provide an incredible developer experience.

Here is the usage of text editors according to the State of JS Survey 2019:

Webpack has become a core component of nearly all modern JavaScript toolchains and is the most-used build tool. Webpack has continued to improve both its performance and usability, making it better for developers. With version 5, Webpack focuses on the following points:

  • Improve build performance with persistent caching
  • Improve long-term caching with better algorithms and defaults
  • Clean up internal patterns without creating any breaking changes

Facebook maintains the popular testing library Jest, as well as Flow, a TypeScript competitor. With a bold statement at the beginning of 2019, they opted to migrate Jest off of Flow and onto TypeScript. This further shows that TypeScript has become the standard choice for typed JavaScript, and it only looks to grow in usage in 2020 and beyond.

Chrome continues to iterate quickly, rapidly adding new features to the web and developer tools. In 2019, we saw 7 stable versions released with versions 79 in beta, 80 in dev, and 81 in canary. Check the Wiki below for notable additions to Chrome over the past year.

Internet Explorer and its newer incarnation Edge have long been a joke to web developers and, even worse, a terrible pain to work with. The browser constantly lagged behind in web feature implementation and was notoriously difficult to write cross-browser compatible code for. In a huge win for devs, Microsoft elected to use Google’s open source Chromium engine, and in mid-2019 this change reached the beta stage.

Facebook decided the Android JavaScript engine isn’t fast enough, so they built their own. Facebook is all-in on React Native, and this move shows they are willing to make the adjustments needed to make it work as effectively as possible on all platforms.

  • Performance continues to be the most important aspect of the web with code splitting and PWAs being further utilized.
  • WebAssembly becomes more common, sees real adoption, and is utilized in products.
  • GraphQL overtakes REST for new startups and new projects while established companies migrate to it even further.
  • TypeScript becomes the default choice for new startups and projects.
  • We start seeing real apps built without a server and on blockchain, making the web more open.
  • CSS-in-JS may become the default styling method instead of plain CSS.
  • “Codeless” apps become more popular. With improvements in AI and more layers of abstraction for apps, it’s becoming easier and easier to build applications. In 2020, we may see notable shifts toward creating apps without needing to write code.
  • Flutter may overtake React Native as the top way to build cross-platform mobile apps.
  • Svelte will see more real projects built using the technology.
  • Deno (a TypeScript runtime built by the creator of Node) sees practical usage.
  • AR/VR makes strides using libraries like A-Frame, React VR, and Google VR as well as improvements to the native AR/VR tools in the browser.
  • The influence of containerization (e.g., Docker, Kubernetes) becomes more prevalent in the frontend process.

Read this article for a comprehensive list of the top programming articles of 2019.

Addy Osmani shows us the cost of JavaScript in 2019

The StackOverflow developer survey shows us the trends in programming

See how Slack rebuilt their entire UI incrementally using modern JavaScript

The GraphQL documentary

Dan Abramov provides a thorough explanation of useEffect

Ryan Dahl gives us more insight into Deno

Eric Elliott takes the stance claiming the benefits of TypeScript aren’t worth the effort

And he also showed us how a $9/hour engineer cost Boeing billions.

The main thread is overworked and underpaid

Learn technical details of the V8 JavaScript engine, and how it created a performance cliff in React.

Flavio Copes details all the changes to JavaScript since the massive ES2015 update.

eBay converts a key feature to WebAssembly and realizes huge gains.

Diana Adrianne builds incredible portraits using only CSS.

Rodrigo Pombo shows us how to build our own React.


Laravel Homestead With Windows 10 Step-by-Step


I am going to write down a step-by-step procedure to set up Homestead for Laravel 5.2 on Windows 10 with VirtualBox. I spent a lot of time setting up Homestead for Laravel 5.2 on my Windows 10 PC, and I am writing this so that anybody can benefit from this post. Well, enough talking. Let’s dig in.

The official documentation for Laravel Homestead setup is: Official Documentation.

N.B.: Please type all the commands instead of copy-pasting them from this tutorial, as copy-pasting may cause unexpected errors. See the response section below for more information.

Step 1

As the official documentation says, you need to enable hardware virtualization (VT-x). To do this, follow this site:

If this doesn’t help, then Google it with your laptop model number or your PC configuration. You must enable hardware virtualization (VT-x). And if you are using Hyper-V on a UEFI system, you will additionally need to disable Hyper-V in order to access VT-x.


Step 2

Now, you need to download the latest versions of VirtualBox and Vagrant.

After downloading these, first install VirtualBox, and then install Vagrant. You may need to restart your PC after the installation completes.

Step 3

Now, we need to install Git Bash (if Git Bash is already installed on your PC, skip this step). After downloading it, install it.

Step 4

Now, open Git Bash in administrator mode and run the following command:

If you are now getting an error like this:

Then, download the MS Visual C++ 2010 x86 Redistributable and install it. Now, run the following command again:
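The command elided here is, per the official Homestead documentation, typically:

```shell
vagrant box add laravel/homestead
```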

It should add the Laravel/Homestead box to your Vagrant installation. It will take a few minutes to download the box, depending on your Internet connection speed.

Step 5

After completing Step 4, type cd ~ in your Git Bash and hit Enter. Now, run the following command:
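Going by the official Homestead docs (and assuming the official repository URL), the clone command referenced here is typically:

```shell
git clone https://github.com/laravel/homestead.git Homestead
```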

It will clone the Homestead repository into a Homestead folder within your home (C:\Users\USER_NAME) directory.

Now, run the following two commands one-by-one:
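Hedging on the exact Homestead release, the two commands referenced here are typically:

```shell
cd Homestead
bash init.sh   # older Windows-oriented releases shipped an init.bat instead
```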

This will create the Homestead.yaml configuration file. The Homestead.yaml file will be placed in the C:\Users\USER_NAME\.homestead directory.

NB: (According to this #06b52c7 change, from Feb 17, 2017, the Homestead.yaml file will now be located in the C:\Users\USER_NAME\Homestead folder.)

Step 6

Now, we need an SSH key. To check whether one already exists on your computer, go to the C:\Users\USER_NAME directory and look for a folder named .ssh. If it exists, go into the folder and look for two files named id_rsa and id_rsa.pub. If the .ssh folder doesn’t exist, or it exists but those two files don’t, then run the following command:
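The key-generation command here is standard OpenSSH; the email in the comment flag is a placeholder (substitute your own):

```shell
ssh-keygen -t rsa -C "your_email@example.com"
```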

Then, the command prompt will ask you a couple of questions. You don’t need to type anything; just press Enter at each prompt. After the command finishes, a new .ssh folder (if it didn’t already exist) will be created with the two files, id_rsa and id_rsa.pub, inside it.

Step 7 

Now, we are going to edit the Homestead.yaml file, which was generated in Step 5. This step is very important. Go to the C:\Users\USER_NAME\.homestead directory. There, open the Homestead.yaml file with any text editor. The file will look like this:

I will explain the file step-by-step and also modify it to configure our Homestead. Let’s start.

These lines specify the IP address our Homestead box will listen on, the maximum amount of memory it can consume (2048 MB), how many CPUs it will use (1), and the provider (VirtualBox).

In these lines, we are going to set up our SSH keys for Homestead. Remember, we created our SSH keys in Step 6. We are going to point to those two files in our Homestead.yaml file. After editing these two lines, it will look like this:

Don’t forget to use the lowercase drive letter (“c” instead of “C”) and forward slashes (“/”) instead of backslashes (“\”). Look at what I have written. The natural way would be to write C:\Users\USER_NAME\.ssh, right? But no, look carefully: I have written c:/Users/USER_NAME/.ssh instead of C:\Users\USER_NAME\.ssh. This is the tricky part; don’t miss it.

We will always use the lowercase drive letter (like “c” instead of “C”) and forward slashes (“/”) instead of backslashes (“\”) in our Homestead.yaml file.

Here, we are going to map a folder that will be shared between our PC and Vagrant. Just imagine a common folder: if we change anything in it from our Windows 10 PC, the change will be visible from Vagrant (and vice versa).

– map: ~/Code is the folder located on our PC, and to: /home/vagrant/Code is where we will access the same folder in Vagrant. Not clear yet? Just look at the lines after I change them:

See now? My PC’s e:/Homestead_Projects folder and Vagrant’s /home/vagrant/Code folder point to the same folder. If you change anything in the /home/vagrant/Code folder, it will be reflected in the e:/Homestead_Projects folder, and vice versa.

In my case, e:/Homestead_Projects is my project folder. In your case, use your own project folder. You can use any folder name here, like /home/vagrant/ANY_FOLDER_NAME instead of /home/vagrant/Code.

Don’t confuse this one with the last discussion; these lines have nothing to do with it. This configuration says that if we hit the site’s address from our browser, Vagrant will serve the site from the /home/vagrant/Code/Laravel/public folder.

Yes, I know we have not created any folder named Laravel in our /home/vagrant/Code folder from Vagrant or in our e:/Homestead_Projects folder from our PC yet. We will create it later. You will find your answer in step 10. In the future, if you develop more sites, then this configuration will look like this:

One more thing — the prefix of /Laravel/public, which is /home/vagrant/Code, has to be an exact match of to: /home/vagrant/Code from the last section. If you have used /home/vagrant/ANY_FOLDER_NAME to map your PC’s project folder, then here, you have to use /home/vagrant/ANY_FOLDER_NAME as the prefix of /Laravel/public, which will look like /home/vagrant/ANY_FOLDER_NAME/Laravel/public. THIS IS IMPORTANT.

Please read the “N.B.” part of Step 8 before proceeding to the next paragraph.

This line will create a database in Vagrant named homestead.

After editing my Homestead.yaml file, it looks like the following:

Step 8

Now, Windows will not allow the site’s address to be reached from the browser. We have to add it to the Windows hosts file so that when we hit the address from our browser, it goes to the IP address we defined in our Homestead.yaml file.

Go to the C:\Windows\System32\drivers\etc folder and edit the hosts file in any text editor (the text editor must be opened in administrator mode). Add the following line at the very bottom of the hosts file:
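A sketch of what that hosts entry looks like; the IP address and site name below are placeholders, so use the values from your own Homestead.yaml:

```
192.168.10.10  homestead.test
```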

If you want to add another site, just append it here like this:

Now, the site is accessible from our browser, but don’t hit it yet.


This link says: “Based on this article by Danny Wahl, he recommends you use one of the following: ‘.localhost’, ‘.invalid’, ‘.test’, or ‘.example’.” So, you should use “homestead.test” or something else instead.

Nowadays, browsers force all .dev domains to use HTTPS. You can try this, or use one of the following: “.localhost”, “.invalid”, “.test”, or “.example”.

If all this sounds like too much trouble, another viable option is to switch to Firefox as your development browser.

Step 9

Now, we can start our Homestead box using Vagrant by running the command vagrant up. But to do so, we always have to run this command from the C:\Users\USER_NAME\Homestead directory. We can set things up so that we can run Vagrant boxes from anywhere using Git Bash.

To do so, download this file and paste it into the C:\Users\USER_NAME directory, or create a file named .bash_profile in the C:\Users\USER_NAME directory yourself. Then, write the following lines in the .bash_profile file:
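For reference, the snippet that usually goes into .bash_profile here (assuming Homestead was cloned to ~/Homestead, and matching the line quoted in the N.B. of this step) is:

```shell
# Lets you run `homestead up`, `homestead halt`, etc. from any directory
function homestead() {
    ( cd ~/Homestead && vagrant $* )
}
```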

Now, you can run the Vagrant box from anywhere in Git Bash with the homestead up command, and terminate it with the homestead halt command. You might have to restart Git Bash, since .bash_profile is loaded on startup. The first homestead up will take some time.

I am writing down the two commands again:

  • NB:
    1. If you are getting a “bash: cd: /c/Users/User Name/Homestead: No such file or directory” kind of error, then please replace the following line of .bash_profile: cd ~/Homestead && vagrant $* with
      cd “YOUR_ACTUAL_HOMESTEAD_DIRECTORY_PATH” && vagrant $* and, of course, restart Git Bash.
    2. If these commands don’t work in Git Bash, then please try to run them from CMD from now on.

Step 10

Now, we are going to create our first project, named Laravel. Your questions from seeing /home/vagrant/Code/Laravel/public in Step 7 will be answered now. Until now, we only have the /home/vagrant/Code folder. There is no folder named Laravel in the /home/vagrant/Code folder yet.

You can check your project folder on your PC to verify this. In my case, the project folder on my PC is e:/Homestead_Projects. You will see that there is no folder named Laravel in your PC’s project folder. Well, we are now going to create it.

Run Homestead using the homestead up command. Then, run the following command:
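The login command elided here is likely the homestead alias from Step 9 (or plain vagrant ssh from the Homestead directory):

```shell
homestead ssh
```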

This will log you into the Vagrant box. Type ls and press Enter. You will see there is only one folder, named Code. Type cd Code and press Enter. Now, you are in the Code folder. Type ls and press Enter again, and you will see that there is nothing in this folder yet.

Now, it’s time to create our first laravel project here. Run the following command:
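Per the official Laravel docs, the project-creation command referenced here is typically run from inside the shared Code folder (the --prefer-dist flag is optional):

```shell
cd ~/Code
composer create-project --prefer-dist laravel/laravel Laravel
```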

This command will take some time and create a Laravel project in the Laravel folder. Type ls and press Enter, and you will see there is a folder named Laravel. Go to your project folder on your PC (in my case, e:/Homestead_Projects), and you will see that there is a folder named Laravel there, too. Now you can see that the /home/vagrant/Code folder and your project folder really are the same folder.

Step 11

Well, everything is set now. Make sure Homestead is running. Now, type the site’s address in your browser and press Enter. You should see the Laravel 5 welcome page now 🙂

Congratulations! Please share this article and leave a comment for any questions or feedback. 

Further Reading


Adding Dynamic And Async Functionality To JAMstack Sites — S...


Skipping servers and using the JAMstack to build and deliver websites and apps can save time, money, and headache by allowing us to deliver only static assets on a CDN. But the trade-off of ditching traditional server-based deployments means that standard approaches to dynamic, asynchronous interactions in our sites and apps aren’t available anymore.

Does that mean that JAMstack sites can’t handle dynamic interactions? Definitely not!

JAMstack sites are great for creating highly dynamic, asynchronous interactions. With some small adjustments to how we think about our code, we can create fun, immersive interactions using only static assets!

It’s increasingly common to see websites built using the JAMstack — that is, websites that can be served as static HTML files built from JavaScript, Markup, and APIs. Companies love the JAMstack because it reduces infrastructure costs, speeds up delivery, and lowers the barriers for performance and security improvements because shipping static assets removes the need for scaling servers or keeping databases highly available (which also means there are no servers or databases that can be hacked). Developers like the JAMstack because it cuts down on the complexity of getting a website live on the internet: there are no servers to manage or deploy; we can write front-end code and it just goes live, like magic.

(“Magic” in this case is automated static deployments, which are available for free from a number of companies, including Netlify, where I work.)

But if you spend a lot of time talking to developers about the JAMstack, the question of whether or not the JAMstack can handle Serious Web Applications™ will come up. After all, JAMstack sites are static sites, right? And aren’t static sites super limited in what they can do?

This is a really common misconception, and in this article we’re going to dive into where the misconception comes from, look at the capabilities of the JAMstack, and walk through several examples of using the JAMstack to build Serious Web Applications™.

JAMstack Fundamentals

Phil Hawksworth explains what JAMStack actually means and when it makes sense to use it in your projects, as well as how it affects tooling and front-end architecture. Read article →

What Makes A JAMstack Site “Static”?

Web browsers today load HTML, CSS, and JavaScript files, just like they did back in the 90s.

A JAMstack site, at its core, is a folder full of HTML, CSS, and JavaScript files.

These are “static assets”, meaning we don’t need an intermediate step to generate them (for example, PHP projects like WordPress need a server to generate the HTML on every request).

That’s the true power of the JAMstack: it doesn’t require any specialized infrastructure to work. You can run a JAMstack site on your local computer, by putting it on your preferred content delivery network (CDN), hosting it with services like GitHub Pages — you can even drag-and-drop the folder into your favorite FTP client to upload it to shared hosting.

Static Assets Don’t Necessarily Mean Static Experiences

Because JAMstack sites are made of static files, it’s easy to assume that the experience on those sites is, y’know, static. But that’s not the case!

JavaScript is capable of doing a whole lot of dynamic stuff. After all, modern JavaScript frameworks are static files after we get through the build step — and there are hundreds of examples of incredibly dynamic website experiences powered by them.

There is a common misconception that “static” means inflexible or fixed. But all that “static” really means in the context of “static sites” is that browsers don’t need any help delivering their content — they’re able to use them natively without a server handling a processing step first.

Or, put another way:

“Static assets” does not mean static apps; it means no server required.

Can The JAMstack Do That?

If someone asks about building a new app, it’s common to see suggestions for JAMstack approaches such as Gatsby, Eleventy, Nuxt, and other similar tools. It’s equally common to see objections arise: “static site generators can’t do _______”, where _______ is something dynamic.

But — as we touched on in the previous section — JAMstack sites can handle dynamic content and interactions!

Here’s an incomplete list of things that I’ve repeatedly heard people claim the JAMstack can’t handle that it definitely can:

  • Load data asynchronously
  • Handle processing files, such as manipulating images
  • Read from and write to a database
  • Handle user authentication and protect content behind a login

In the following sections, we’ll look at how to implement each of these workflows on a JAMstack site.

If you can’t wait to see the dynamic JAMstack in action, you can check out the demos first, then come back and learn how they work.

A note about the demos:

These demos are written without any frameworks. They are only HTML, CSS, and standard JavaScript. They were built with modern browsers (e.g. Chrome, Firefox, Safari, Edge) in mind and take advantage of newer features like JavaScript modules, HTML templates, and the Fetch API. No polyfills were added, so if you’re using an unsupported browser, the demos will probably fail.

Load Data From A Third-Party API Asynchronously

“What if I need to get new data after my static files are built?”

In the JAMstack, we can take advantage of numerous asynchronous request libraries, including the built-in Fetch API, to load data using JavaScript at any point.

Demo: Search A Third-Party API From A JAMstack Site

A common scenario that requires asynchronous loading is when the content we need depends on user input. For example, if we build a search page for the Rick & Morty API, we don’t know what content to display until someone has entered a search term.

To handle that, we need to:

  1. Create a form where people can type in their search term,
  2. Listen for a form submission,
  3. Get the search term from the form submission,
  4. Send an asynchronous request to the Rick & Morty API using the search term,
  5. Display the request results on the page.

First, we need to create a form and an empty element that will contain our search results, which looks like this:

  <label for="name">Find characters by name</label>
  <input type="text" id="name" name="name" required />
  <button type="submit">Search</button>

<ul id="search-results"></ul>

Next, we need to write a function that handles form submissions. This function will:

  • Prevent the default form submission behavior
  • Get the search term from the form input
  • Use the Fetch API to send a request to the Rick & Morty API using the search term
  • Call a helper function that displays the search results on the page

We also need to add an event listener on the form for the submit event that calls our handler function.

Here’s what that code looks like altogether:

<script type="module">
  import showResults from './show-results.js';

  const form = document.querySelector('form');

  const handleSubmit = async event => {
    // keep the browser from reloading the page
    event.preventDefault();

    // get the search term from the form input
    const name = form.elements['name'].value;

    // send a request to the Rick & Morty API based on the user input
    const characters = await fetch(
      `https://rickandmortyapi.com/api/character/?name=${name}`,
    )
      .then(response => response.json())
      .catch(error => console.error(error));

    // add the search results to the DOM
    showResults(characters);
  };

  form.addEventListener('submit', handleSubmit);
</script>

Note: to stay focused on dynamic JAMstack behaviors, we will not be discussing how utility functions like showResults are written. The code is thoroughly commented, though, so check out the source to learn how it works!

With this code in place, we can load our site in a browser and we’ll see the empty form with no results showing:

Empty search form
The empty search form (Large preview)

If we enter a character name (e.g. “rick”) and click “search”, we see a list of characters whose names contain “rick” displayed:

Search form filled with “rick” with characters named “Rick” displayed below.
We see search results after the form is filled out. (Large preview)

Hey! Did that static site just dynamically load data? Holy buckets!

You can try this out for yourself on the live demo, or check out the full source code for more details.

Handle Expensive Computing Tasks Off the User’s Device

In many apps, we need to do things that are pretty resource-intensive, such as processing an image. While some of these kinds of operations are possible using client-side JavaScript only, it’s not necessarily a great idea to make your users’ devices do all that work. If they’re on a low-powered device or trying to stretch out their last 5% of battery life, making their device do a bunch of work is probably going to be a frustrating experience for them.

So does that mean that JAMstack apps are out of luck? Not at all!

The “A” in JAMstack stands for APIs. This means we can send off that work to an API and avoid spinning our users’ computer fans up to the “hover” setting.

“But wait,” you might say. “If our app needs to do custom work, and that work requires an API, doesn’t that just mean we’re building a server?”

Thanks to the power of serverless functions, we don’t have to!

Serverless functions (also called “lambda functions”) are a sort of API without any server boilerplate required. We get to write a plain old JavaScript function, and all of the work of deploying, scaling, routing, and so on is offloaded to our serverless provider of choice.

Using serverless functions doesn’t mean there’s not a server; it just means that we don’t need to think about a server.

Serverless functions are the peanut butter to our JAMstack: they unlock a whole world of high-powered, dynamic functionality without ever asking us to deal with server code or devops.

Demo: Convert An Image To Grayscale

Let’s assume we have an app that needs to:

  • Download an image from a URL
  • Convert that image to grayscale
  • Upload the converted image to a GitHub repo

As far as I know, there’s no way to do image conversions like that entirely in the browser — and even if there was, it’s a fairly resource-intensive thing to do, so we probably don’t want to put that load on our users’ devices.

Instead, we can submit the URL to be converted to a serverless function, which will do the heavy lifting for us and send back a URL to a converted image.

For our serverless function, we’ll be using Netlify Functions. In our site’s code, we add a folder at the root level called “functions” and create a new file called “convert-image.js” inside. Then we write what’s called a handler, which is what receives and — as you may have guessed — handles requests to our serverless function.

To convert an image, it looks like this:

const tmp = require('tmp');
// convertToGrayscale and uploadToGitHub are helper functions defined elsewhere in the source

exports.handler = async event => {
  // only try to handle POST requests
  if (event.httpMethod !== 'POST') {
    return { statusCode: 404, body: '404 Not Found' };
  }

  try {
    // get the image URL from the POST submission
    const { imageURL } = JSON.parse(event.body);

    // use a temporary directory to avoid intermediate file cruft
    const tmpDir = tmp.dirSync();

    const convertedPath = await convertToGrayscale(imageURL, tmpDir);

    // upload the processed image to GitHub
    const response = await uploadToGitHub(convertedPath, tmpDir);

    return {
      statusCode: 200,
      body: JSON.stringify({ url: response.url }),
    };
  } catch (error) {
    return {
      statusCode: 500,
      body: JSON.stringify(error.message),
    };
  }
};

This function does the following:

  1. Checks to make sure the request was sent using the HTTP POST method
  2. Grabs the image URL from the POST body
  3. Creates a temporary directory for storing files that will be cleaned up once the function is done executing
  4. Calls a helper function that converts the image to grayscale
  5. Calls a helper function that uploads the converted image to GitHub
  6. Returns a response object with an HTTP 200 status code and the newly uploaded image’s URL

Note: We won’t go over how the helper functions for image conversion or uploading to GitHub work, but the source code is well commented so you can see how it works.

Next, we need to add a form that will be used to submit URLs for processing and a place to show the before and after:

<form>
  <label for="imageURL">URL of an image to convert</label>
  <input type="url" name="imageURL" required />
  <button type="submit">Convert</button>
</form>

<div id="converted"></div>

Finally, we need to add an event listener to the form so we can send off the URLs to our serverless function for processing:

<script type="module">
  import showResults from './show-results.js';

  const form = document.querySelector('form');
  form.addEventListener('submit', event => {
    // keep the browser from reloading the page
    event.preventDefault();

    // get the image URL from the form
    const imageURL = form.elements['imageURL'].value;

    // send the image off for processing
    const promise = fetch('/.netlify/functions/convert-image', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ imageURL }),
    })
      .then(result => result.json())
      .catch(error => console.error(error));

    // do the work to show the result on the page
    showResults(imageURL, promise);
  });
</script>

After deploying the site (along with its new “functions” folder) to Netlify and/or starting up Netlify Dev in our CLI, we can see the form in our browser:

Empty image conversion form
An empty form that accepts an image URL (Large preview)

If we add an image URL to the form and click “convert”, we’ll see “processing…” for a moment while the conversion is happening, then we’ll see the original image and its newly created grayscale counterpart:

Form filled with an image URL, showing the original image below on the left and the converted image to the right
The image is converted from full color to grayscale. (Large preview)

Oh dang! Our JAMstack site just handled some pretty serious business and we didn’t have to think about servers once or drain our users’ batteries!

Use A Database To Store And Retrieve Entries

In many apps, we’re inevitably going to need the ability to save user input. And that means we need a database.

You may be thinking, “So that’s it, right? The jig is up? Surely a JAMstack site — which you’ve told us is just a collection of files in a folder — can’t be connected to a database!”

Au contraire.

As we saw in the previous section, serverless functions give us the ability to do all sorts of powerful things without needing to create our own servers.

Similarly, we can use database-as-a-service (DBaaS) tools (such as Fauna) to read and write to a database without having to set one up or host it ourselves.

DBaaS tools massively simplify the process of setting up databases for websites: creating a new database is as straightforward as defining the types of data we want to store. The tools automatically generate all of the code to manage create, read, update, and delete (CRUD) operations and make it available for us to use via API, so we don’t have to actually manage a database; we just get to use it.

Demo: Create a Petition Page

If we want to create a small app to collect digital signatures for a petition, we need to set up a database to store those signatures and allow the page to read them out for display.

For this demo we’ll use Fauna as our DBaaS provider. We won’t go deep into how Fauna works, but in the interest of demonstrating the small amount of effort required to set up a database, let’s list every step and click it takes to get a ready-to-use database:

  1. Create a Fauna account
  2. Click “create a new database”
  3. Give the database a name (e.g. “dynamic-jamstack-demos”)
  4. Click “create”
  5. Click “security” in the left-hand menu on the next page
  6. Click “new key”
  7. Change the role dropdown to “Server”
  8. Add a name for the key (e.g. “Dynamic JAMstack Demos”)
  9. Store the key somewhere secure for use with the app
  10. Click “save”
  11. Click “GraphQL” in the left-hand menu
  12. Click “import schema”
  13. Upload a file called db-schema.gql that contains the following code:
type Signature {
  name: String!
}

type Query {
  signatures: [Signature!]!
}
Once we upload the schema, our database is ready to use. (Seriously.)

Thirteen steps is a lot, but with those thirteen steps, we just got a database, a GraphQL API, automatic management of capacity, scaling, deployment, security, and more — all handled by database experts. For free. What a time to be alive!

To try it out, the “GraphQL” option in the left-hand menu gives us a GraphQL explorer with documentation on the available queries and mutations that allow us to perform CRUD operations.

Note: We won’t go into details about GraphQL queries and mutations in this post, but Eve Porcello wrote an excellent intro to sending GraphQL queries and mutations if you want a primer on how it works.

With the database ready to go, we can create a serverless function that stores new signatures in the database:

const qs = require('querystring');
const graphql = require('./util/graphql');

exports.handler = async event => {
  try {
    // get the signature from the POST data
    const { signature } = qs.parse(event.body);

    const ADD_SIGNATURE = `
      mutation($signature: String!) {
        createSignature(data: { name: $signature }) {
          _id
        }
      }
    `;

    // store the signature in the database
    await graphql(ADD_SIGNATURE, { signature });

    // send people back to the petition page
    return {
      statusCode: 302,
      headers: {
        Location: '/03-store-data/',
      },
      // body is unused in 3xx codes, but required in all function responses
      body: 'redirecting...',
    };
  } catch (error) {
    return {
      statusCode: 500,
      body: JSON.stringify(error.message),
    };
  }
};
This function does the following:

  1. Grabs the signature value from the form POST data
  2. Defines a GraphQL mutation to write to the database
  3. Sends off the mutation using a GraphQL helper function
  4. Redirects back to the page that submitted the data

Next, we need a serverless function to read out all of the signatures from the database so we can show how many people support our petition:

const graphql = require('./util/graphql');

exports.handler = async () => {
  const { signatures } = await graphql(`
    query {
      signatures {
        data {
          name
        }
      }
    }
  `);

  return {
    statusCode: 200,
    body: JSON.stringify(signatures.data),
  };
};
This function sends off a query for all of the stored signatures and returns them as JSON.

An important note about sensitive keys and JAMstack apps:

One thing to note about this app is that we’re using serverless functions to make these calls because we need to pass a private server key to Fauna that proves we have read and write access to this database. We cannot put this key into client-side code, because that would mean anyone could find it in the source code and use it to perform CRUD operations against our database. Serverless functions are critical for keeping private keys private in JAMstack apps.

Once we have our serverless functions set up, we can add a form that submits to the function for adding a signature, an element to show existing signatures, and a little bit of JS to call the function to get signatures and put them into our display element:

<form action="/.netlify/functions/add-signature" method="POST">
  <label for="signature">Your name</label>
  <input type="text" name="signature" required />
  <button type="submit">Sign</button>
</form>

<ul class="signatures"></ul>

<script>
  // load the stored signatures (the function name is reconstructed from context)
  fetch('/.netlify/functions/get-signatures')
    .then(res => res.json())
    .then(names => {
      const signatures = document.querySelector('.signatures');

      names.forEach(({ name }) => {
        const li = document.createElement('li');
        li.innerText = name;
        signatures.appendChild(li);
      });
    });
</script>

If we load this in the browser, we’ll see our petition form with signatures below it:

An empty form that accepts a digital signature

Then, if we add our signature…

The petition form with a name filled in

…and submit it, we’ll see our name appended to the bottom of the list:

The petition form clears and the new signature is added to the bottom of the list.

Hot diggity dog! We just wrote a full-on database-powered JAMstack app with about 75 lines of code and 7 lines of database schema!

Protect Content With User Authentication

“Okay, you’re for sure stuck this time,” you may be thinking. “There is no way a JAMstack site can handle user authentication. How the heck would that work, even?!”

I’ll tell you how it works, my friend: with our trusty serverless functions and OAuth.

OAuth is a widely-adopted standard for allowing people to give apps limited access to their account info rather than sharing their passwords. If you’ve ever logged into a service using another service (for example, “sign in with your Google account”), you’ve used OAuth before.

Note: We won’t go deep into how OAuth works, but Aaron Parecki wrote a solid overview of OAuth that covers the details and workflow.

In JAMstack apps, we can take advantage of OAuth, and the JSON Web Tokens (JWTs) that it provides us with for identifying users, to protect content and only allow logged-in users to view it.
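A JWT is just three base64url-encoded segments — header, payload, signature — joined by dots, which is what makes it so easy to pass around in an Authorization header. Here’s a toy illustration with a fake signature (real tokens are cryptographically signed, and the signature must be verified server-side before the payload is trusted):

```javascript
// Build a toy JWT to show its structure; all values here are made up.
const encode = obj => Buffer.from(JSON.stringify(obj)).toString('base64url');

const header = { alg: 'HS256', typ: 'JWT' };
const payload = { sub: 'user-123', email: 'friend@example.com' };
const token = [encode(header), encode(payload), 'fake-signature'].join('.');

// Anyone can read the payload back out without the signing secret --
// which is exactly why the server must verify the signature first.
const decoded = JSON.parse(Buffer.from(token.split('.')[1], 'base64url').toString());

console.log(decoded.email); // "friend@example.com"
```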

Demo: Require Login to View Protected Content

If we need to build a site that only shows content to logged-in users, we need a few things:

  1. An identity provider that manages users and the sign-in flow
  2. UI elements to manage logging in and logging out
  3. A serverless function that checks for a logged-in user using JWTs and returns protected content if one is provided

For this example, we’ll use Netlify Identity, which gives us a really pleasant developer experience for adding authentication and provides a drop-in widget for managing login and logout actions.

To enable it:

  • Visit your Netlify dashboard
  • Choose the site that needs auth from your sites list
  • Click “identity” in the top nav
  • Click the “Enable Identity” button

We can add Netlify Identity to our site by adding markup that shows logged out content and adds an element to show protected content after logging in:

<div class="content logged-out">
  <h1>Super Secret Stuff!</h1>
  <p>only my bestest friends can see this content</p>
  <button class="login">log in / sign up to be my best friend</button>
</div>

<div class="content logged-in">
  <div class="secret-stuff"></div>
  <button class="logout">log out</button>
</div>

This markup relies on CSS to show content based on whether the user is logged in or not. However, we can’t rely on that to actually protect the content — anyone could view the source code and steal our secrets!

Instead, we create an empty div that will contain our protected content, and we’ll make a request to a serverless function to actually get that content. We’ll dig into how that works shortly.

Next, we need to add code to make our login button work, load the protected content, and show it on screen:

<script src="https://identity.netlify.com/v1/netlify-identity-widget.js"></script>
<script>
  const login = document.querySelector('.login');
  login.addEventListener('click', () => {
    netlifyIdentity.open();
  });

  const logout = document.querySelector('.logout');
  logout.addEventListener('click', () => {
    netlifyIdentity.logout();
  });

  netlifyIdentity.on('logout', () => {
    document.querySelector('body').classList.remove('authenticated');
  });

  netlifyIdentity.on('login', async () => {
    document.querySelector('body').classList.add('authenticated');

    const token = await netlifyIdentity.currentUser().jwt();

    const response = await fetch('/.netlify/functions/get-secret-content', {
      headers: {
        Authorization: `Bearer ${token}`,
      },
    }).then(res => res.text());

    document.querySelector('.secret-stuff').innerHTML = response;
  });
</script>

Here’s what this code does:

  1. Loads the Netlify Identity widget, which is a helper library that creates a login modal, handles the OAuth workflow with Netlify Identity, and gives our app access to the logged-in user’s info
  2. Adds an event listener to the login button that triggers the Netlify Identity login modal to open
  3. Adds an event listener to the logout button that calls the Netlify Identity logout method
  4. Adds an event handler for logging out to remove the authenticated class on logout, which hides the logged-in content and shows the logged-out content
  5. Adds an event handler for logging in that:
    1. Adds the authenticated class to show the logged-in content and hide the logged-out content
    2. Grabs the logged-in user’s JWT
    3. Calls a serverless function to load protected content, sending the JWT in the Authorization header
    4. Puts the secret content in the secret-stuff div so logged-in users can see it

Right now the serverless function we’re calling in that code doesn’t exist. Let’s create it with the following code:

exports.handler = async (_event, context) => {
  try {
    const { user } = context.clientContext;

    if (!user) throw new Error('Not Authorized');

    return {
      statusCode: 200,
      headers: {
        'Content-Type': 'text/html',
      },
      body: `
        If you can read this it means we're best friends.

        Here are the secret details for my birthday party:
      `,
    };
  } catch (error) {
    return {
      statusCode: 401,
      body: 'Not Authorized',
    };
  }
};

This function does the following:

  1. Checks for a user in the serverless function’s context argument
  2. Throws an error if no user is found
  3. Returns secret content after ensuring that a logged-in user requested it

Netlify Functions will detect Netlify Identity JWTs in Authorization headers and automatically put that information into context — this means we can check for a valid JWT without needing to write the validation code ourselves!
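To make that concrete, here’s roughly what the function sees when a valid token was sent — the field names below are illustrative, not Netlify’s exact payload:

```javascript
// Illustrative only: an approximation of the context object a Netlify
// Function receives when the client sent a valid Identity JWT.
const context = {
  clientContext: {
    user: {
      email: 'friend@example.com',
      sub: '1234-abcd',    // the user's unique ID
      exp: 1893456000,     // token expiry as a Unix timestamp
    },
  },
};

// the same guard used in the function above
const { user } = context.clientContext;
if (!user) throw new Error('Not Authorized');

console.log(user.email); // "friend@example.com"
```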

When we load this page in our browser, we’ll see the logged out page first:

When logged out, we can only see information about logging in.

If we click the button to log in, we’ll see the Netlify Identity widget:

The Netlify Identity widget provides the whole login/sign-up experience.

After logging in (or signing up), we can see the protected content:

After logging in, we can see protected content.

Wowee! We just added user login and protected content to a JAMstack app!

What To Do Next

The JAMstack is much more than “just static sites” — we can respond to user interactions, store data, handle user authentication, and just about anything else we want to do on a modern website. And all without the need to provision, configure, or deploy a server!

What do you want to build with the JAMstack? Is there anything you’re still not convinced the JAMstack can handle? I’d love to hear about it — hit me up on Twitter or in the comments!

Smashing Editorial(dm, il)


How to Build Your Resume on npm

Just yesterday, Ali Churcher shared a neat way to make a resume using a CSS Grid layout. Let’s build off that a bit by creating a template that we can spin up whenever we want using the command line. The cool thing about that is that you’ll be able to run it with just one command.

I know the command line can be intimidating, and yes, we’ll be working in Node.js a bit. We’ll keep things broken out into small steps to make it easier to follow along.

Like many projects, there’s a little setup involved. Start by creating an empty folder in your working directory and initialize a project using npm or Yarn.

mkdir your-project && cd "$_"

## npm
npm init

## Yarn
yarn init

Whatever name you use for “your-project” will be the name of your package in the npm registry.

The next step is to create an entry file for the application, which is index.js in this case. We also need a place to store data, so create another file called data.json. You can create both from the command line:

touch index.js && touch data.json

Creating the command line interface

The big benefit we get from creating this app is that it gives us a semi-visual way to create a resume directly in the command line. We need a couple of things to get that going:

  • The object to store the data
  • An interactive command line interface (which we’ll build using Inquirer.js)

Let’s start with that first one. Crack open data.json and add the following:

  "Education": [
    "Some info",
    "Less important info",
    "Etc, etc."
  "Experience": [
    "Some info",
    "Less important info",
    "Etc, etc."
  "Contact": [
    "A way to contact you"

This is just an example that defines the objects and keys that will be used for each step in the interface. You can totally modify it to suit your own needs.

That’s the first thing we needed. The second thing is the interactive interface, and Inquirer.js will handle 90% of it. Feel free to read more about the package, because you can build more advanced interfaces as you get more familiar with its ins and outs. Let’s install it, along with a package called chalk:

yarn add inquirer chalk

What’s that chalk thing? It’s a library that’s going to help us customize our terminal output by adding some color and styling for a better experience.
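Under the hood, all chalk really does is wrap strings in ANSI escape codes that terminals interpret as styling. A hand-rolled equivalent of `chalk.bold.blue` might look something like this (for illustration only — in the app we just use chalk itself):

```javascript
// ANSI escape codes: 1 = bold on, 22 = bold off, 34 = blue fg, 39 = default fg
const bold = '\u001b[1m', boldOff = '\u001b[22m';
const blue = '\u001b[34m', colorOff = '\u001b[39m';

// roughly what chalk.bold.blue does to a string
const response = s => `${bold}${blue}${s}${colorOff}${boldOff}`;

// in a terminal this prints in bold blue; the raw string just carries the codes
console.log(response('|   => Some info'));
```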

Now let’s open up index.js and paste the following code:

#!/usr/bin/env node

"use strict";

const inquirer = require("inquirer");
const chalk = require("chalk");
const data = require("./data.json");

// add response color
const response = chalk.bold.blue;

const resumeOptions = {
  type: "list",
  name: "resumeOptions",
  message: "What do you want to know",
  choices: [...Object.keys(data), "Exit"]
};

function showResume() {
  console.log("Hello, this is my resume");
  handleResume();
}

function handleResume() {
  inquirer.prompt(resumeOptions).then(answer => {
    if (answer.resumeOptions == "Exit") return;

    const options = data[`${answer.resumeOptions}`];
    if (options) {
      console.log(response(new inquirer.Separator()));
      options.forEach(info => {
        console.log(response("|   => " + info));
      });
      console.log(response(new inquirer.Separator()));
    }

    inquirer.prompt({
      type: "list",
      name: "exitBack",
      message: "Go back or Exit?",
      choices: ["Back", "Exit"]
    }).then(choice => {
      if (choice.exitBack == "Back") {
        handleResume();
      }
    });
  }).catch(err => console.log('Ooops,', err));
}

showResume();


Zoikes! That’s a big chunk of code. Let’s tear it down a bit to explain what’s going on.

At the top of the file, we import everything needed to run the app and set the color styles using the chalk library. If you are interested in colors and customization, check out the chalk documentation, because you can get pretty creative with things.

const inquirer = require("inquirer");
const chalk = require("chalk");
const data = require("./data.json");

// add response color
const response = chalk.bold.blue;

Next thing that code is doing is creating our list of resume options. Those are what will be displayed after we type our command in terminal. We’re calling it resumeOptions so we know exactly what it does.

const resumeOptions = {
  type: "list",
  name: "resumeOptions",
  message: "What do you want to know",
  choices: [...Object.keys(data), "Exit"]
};

We are mostly interested in the choices field because it makes up the keys from our data object while providing us a way to “Exit” the app if we need to.
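With the data.json defined earlier, the spread works out like this:

```javascript
// the same keys defined in data.json
const data = {
  Education: ["Some info"],
  Experience: ["Some info"],
  Contact: ["A way to contact you"]
};

// spread the object's keys into an array and tack "Exit" on the end
const choices = [...Object.keys(data), "Exit"];

console.log(choices); // ["Education", "Experience", "Contact", "Exit"]
```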

After that, we create the function showResume(), which will be our main function that runs right after launching. It shows a short welcome message and runs our handleResume() function.

function showResume() {
  console.log("Hello, this is my resume");
  handleResume();
}

OK, now for the big one: the handleResume() function. The first part is a conditional check to make sure we haven’t exited the app and to display the registered options from our data object if all is good. In other words, if the chosen option is Exit, we quit the program. Otherwise, we fetch the list of options that are available for us under the chosen key.

So, once the app has confirmed that we are not exiting, we get answer.resumeOptions which, as you may have guessed, spits out the list of sections we defined in the data.json file. The ones we defined were Education, Experience, and Contact.

That brings us to the Inquirer.js stuff.

Did you notice that new inquirer.Separator() function in the options output? That’s a feature of Inquirer.js that provides a visual separator between content to break things up a bit and make the interface a little easier to read.

Alright, we are showing the list of options! Now we need a way to go back to the previous screen. To do so, we create another inquirer.prompt, passing in a new object, but this time with only two options: Exit and Back. It returns a promise with the answer we need to handle. If the chosen option is Back, we run handleResume(), meaning we open our main screen with the options again; if we choose Exit, we quit the function.

Lastly, we will add the catch statement to catch any possible errors. Good practice. 🙂

Publishing to npm

Congrats! Try running node index.js and you should be able to test the app.

That’s great and all, but it’d be nicer to make it run without having to be in the working directory each time. Publishing the package is much more straightforward than the functions we just looked at.

  1. Register an npm account if you don’t have one.
  2. Add a user to your CLI by running npm adduser.
  3. Provide the username and password you used to register the npm account.
  4. Go to package.json and add following lines:
    "bin": {
      "your-package-name": "./index.js"
  5. Add a README.md file that will show up on the app’s npm page.
  6. Publish the package.
npm publish --access=public

Anytime you update the package, you can push those updates to npm. You can read more about versioning in the npm docs.

npm version patch // 1.0.1
npm version minor // 1.1.0
npm version major // 2.0.0

And to push the updates to npm:

npm publish

Resume magic!

That’s it! Now you can experience the magic of typing npx your-package-name into the command line and creating your resume right there. By the way, npx is a way to run commands without installing them globally on your machine; it’s available automatically if you have npm installed.

This is just a simple terminal app, but understanding the logic behind it will let you create amazing things — this is your first step on the way there.

Source Code

Happy coding!
