
Space Photos of the Week: Venus Is the Spacecraft-Killer


It’s a star! It’s a UFO! Nope, it’s probably Venus. If you’ve ever gone outside for a late walk and spotted a big, gorgeous, bright “star” in the sky, it’s likely that you were looking at Venus. The planet is named after the Roman goddess of love, and NASA is giving Venus a little extra love right now: It’s in the process of evaluating two possible missions to the planet, and both of them have the potential to reshape our understanding of how terrestrial planets form, Venus in particular. Venus is covered in a thick atmosphere composed primarily of carbon dioxide, sustained in part by a runaway greenhouse effect. Hidden below this cloud cover is the most volcanic planet in the solar system.

The two proposed Venus missions are very different, and each would accomplish something unique. The first is VERITAS (Venus Emissivity, Radio Science, InSAR, Topography, and Spectroscopy). This orbiter would map the surface of Venus to better understand its complex features and help determine more about Venus’s plate tectonics and whether the planet is still geologically active. The other option is a one-and-done deal called DAVINCI+ (Deep Atmosphere Venus Investigation of Noble gases, Chemistry, and Imaging Plus). DAVINCI+ would drop a spherical probe through the atmosphere, and during its descent the probe would collect data to help scientists better understand what the atmosphere is composed of and complete the picture of how the planet formed.

Sure, it rains sulphuric acid and kills every spacecraft that lands there, but NASA really wants to go to the planet anyway. So grab your spacesuit and get ready to show a little love to this bizarre world.

Venus looks like Mordor in this photo, which is a composite of data from three different missions—two NASA projects and information from Russia’s Venera spacecraft. Photograph: JPL
Abstract art? Or ancient volcanic remnants? While it looks like the former, it’s actually volcano leftovers. In 1996, NASA’s Magellan spacecraft captured this closeup photo of some complex lava flows running south towards another volcano. Photograph: NASA/JPL
NASA’s Pioneer Venus Orbiter studied the planet for over a decade, and while it was there it captured this stunning photo. Here we can see the wind patterns in the atmosphere and get some sense of just how thick the clouds are. Photograph: NASA
Most terrestrial planets have channels like the ones we see here: On Earth they are mostly formed by water, on Mars by lava flows. Venus has lava channels too. This photo shows a 360-mile-long channel, the longest on the planet. Photograph: NASA/JPL
NASA’s Magellan spacecraft captured this crater, called Stephania, seen here in white. At only 6.8 miles wide, it’s relatively small as craters go, and there’s a good reason for that. Any meteors aimed at Venus get ripped to shreds as they travel through its atmosphere, so the dents they make in the surface are relatively small. Photograph: NASA/JPL



Website animations : webdev


Hello all, I was wondering where I can get animations like the ones on this website: https://waaark.com/. I don’t even need them to be interactive like on that site; I just want static pics of animations like this.

I recall someone in this sub posting a link to a website that offered such animations for free, but I couldn’t find the post.

Thanks in advance.


Like this to be more accurate





Ably Masterclass, Episode 1 — Building a Realtime Voting App…


It was just a month ago that the idea of hosting a monthly masterclass series surfaced at Ably, and yesterday I hosted the first episode, where I taught the audience how to build a realtime voting app in less than an hour.

So, in this post I’m summarizing what happened, along with links to some useful resources to check out.

I’ve hosted the slides online so you can check them out as well!

As this was the first episode of the series, I thought it’d be best to start off with an intro to what ‘Realtime’ means and demystify some buzzwords such as Pub/Sub, persistent connections, event-driven systems, etc. I’ve explained this below:

A Super Brief Intro to Realtime Concepts

Since the advent of the internet, its participating entities have communicated over the well-known Hypertext Transfer Protocol (HTTP). You have requests and responses and new connection cycles for each request-response style communication. This has worked incredibly well for applications where the primary goal is to fetch some data, send some data, or get some computation done at the server-side.

REST Requests

But the past couple of years have seen data sharing move to a more event-driven approach. This means that the participating entities are interested in being notified of something as soon as it has occurred. This is the kind of communication that is popularly referred to as “real-time.” Examples of such applications are live location-tracking apps, chat apps, HQ-style quiz apps, live data streams involving bitcoin prices, stock markets, and news, to name a few.

Realtime streaming examples
If you think about such applications, traditional HTTP-based communication using REST requests falls short. Setting up a new connection cycle for each request, communication that only the client can initiate, and the sheer data overhead all add up to a lot of latency – and if there’s one defining trait of a ‘realtime’ application, it’s that it communicates with the least possible latency.

So if not HTTP, then perhaps Long Polling? Well, Long Polling has almost the same disadvantages as HTTP-based communication when it comes to realtime apps. Granted, it can keep the connection open until the server has some data to return in the response, but we need something that lets the server communicate with a client whenever it likes – in other words, server-initiated communication. So Long Polling isn’t an option, either.

This is where the concept of “real-time subscriptions” comes into play. There are certain realtime protocols such as WebSockets, MQTT, and SSE that allow for push-based communication (as opposed to the pull-based way of getting updates via HTTP). Let’s take a quick look at WebSockets:

WebSockets

Communication starts off as an HTTP request but with an additional upgrade header. If the other party is compliant with WebSockets as well, the connection is upgraded, leading to a full-duplex (communication is possible in both directions) and persistent (the connection can stay open for as long as you need) connection. This works perfectly for event-driven applications: any entity wanting to share an update simply pushes data to the other clients, and they are instantly notified (latency is on the order of a few milliseconds).
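
To make that concrete, here’s a minimal browser-side sketch; the endpoint URL is just a placeholder, not a real service:

// Open a WebSocket connection from the browser (placeholder endpoint).
const socket = new WebSocket('wss://example.com/realtime');

socket.addEventListener('open', () => {
  // The connection is now full-duplex and persistent:
  // either side can push data at any time.
  socket.send(JSON.stringify({ type: 'greeting', body: 'hello' }));
});

socket.addEventListener('message', (event) => {
  // Server-initiated messages arrive here, with no polling required.
  console.log('Received:', event.data);
});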

This is how the concept of publishers and subscribers comes into play. Now that it’s all a bit more clear, let’s get into the app that I built during the webinar. Again, remember, I’m only doing a rapid summary of things I covered in the masterclass here. You should really be looking at the recording for detailed info on realtime concepts as well as the voting app.

Coding the Voting App From Scratch

So, the end result of the app I built looked something like this:

Final application

As you can see, these are two separate entities or apps: one is a voting interface for casting votes using basic HTML buttons, and the other is a chart that shows the resulting votes cast for the different choices. And oh, did I mention? This chart updates in realtime as new votes get cast by users.

The two apps communicate in realtime via Ably, we use FusionCharts to conveniently build a chart from our data, and the whole thing is publicly hosted using Glitch.

Starting with the voting interface app: it’s a basic HTML layout, uses Bootstrap to look fancy, and has a few lines of code to connect to an Ably channel and publish votes as they are cast. You should check this out directly on GitHub, as a blog post is not a good place to host code. But just to sum up, initializing Ably, attaching to a channel, and publishing data is a matter of a few lines of code:
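
Here’s a minimal sketch of that flow. The API key placeholder, channel name, event name, and button markup are illustrative assumptions rather than the exact ones from the masterclass (see the GitHub repo for the real code), and it assumes the Ably browser library is already loaded on the page:

// Voting interface: publish a vote whenever a button is clicked.
const ably = new Ably.Realtime('YOUR-ABLY-API-KEY'); // placeholder key
const channel = ably.channels.get('vote-channel');   // assumed channel name

document.querySelectorAll('.vote-button').forEach((button) => {
  button.addEventListener('click', () => {
    // Each click publishes the chosen option as a message on the channel.
    channel.publish('vote', { choice: button.dataset.choice });
  });
});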

Now, on to the chart app. It’s also a basic HTML layout, with a placeholder <div> tag to house the chart that’ll appear as the data comes through. The main things that need to be done in this app are (sketched after the list below):

  1. Init Ably
  2. Attach to the voting channel
  3. Subscribe to that channel
  4. Pass on the incoming data onto the chart
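
A rough sketch of those steps, again with a placeholder key and assumed channel, event, and choice names; updateChart() is the helper defined in the chart sketch a little further down:

// Chart app: subscribe to the voting channel and tally incoming votes.
const ably = new Ably.Realtime('YOUR-ABLY-API-KEY'); // placeholder key
const channel = ably.channels.get('vote-channel');   // assumed channel name
const voteCounts = { A: 0, B: 0, C: 0 };             // assumed choices

channel.subscribe('vote', (message) => {
  voteCounts[message.data.choice] += 1;
  // Pass the updated tallies on to the chart.
  updateChart(voteCounts);
});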

And for the chart itself, the main things that need to be done are:

  1. Prepare the data in an array of objects format
  2. Specify the metadata for the chart, like the type, dimensions, captions, axes, etc
  3. Init FusionCharts and render the chart with the below attributes
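
Here’s a rough sketch of what those three steps can look like; the chart type, captions, and container ID are illustrative choices rather than necessarily the ones used in the masterclass, and it reuses the voteCounts object from the subscription sketch above:

// 1. Prepare the data in the array-of-objects format FusionCharts expects.
function toChartData(counts) {
  return Object.keys(counts).map((choice) => ({
    label: choice,
    value: counts[choice],
  }));
}

// 2. Specify the chart metadata along with the data itself.
const dataSource = {
  chart: {
    caption: 'Realtime voting results',
    xAxisName: 'Choice',
    yAxisName: 'Votes',
  },
  data: toChartData(voteCounts),
};

// 3. Init FusionCharts and render it into the placeholder <div id="chart-container">.
const chart = new FusionCharts({
  type: 'column2d',
  renderAt: 'chart-container',
  width: '100%',
  height: '400',
  dataFormat: 'json',
  dataSource,
});
chart.render();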

As mentioned, the code shown here is just a quick example; the full source code for the voting results app can be found in the GitHub repo.

One quick note: FusionCharts doesn’t automatically account for data changing after the chart is rendered, so to handle that you’ll need a special method it offers called setJSONData(). It basically takes the same dataSource object specified in the chart config above.
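
In this sketch, that’s all the updateChart() helper called from the subscription code needs to do; it reuses the dataSource and chart objects created above:

// Called whenever a new vote arrives on the channel.
function updateChart(counts) {
  dataSource.data = toChartData(counts);
  chart.setJSONData(dataSource); // re-renders the chart with the new data
}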

That’s basically it!

Useful Resources

  1. GitHub repo with full source code
  2. Presentation slides
  3. Recorded masterclass video
  4. Live demo of the voting interface app
  5. Live demo of the voting results app
  6. Ably Realtime docs
  7. FusionCharts docs

The next one is going to be about IoT, WebHooks, Zapier and other super fun things. You can find all the info about it on the website.

Further Reading

WebSockets vs. Long Polling

Apps Depend on Real-Time Streaming Data, Here’s How to Manage Them





Where to Learn WordPress Theme Development


Over a decade ago, I did a little three-part video series on Designing for WordPress. Then I did other series in the same spirit, like videocasting the whole v10 redesign and a friend’s website, and even writing a book. Those are getting a little long in the tooth though. You might still learn from watching them if you’re getting into WordPress theme development, but there will be moments that feel very aged (old UIs and old versions of software). All the code still works, though, because WordPress is great at backward compatibility. I still hear from people who found those videos very helpful.

But since time has pressed on, and I was recently asked what resources I would suggest now, I figured I’d have a look around and see what looks good to me.

Do you like how I plopped the WordPress logo over some stock art I bought that features both a computer and a chalkboard, by which to evoke a feeling of “learning”? So good. I know.

Who are we talking to?

There’s a spectrum of WordPress developers, from people who don’t know any code at all or barely touch it, to hardcore programming nerds building custom everything.

  1. Pick out a theme that looks good, use it.
  2. 🤷‍♂️
  3. 🤷‍♂️
  4. 🤷‍♂️
  5. 🤷‍♂️
  6. Hardcore programmer nerd.

I can’t speak to anybody on either edge of that spectrum. There is this whole world of people in the middle. They can code, but they aren’t computer science people. They are get-the-job-done people. Maybe it’s something like this:

  1. Pick out a theme that will work, use it.
  2. Start with a theme, customize it a bit using built-in tools.
  3. Start with a theme, hack it up with code to do what you need it to do.
  4. Start from scratch, build out what you need.
  5. Start from scratch, build a highly customized site.
  6. Hardcore programmer nerd.

I’ve always been somewhere around #4, and I think that’s a nice sweet spot. I try to let off-the-shelf WordPress and big popular plugins do the heavy lifting, but I’ll bring my own front-end (HTML, CSS, and JavaScript) and customize what I have to. I’m making templates. I’m writing queries. I’m building blocks. I’m modularizing where I can.

I feel powerful in that zone. I can build a lot of sites that way, almost by myself. So where are the resources today that help you learn this kind of WordPress theme development? Lemme see what I can find.

Wing it, old school

There is something to be said for learning by doing. Trial by fire. I’ve learned a lot under these circumstances in my life.

The trick here is to get WordPress installed on a live server and then play with the settings, plugins, customizer, and edit the theme files themselves to make the site do things. You’ll find HTML in those theme files — hack it up! You’ll see PHP code spitting out content. Can you tell what and how to manipulate it? You’ll find a CSS file in the theme — edit that sucker!

Editing a WordPress theme and seeing what happens

The official documentation can help you somewhat here.

To some degree, I’m a fan of doing it live (on a production website) because it lends a sense of realness to what you are doing when you are a beginner. The stakes are high there, giving you a sense of the power you have. When I make these changes, they are for anyone in the world with an internet connection to see.

I did this in my formative years by buying a domain name and hosting, installing WordPress on that hosting, logging into it with SFTP credentials, and literally working on the live files. I used Coda, which is still a popular app, and is being actively developed into a new version of itself as I write.

This is Nova, a macOS code editor from Panic that has SFTP built-in.

Hopefully, the stakes are real but low. Like you’re working on a pet project or your personal site. At some point, hacking on production sites becomes too dangerous of an idea. One line of misplaced PHP syntax can take down the entire site.

If you’re working on something like a client site, you’ll need to upgrade that workflow.

Modern winging it

The modern, healthy, standard way for working on websites is:

  1. Work on them locally.
  2. Use version control (Git), where new work is done in branches of the master branch.
  3. Deployment to the production website happens when code is pushed to the master branch, like when your development branch is merged in.

I’ve done a recent video on this whole workflow as I do it today. My toolset is:

  • Work locally with Local by Flywheel.
  • My web hosting is also Flywheel, but that isn’t required. It could be anything that gives you SFTP access and runs what WordPress needs: Apache, PHP, and MySQL. Disclosure: Flywheel is a sponsor here, but I use them because I like them and their service :).
  • Code is hosted on a private repo on GitHub.
  • Deployment to the Flywheel hosting is done by Buddy. Buddy watches for pushes to the master branch and moves the files over SFTP to the production site.
Local by Flywheel

Now that you have a local setup, you can go nuts. Do whatever you want. You can’t break anything on the live site, so you’re freer to make experimental changes and just see what happens.

When working locally, it’s likely you’ll be editing files with a code editor. I’d say the most popular choice these days is the free VS Code, but there is also Atom and Sublime, and fancier editors like PhpStorm.

The freedom of hacking on files is especially apparent once you’ve pushed your code up to a Git repo. Once you’ve done that, you have the freedom of reverting files back to the state of the last push.

I use the Git software Tower, which lets me see what files have changed since I last committed code. If I’ve made a mistake, caused a problem, or done something I don’t like — even if I don’t remember exactly what I changed — I can discard those changes back to their last state. That’s a nice level of freedom.

When I do commit code, to master or by merging a branch into master, that’s when Buddy kicks in and deploys the changes to the production site.

CSS-Tricks itself is a WordPress site, which has continuously evolved over 13 years.

But like, where do you start?

We’re talking about WordPress theme development here, so you start with a theme. Themes are literally folders of files in your WordPress installation.

root
  - /wp-content/
    - /themes/
       - /theme-name/

WordPress comes with some themes right out of the box. As I write, the Twenty Twenty theme ships with WordPress, and it’s a nice one! You could absolutely start your theme hackin’ on that.

Themes tend to have some opinions about how they organize themselves and do things, and Twenty Twenty is no different. I’d say, perhaps controversially, that there is no one true way to organize your theme, so long as it’s valid code and does things the “WordPress” way. This is just something you’ll have to get a feel for as you make themes.

Starter themes

Starter themes were a very popular way to start building a theme from scratch in my day. I don’t have a good sense if that’s still true, but the big idea was a theme with all the basic theme templates you’ll need (single blog post pages, a homepage, a 404 page, search results page, etc.) with very little markup and no styling at all. That way you have an empty canvas from which to build out all your HTML, CSS, and JavaScript yourself to your liking. Sorta like you’re building any other site from scratch with these core technologies, only with some PHP in there spitting out content.
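
For reference, a bare-bones starter theme usually boils down to a folder of standard template files along these lines (the names follow the WordPress template hierarchy; the exact set varies from starter to starter):

/theme-name/
  - style.css        (theme metadata header plus your styles)
  - functions.php
  - index.php        (the fallback template)
  - front-page.php   (homepage)
  - single.php       (single blog post pages)
  - page.php
  - archive.php
  - search.php       (search results)
  - 404.php
  - header.php
  - footer.php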

There was a theme called Starkers that was popular, but it’s dead now. I made one called BLANK myself but haven’t touched that in a long time. In looking around a bit, I found some newer themes with this same spirit. Here are the best three I found:

I can’t personally vouch for them, but they’ve all been updated somewhat recently and look like pretty good starting points to me. I’d give them a shot if I were starting from absolute scratch on a project. I’d be tempted to download one, spruce it up exactly how I like it, and save that as my own starter in case I needed to do it again.

It feels worth mentioning that a lot of web development isn’t starting from scratch, but rather working on existing projects. In that case, the process is still getting a local environment set up; you just aren’t starting from scratch, but with the existing theme. I’d suggest duplicating the theme and changing the name while you hack on it, so even if you deploy it, it doesn’t affect the live theme. Others might suggest using the starter as a “parent” theme, then branching off into a “child” theme.

To get your local development environment all synced up with exactly what the production website is like, I think the best tool is WP Migrate DB Pro, which can yank down the production database and all the media files to your local site (paid product and a paid add-on, worth every penny).

Fancier Starter Themes

Rather than starting from absolute scratch, there are themes that come with sensible defaults and even modern build processes for you to start with. The idea is that building a site with essentially raw HTML, CSS, and JavaScript, while entirely doable, just doesn’t have enough modern conveniences to be comfortable.

Here are some.

  • Morten Rand-Hendriksen has a project called WP Rig that has all sorts of developer tools built into it. A Gulp-based build process spins up a BrowserSync server for auto updating. JavaScript gets processed in Babel. CSS gets processed in PostCSS, and code is linted. He teaches WordPress with it.
  • Roots makes a theme called Sage that comes with a templating engine, your CSS framework of choice, and fancy build process stuff.
  • Ignition has a build process and all sorts of helpers.
  • Timber comes with a templating engine and a bunch of code helpers.

I think all these are pretty cool, but are also probably not for just-starting-out beginner developers.

Books

This is tough because of how many there are. In a quick Google search, I found one site selling fifteen WordPress books as a bundle for $9.99. How would you even know where to start? How good can they be for that rock bottom price? I dunno.

I wrote a book with Jeff Starr ages ago called Digging Into WordPress. After all these years, Jeff still keeps the book up to date, so I’d say that’s a decent choice! Jeff has other books like The Tao of WordPress and WordPress Themes In Depth.

A lot of other books specifically about WordPress theme development are just fairly old: 2008-2015 stuff. Again, not that there isn’t anything to be learned there, especially as WordPress doesn’t change that rapidly, but still, I’d want to read a book more recent than half a decade old. Seems like a big opportunity for a target audience as large as WordPress users and developers. Or if there is already stuff that I’m just not finding, lemme know in the comments.

Perhaps learning is shifting so much toward online that people don’t write books as much…

Online learning courses

Our official learning partner Frontend Masters has one course on WordPress focused on JavaScript and WordPress, so that might not be quite perfect for learning the basics of theme development. Still, fascinating stuff.

Here are some others that looked good to me while looking around:

Zac’s course looks like the most updated and perhaps the best option there.

A totally different direction for theme development

One way to build a site with WordPress is not to use WordPress themes at all! Instead, you can use the WordPress API to suck data out of WordPress and build a site however the heck you please.

This idea of decoupling the CMS and the front end you build is pretty neat. It’s often referred to as using a “headless” CMS. It’s not for everyone. (One big reason is that, in a way, it doubles your technical debt.) But it can give both the CMS and the front end the freedom to evolve independently.





How to Customize the WooCommerce Cart Page on a WordPress Site


A standard e-commerce site has a few common pages. There are product pages, shop pages that list products, and let’s not forget pages for the user account, checkout flow and cart.

WooCommerce makes it a trivial task to set these up on a WordPress site because it provides templates for them and creates the pages for you right out of the box. This is what makes it easy to get your store up and running in a few minutes just by setting up some products and your payment processing details. WooCommerce is very helpful that way.

But this isn’t a post extolling the virtues of WooCommerce. Instead, let’s look at how we can customize parts of it. Specifically, I want to look at the cart. WooCommerce is super extensible in the sense that it provides a ton of filters and actions that can be hooked into, plus a way to override the templates in a WordPress theme. The problem is, those take at least some intermediate-level dev chops which may not be feasible for some folks. And, at least in my experience, the cart page tends to be the most difficult to grok and customize.

Let’s look at how to change the WooCommerce cart page by implementing a different layout. This is how a standard default cart page looks:

We’ll go for something like this instead:

Here’s what’s different:

  • We’re adopting a two-column layout instead of the single full-width layout of the default template. This allows us to bring the “Cart totals” element up top so it is more visible on larger screens.
  • We’re adding some reassurance for customers by including benefits below the list of products in the cart. This reminds the customer of the value they’re getting with their purchase, like free shipping, easy exchanges, customer support and security.
  • We’re including a list of frequently asked questions beneath the list of products in an accordion format. This helps the customer get answers to questions about their purchase without having to leave the page and contact support.

This tutorial assumes that you have access to your theme files. If you don’t feel comfortable logging in to your hosting server and going to the file manager, I would suggest you install the plugin WP File Manager. With just the free version, you can accomplish everything explained here.

First, let’s roll our own template

One of the many benefits of WooCommerce is that it gives us pre-designed and coded templates to work with. The problem is that those template files are located in the plugin folder. And if the plugin updates in the future (which it most certainly will), any changes we make to the template will get lost. Since directly editing plugin files is a big ol’ no-no in WordPress, WooCommerce lets us modify the files by making copies of them that go in the theme folder.

It’s a good idea to use a child theme when making these sorts of changes, especially if you are using a third-party theme. That way, any changes made to the theme folder aren’t lost when theme updates are released.

To do this, we first have to locate the template we want to customize. That means going into the site’s root directory (or wherever you keep your site files if working locally, which is a great idea) and opening up the /wp-content directory where WordPress is installed. There are several folders in there, one of which is /plugins. Open that one up and then hop over to the /woocommerce folder. That’s the main directory for all-things-WooCommerce. We want the cart.php file, which is located at:

/wp-content/plugins/woocommerce/templates/cart/cart.php

Let’s open up that file in a code editor. One of the first things you’ll notice is a series of comments on top of the file:

/**
 * Cart Page
 *
 * This template can be overridden by copying it to yourtheme/woocommerce/cart/cart.php.
 *
 * HOWEVER, on occasion WooCommerce will need to update template files and you
 * (the theme developer) will need to copy the new files to your theme to
 * maintain compatibility. We try to do this as little as possible, but it does
 * happen. When this occurs the version of the template file will be bumped and
 * the readme will list any important changes.
 *
 * @see     https://docs.woocommerce.com/document/template-structure/
 * @package WooCommerce/Templates
 * @version 3.8.0
 */

That line about overriding is exactly what we’re looking for — instructions on how to override the file! How kind of the WooCommerce team to note that up front for us.

Let’s make a copy of that file and create the file path they suggest in the theme:

/wp-content/themes/[your-theme]/woocommerce/cart/cart.php

Drop the copied file there and we’re good to start working on it.

Next, let’s add our own markup

The first two things we can tackle are the benefits and frequently asked questions we identified earlier. We want to add those to the template.

Where does our markup go? Well, to make the layout look the way we laid it out at the beginning of this post, we can start below the cart’s closing </table> tag, like this:

</table>

<!-- Custom code here -->

<?php do_action( 'woocommerce_after_cart_table' ); ?>

We won’t cover the specific HTML that makes up these elements. The important thing is knowing where that markup goes.

Once we’ve done that, we should end up with something like this:

Now we have all the elements we want on the page. All that’s left is to style things up so we have the two-column layout.

Alright, now we can override the CSS

We could’ve added more markup to the template to create two separate columns. But the existing markup is already organized nicely in a way that lets us accomplish what we want with CSS… thanks to flexbox!

The first step involves making the .woocommerce element a flex container. It’s the element that contains all our other elements, so it makes for a good parent. To make sure we’re only modifying it in the cart and not on other pages (because other templates do indeed use this class), we should scope the styles to the cart page class, which WooCommerce also readily makes available.

.woocommerce-cart .woocommerce {
  display: flex;
}

These styles can go directly in your theme’s style.css file. That’s what WooCommerce suggests. Remember, though, that there are plenty of ways to customize styles in WordPress, some safer and more maintainable than others.

We have two child elements in the .woocommerce element, perfect for our two-column layout: .woocommerce-cart-form and .cart-collaterals. The CSS we need to split things up winds up looking something like this:


/* The table containing the list of products and our custom elements */
.woocommerce-cart .woocommerce-cart-form {
  flex: 1 0 70%; /* 100% at small screens; 70% on larger screens */
  margin-right: 30px;
}

/* The element that contains the cart totals */
.woocommerce-cart .cart-collaterals {
  flex: 1 0 30%; /* 100% at small screens; 30% on larger screens */
  margin-left: 30px;
}

/* Some minor tweak to make sure the cart totals fill the space */
.woocommerce-cart .cart-collaterals .cart_totals {
  width: 100%;
  padding: 0 20px 70px;
}

That gives us a pretty clean layout:

It looks more like Amazon’s cart page and other popular e-commerce stores, which is not at all a bad thing.

Best practice: Make the most important elements stand out

One of the problems I have with WooCommerce’s default designs is that all the buttons are designed the same way. They’re all the same size and same background color.

Look at all that blue!

There is no visual hierarchy for the actions users should take and, as such, it’s tough to distinguish, say, updating the cart from proceeding to checkout. The next thing we ought to do is make that distinction clearer by changing the background colors of the buttons. For that, we write the following CSS:

/* The "Apply Coupon" button */
.button[name="apply_coupon"] {
  background-color: transparent;
  color: #13aff0;
}
/* Fill the "Apply Coupon" button background color and underline it on hover */
.button[name="apply_coupon"]:hover {
  background-color: transparent;
  text-decoration: underline;
}


/* The "Update Cart" button */
.button[name="update_cart"] {
  background-color: #e2e2e2;
  color: #13aff0;
}
/* Brighten up the button on hover */
.button[name="update_cart"]:hover {
  filter: brightness(115%);
}

This way, we create the following hierarchy: 

  1. The “Proceed to checkout” button is pretty much left as-is, with the default blue background color, to make it stand out as the most important action in the cart.
  2. The “Update cart” button gets a grey background that blends in with the white background of the page. This de-prioritizes it.
  3. The “Apply coupon” button is less a button and more of a text link, making it the least important action of the bunch.

The full CSS that you have to add to make this design is here:

@media(min-width: 1100px) {
  .woocommerce-cart .woocommerce {
    display: flex;
  }
  .woocommerce-cart .woocommerce-cart-form {
    flex: 1 0 70%;
    margin-right: 30px;
  }    
  .woocommerce-cart .cart-collaterals {
    flex: 1 0 30%;
    margin-left: 30px;
  }
}


.button[name="apply_coupon"] {
  background-color: transparent;
  color: #13aff0;
}


.button[name="apply_coupon"]:hover {
  text-decoration: underline;
  background-color: transparent;
  color: #13aff0;
}


.button[name="update_cart"] {
  background-color: #e2e2e2;
  color: #13aff0;
}


.button[name="update_cart"]:hover {
  background-color: #e2e2e2;
  color: #13aff0;
  filter: brightness(115%);
}

That’s a wrap!

Not too bad, right? It’s nice that WooCommerce makes itself so extensible, but without some general guidance, it might be tough to know just how much leeway you have to customize things. In this case, we saw how we can override the plugin’s cart template in a theme directory to protect our changes from future plugin updates, and how we can override styles in our own stylesheet. We could have also looked at using WooCommerce hooks, the WooCommerce API, or even using WooCommerce conditions to streamline customizations, but perhaps those are good for another post at another time.

In the meantime, have fun customizing the e-commerce experience on your WordPress site and feel free to spend a little time in the WooCommerce docs — there are lots of goodies in there, including pre-made snippets for all sorts of things.





Data-driven JAMstack with Sourcebit | Stackbit


Sourcebit is a new open source project that aims to make it easy for developers to connect their JAMstack site to data coming from a broad range of sources. In this tutorial, we explore how to use it.

If I want to make a cake, I need the right ingredients – eggs, sugar and flour. However, eggs, sugar and flour are not a cake. It takes putting those ingredients together in a particular way, using the right recipe and tools, to create a cake.

Similarly, a headless CMS, a static site generator and a continuous deployment service are typical ingredients in a JAMstack site. The JAMstack also requires putting those ingredients together in a particular way to create a site, but, in many cases, developers have been left to accomplish this without a recipe or tools. For example, how do I connect my content and assets in Contentful to my Hugo site? Or how about pulling my Sanity content into my Jekyll site?

Sourcebit is a new, MIT-licensed open source project that solves this problem by giving you both the tools and the “recipe” for building a JAMstack site that is driven by your data. In this article, I’ll go into detail about what Sourcebit is, why it is necessary and how to get started using it. It’s worth noting that Sourcebit is completely customizable and extensible via plugins, and a future post will cover those topics in more detail.

Introducing Sourcebit


Sourcebit is a new open source project that aims to make it easy for developers to connect their JAMstack site to data coming from a broad range of sources. It does this by abstracting the steps for consuming data from any source:

  • Pulling the data and assets from the source;
  • Transforming that data, if needed;
  • Making the data accessible where it is needed by the static site generator. This can be as files or by calling the Sourcebit module from within the site’s code.

Within Sourcebit, each of these three steps is represented by a plugin type: a source, a transformation, and a target.

So, let’s take our example from above where we want to pull content and assets from Contentful and use them locally in a Hugo site. The source plugin would be for Contentful, the transformation plugin would handle pulling assets and modifying content with the appropriate local URLs and, finally, the target plugin would be Hugo. Sourcebit will pull your content from Contentful, place it into the appropriate location in your Hugo project and then you can continue through the build and deployment process.

The best part is, Sourcebit is designed to walk you through the process of setting all that up – there’s no need to hand-code a complicated JSON or YAML configuration.

As part of the initial release, Sourcebit has pre-built plugins for Contentful and Sanity as headless CMS sources, Hugo and Jekyll as static site generators and an asset plugin to pull assets locally and transform the references in your content.

Example Site

In order to walk us through how Sourcebit works, I’ve created an example project that uses Sourcebit to populate content and assets that are pulled from Contentful. The site is intended to emulate a “fan page” for the video game Control (great game – highly recommended!). Here are some details:

You can find the source code for the project at github.com/remotesynth/control-fan-page. You can see what the finished project with the populated content looks like at control-fan-page-demo.netlify.com.

The finished project using Sourcebit

Getting Started with Sourcebit

Let’s walk through an example of how you can use Sourcebit in your project.

Interactive Command Line Configuration

Sourcebit has an interactive set up process that will generate the configuration needed to connect a data source to a local project. This makes it really easy to get started – just enter the following command into your terminal:

npx create-sourcebit

Sourcebit will start by asking which of the available source plugins (currently Contentful, Sanity or a mock data plugin) you would like to connect. Navigate with the arrow keys and press the spacebar on the ones you’d like to choose.

choosing a source

Next, optionally select from one of the available transformation plugins. Currently, only the assets plugin is available, which will pull assets from the source locally and replace the URLs in the content with the appropriate local URL.

choosing a transformation

Finally, select the target plugin from the available options (Jekyll or Hugo as of this writing).

choosing a target

Once the choices are made, Sourcebit will retrieve and install the necessary plugins for you and begin walking you through the steps to configure each.

Configuring the Contentful Source Plugin

First up, I need to configure the Contentful plugin. Sourcebit begins by asking for a personal access token so that it can be configured to have access to the content. You can get a Contentful personal access token here. Then it will ask which space you are working with, listing the available options. It does the same for environments; however, in my case I only have one environment, so it smartly skips that step.

Contentful options

Configuring the Asset Transformation Plugin

Next, I need to set up the assets transformation plugin. The first thing is to specify the folder within the site where assets will be saved. Sourcebit lists some common answers, or I can specify my own, which is what I choose because Hugo prefers static assets to be under the /static directory. I’ll enter static/images as the directory.

Asset options

The next question is what the relative URL to the assets will be. Sourcebit assumes the same value as the downloaded assets, but in this scenario I want to enter just /images.

Configuring the Hugo Target Plugin

It is time for me to configure how Sourcebit handles the Contentful data and saves it for Hugo. Sourcebit sees the content models from Contentful and asks me if they should be saved as pages (i.e. Markdown files), Data (i.e. JSON or YAML files) or if they should be skipped. Both of my data models represent pages.

Hugo destination options

The next step will ask a series of questions about each content model. Each step shows actual source data examples to help guide you in making the proper selections. The steps for Hugo are:

  • Whether it represents a single page or a series of pages. In my case, “About” is a single page but “Blog Post” is a series of pages.
  • Which directory the content files will be placed in. For the “About” page, it simply goes in content but blog posts will go in content/posts.
  • How I want to generate the file name. In both cases, I chose to use the content’s title field to generate the file name.
  • In the case of a collection of posts, it asks if I want to append the date to the file name. I choose no.
  • How to generate the value for the layout frontmatter field. I specify a static value for both, which is page for “About” and post for blog posts.
  • Lastly, I select which field represents the page’s content. In both of my cases, that is “body”.

You can see the entire process of configuring Hugo that I described in the short video below.

When everything is done, Sourcebit generates a Sourcebit.js file. This is the configuration I need to run the content pull using Sourcebit. Any sensitive information, such as the Contentful personal access token, is placed in a .env file – as such, this file should not be checked into a public repository. The final step is to run npm install to install the necessary dependencies.

Advanced Configuration

Since Sourcebit stores all of its configuration in a JavaScript file (Sourcebit.js), it allows for all kinds of additional advanced customization of its functionality using your existing code skills. For example, I could modify the writeFile function that outputs the final file content to disk and run additional code to tweak the body content before passing it on.
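
Here’s a rough, hypothetical sketch of what that kind of tweak could look like. The plugin module names, option names, and the writeFile signature below are assumptions for illustration only; check your generated Sourcebit.js and the plugin docs for the real shape:

// Sourcebit.js (sketch): a plugins array wired up by the interactive setup.
module.exports = {
  plugins: [
    {
      module: require('sourcebit-source-contentful'),
      options: {
        // ...options generated by create-sourcebit (the token lives in .env)
      }
    },
    {
      module: require('sourcebit-target-hugo'),
      options: {
        // Hypothetical hook: tweak the body content before each file is written.
        writeFile: (entry, utils) => {
          // ...modify the entry's content here, then write it to disk
        }
      }
    }
  ]
};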

Basic Usage

Now that everything is configured I can use Sourcebit to populate my site. Below is a screenshot of my site before running Sourcebit. It has no content.

Unpopulated site

However, I simply need to run sourcebit fetch and all of my Contentful content gets pulled and placed into the proper location as shown below.

Note: Using sourcebit fetch in this manner requires that you have your local npm modules on your PATH. If you do not, you will need to run ./node_modules/.bin/sourcebit fetch or follow the instructions here to update your PATH.

Sourcebit can also watch for any changes in the source (in this case, Contentful) by appending the --watch flag and immediately pull them. In the video below, I make a change and publish it to Contentful and seconds later my local site auto-refreshes with the update.

Adding Sourcebit Into Your Netlify Workflow

If you use Netlify Dev for your local development workflow, it’s easy to incorporate Sourcebit into it.

  1. Update your package.json scripts to include a script to run Sourcebit prior to running the command to serve your project using the local webserver:

    "scripts": {
    "serve": "(sourcebit fetch --watch) & hugo serve"
    }
    
  2. Modify the netlify.toml file in your project to add a custom build command:

    [dev]
    command = "npm run serve"
    

Now whenever you run netlify dev, it will pull all the content prior to serving it and set Sourcebit up to watch for content changes during development. When you are ready, the files can be pushed to your Git repository and the site will go live on Netlify.

Next Steps

Obviously, there is a lot more you can do with Sourcebit. You could use it in a similar manner with the Sanity source plugin or Jekyll target plugin. You can also use Sourcebit to configure sources and then call it as a CommonJS module from within your site code – this can be useful for incorporating it into frameworks like Next.js, for instance. You can also write your own source, transformation or destination plugins – there’s even a sample plugin to guide you. If you create one, be sure to share it with us!

Please check out the Sourcebit repository for more documentation. And when you give Sourcebit a try, please let us know what you think.





Creating a Workout graph with frontend tech : web_design


Mods, please delete if this isn’t allowed. I’ve been looking everywhere to see how I could possibly make a workout graph for a site I am trying to create, but I have not been able to find anything that will work.

These workouts would be for running, cycling, swimming, etc., and the intervals within them. I have tried the approach of a bar graph, but it seems like the bars are almost always the same width. What might be the best strategy to create something like this within the browser? Angular or vanilla JS, or a framework/package of some other type would be preferred, but I am completely willing and ready to learn anything else to make something like this.

At this link is a pretty good example of what I’m trying to do, where I can select workout intensity and duration and it is displayed in the browser. Does anyone have any ideas of how this is created or what I can work on to try and make something similar myself? (Note: I am not at all trying to copy this site. That site is for building workouts for a virtual cycling platform; I am trying to make workouts for the sake of viewing the intensities over time, but it has a simple free creator to show what I mean.)

Here’s an image from a different workout creation site:


So far I’ve poked around at Google Charts and Chart.js, but it seems like neither of these will have the functionality for something like this. Going with a more traditional approach, I imagine this could be done with HTML canvas. Am I on the right track?





When CSS Blocks – Web Performance Consulting


The other day I was auditing a site and ran into a pattern that I’ve seen with a few different clients now. The pattern itself is no longer recommended, but it’s a helpful illustration of why it’s important to be careful about how you use preload as well as a useful, real-world demonstration of how the order of your document can have a significant impact on performance (something Harry Roberts has done an outstanding job of detailing).

I’m a big fan of the Filament Group—they churn out an absurd amount of high-quality work, and they are constantly creating invaluable resources and giving them away for the betterment of the web. One of those great resources is their loadCSS project, which, for the longest time, was the way I recommended folks load their non-critical CSS.

While that’s changed (and Filament Group wrote up a great post about what they prefer to do nowadays), I still find it often used in production on sites I audit.

One particular pattern I’ve seen is the preload/polyfill pattern. With this approach, you load any stylesheets as preloads instead, and then use their onload events to change them back to a stylesheet once the browser has them ready. It looks something like this:

<link rel="preload" href="path/to/mystylesheet.css" as="style" onload="this.rel='stylesheet'">
<noscript><link rel="stylesheet" href="path/to/mystylesheet.css"></noscript>

Since not every browser supports preload, the loadCSS project provides a helpful polyfill for you to add after you’ve declared your links, like so:

<link rel="preload" href="path/to/mystylesheet.css" as="style" onload="this.rel='stylesheet'">
<noscript>
    <link rel="stylesheet" href="path/to/mystylesheet.css">
</noscript>
<script>
/*! loadCSS rel=preload polyfill. [c]2017 Filament Group, Inc. MIT License */
(function(){ ... }());
</script>

Network Priorities Out of Whack

I’ve never been super excited about this pattern. Preload is a bit of a blunt instrument—whatever you apply to it is gonna jump way up in line to be downloaded. The use of preload means that these stylesheets, which you’re presumably making asynchronous because they aren’t very critical to page display, are given a very high priority by browsers.

The following image from a WebPageTest run shows the issue pretty well. Lines 3-6 are CSS files that are being loaded asynchronously using the preload pattern. But, while developers have flagged them as not important enough to block rendering, the use of preload means they are arriving before the remaining resources.

Blocking the HTML parser

The network priority issues are enough of a reason to avoid this pattern in most situations. But in this case, the issues were compounded by the presence of another stylesheet being loaded externally.

<link rel="stylesheet" href="path/to/main.css" />
<link rel="preload" href="path/to/mystylesheet.css" as="style" onload="this.rel='stylesheet'">
<noscript>
    <link rel="stylesheet" href="path/to/mystylesheet.css">
</noscript>
<script>
/*! loadCSS rel=preload polyfill. [c]2017 Filament Group, Inc. MIT License */
(function(){ ... }());
</script>

You still have the same issue of the preload giving these non-critical stylesheets a high priority, but just as critical, and perhaps a bit less obvious, is the impact this has on the browser’s ability to parse the page.

Again, Harry has already written about what happens here in great detail, so I recommend reading through that to better understand what’s happening. But here’s the short version.

Typically, a stylesheet blocks the page from rendering. The browser has to request and parse it to be able to display the page. It does not, however, stop the browser from parsing the rest of the HTML.

Scripts, on the other hand, do block the parser unless they are marked as defer or async.

Since the browser has to assume that a script could potentially manipulate either the page itself or the styles that apply to the page, it has to be careful about when that script executes. If it knows that it’s still requesting some CSS, it will wait until that CSS has arrived before the script itself gets run. And, since it can’t continue parsing the document until the script has run, that means that stylesheet is no longer just blocking rendering—it’s preventing the browser from parsing the HTML.

This blocking behavior is true for external scripts, but also inline script elements. If CSS is still being downloaded, inline scripts won’t run until that CSS arrives.

Seeing the problem

The clearest way I’ve found to visualize this is to look at Chrome’s developer tools (gosh, I love how great our tools have gotten).

In Chrome, you can use the Performance panel to capture a profile of the page load. (I recommend using a throttled network setting to help make the issue even more apparent.)

For this page, I ran a test using a Fast 3G setting. Zooming in on the main thread activity, you can see that the request for the CSS file occurs during the first chunk of HTML parsing (around 1.7s into the page load process).

For the next second or so, the main thread goes quiet. There are some tiny bits of activity—load events firing on the preloaded stylesheets, more requests being sent by the browser’s preloader—but the browser has stopped parsing the HTML entirely.

Around 2.8s, the stylesheet arrives, and the browser parses it. Only then do we see the inline script get evaluated, followed by the browser finally moving on with parsing the HTML.

The Firefox Exception

This blocking behavior is true of Chrome, Edge, and Safari. The one exception of note is Firefox.

Every other browser pauses HTML parsing but uses a lookahead parser (preloader) to scan for external resources and make requests for them. Firefox, however, takes it one step further: they’ll speculatively build the DOM tree even though they’re waiting on script execution.

As long as the script doesn’t manipulate the DOM and cause them to throw that speculative parsing work out, it lets Firefox get a head start. Of course, if they do have to throw it out, then that speculative work accomplishes nothing.

It’s an interesting approach, and I’m super curious about how effective it is. Right now, however, there’s no visibility into this in Firefox’s performance profiler. You can’t see this speculative parsing work, whether that work had to be redone and, if so, what the performance cost was.

I chatted with the fine folks working on their developer tools, though, and they had some exciting ideas for how they might be able to surface that information in the future—fingers crossed!

Fixing the issue

In this client’s case, the first step to fixing this issue was pretty straightforward: ditch the preload/polyfill pattern. Preloading non-critical CSS kind of defeats the purpose and switching to using a print stylesheet instead of a preload, as Filament Group themselves now recommend, allows us to remove the polyfill entirely.

<link rel="stylesheet" href="/path/to/my.css" media="print" onload="this.media='all'">

That already puts us in a better state: the network priorities now line up much better with the actual importance of the assets being downloaded, and we’ve eliminated that inline script block.

In this case, there was still one more inline script in the head of the document after the CSS was requested. Moving that script ahead of the stylesheet in the DOM eliminated the parser blocking behavior. Looking at the Chrome Performance panel again, the difference is clear.

Whereas before it was stopped at line 1939 waiting for the CSS to load, it now parses through line 5281, where another inline script occurs at the end of the page, once again stopping the parser.

This is a quick fix, but it’s also not the one that will be the final solution. Switching the order and ditching the preload/polyfill pattern is just the first step. Our biggest gain here will come from inlining the critical CSS instead of referencing it in an external file (the preload/polyfill pattern is intended to be used alongside inline CSS). That lets us ignore the script related issues altogether and ensures that the browser has all the CSS it needs to render the page in that first network request.

For now, though, we can get a nice performance boost through a minor change to the way we load CSS and the DOM order.

Long story short:

  • If you’re using loadCSS with the preload/polyfill pattern, switch to the print stylesheet pattern instead.
  • If you have any external stylesheets that you’re loading normally (that is, as a regular stylesheet link), move any and all inline scripts that you can above it in the markup.
  • Inline your critical CSS for the fastest possible start render times.






Instant GraphQL Backend with Fine-grained Security Using FaunaDB


GraphQL is becoming popular, and developers are constantly looking for frameworks that make it easy to set up a fast, secure and scalable GraphQL API. In this article, we will learn how to create a scalable and fast GraphQL API with authentication and fine-grained data-access control (authorization). As an example, we’ll build an API with register and login functionality. The API will be about users and confidential files, so we’ll define advanced authorization rules that specify whether a logged-in user can access certain files.

By using FaunaDB’s native GraphQL and security layer, we receive all the necessary tools to set up such an API in minutes. FaunaDB has a free tier so you can easily follow along by creating an account at https://dashboard.fauna.com/. Since FaunaDB automatically provides the necessary indexes and translates each GraphQL query to one FaunaDB query, your API is also as fast as it can be (no n+1 problems!).

Setting up the API is simple: drop in a schema and we are ready to start. So let’s get started!  

We need an example use-case that demonstrates how security and GraphQL API features can work together. In this example, there are users and files. Some files can be accessed by all users, and some are only meant to be accessed by managers. The following GraphQL schema will define our model:

type User {
  username: String! @unique
  role: UserRole!
}

enum UserRole {
  MANAGER
  EMPLOYEE
}

type File {
  content: String!
  confidential: Boolean!
}

input CreateUserInput {
  username: String!
  password: String!
  role: UserRole!
}

input LoginUserInput {
  username: String!
  password: String!
}

type Query {
  allFiles: [File!]!
}

type Mutation {
  createUser(input: CreateUserInput): User! @resolver(name: "create_user")
  loginUser(input: LoginUserInput): String! @resolver(name: "login_user")
}

When looking at the schema, you might notice that the createUser and loginUser Mutation fields have been annotated with a special directive named @resolver. This is a directive provided by the FaunaDB GraphQL API, which allows us to define a custom behavior for a given Query or Mutation field. Since we’ll be using FaunaDB’s built-in authentication mechanisms, we will need to define this logic in FaunaDB after we import the schema. 

Importing the schema

First, let’s import the example schema into a new database. Log into the FaunaDB Cloud Console with your credentials. If you don’t have an account yet, you can sign up for free in a few seconds.

Once logged in, click the “New Database” button from the home page.

Choose a name for the new database, and click the “Save” button.

Next, we will import the GraphQL schema listed above into the database we just created. To do so, create a file named schema.gql containing the schema definition. Then, select the GRAPHQL tab from the left sidebar, click the “Import Schema” button, and select the newly-created file.

The import process creates all of the necessary database elements, including collections and indexes, to back all of the types defined in the schema. It automatically creates everything your GraphQL API needs to run efficiently.

You now have a fully functional GraphQL API which you can start testing out in the GraphQL playground. But we do not have data yet. More specifically, we would like to create some users to start testing our GraphQL API. However, since users will be part of our authentication, they are special: they have credentials and can be impersonated. Let’s see how we can create some users with secure credentials!

Custom resolvers for authentication

Remember the createUser and loginUser mutation fields that were annotated with the special @resolver directive? createUser is exactly what we need to start creating users; however, the schema did not really define how a user needs to be created. Instead, it was tagged with the @resolver directive.

By tagging a specific mutation with a custom resolver such as @resolver(name: "create_user"), we are informing FaunaDB that this mutation is not implemented yet but will be implemented by a User-defined function (UDF). Since our GraphQL schema does not know how to express this, the import process will only create a function template, which we still have to fill in.

A UDF is a custom FaunaDB function, similar to a stored procedure, that enables users to define a tailor-made operation in Fauna’s Query Language (FQL). This function is then used as the resolver of the annotated field. 

We need a custom resolver because we are going to take advantage of FaunaDB’s built-in authentication capabilities, which cannot be expressed in standard GraphQL. FaunaDB allows you to set a password on any database entity. This password can then be used to impersonate that database entity with the Login function, which returns a token with certain permissions. The permissions that this token holds depend on the access rules that we will write.

Let’s continue to implement the UDF for the createUser field resolver so that we can create some test users. First, select the Shell tab from the left sidebar.

As explained before, a template UDF has already been created during the import process. When called, this template UDF prints an error message stating that it needs to be updated with a proper implementation. In order to update it with the intended behavior, we are going to use FQL’s Update function.

So, let’s copy the following FQL query into the web-based shell, and click the “Run Query” button:

Update(Function("create_user"), {
  "body": Query(
    Lambda(["input"],
      Create(Collection("User"), {
        data: {
          username: Select("username", Var("input")),
          role: Select("role", Var("input")),
        },
        credentials: {
          password: Select("password", Var("input"))
        }
      })  
    )
  )
});


The create_user UDF will be in charge of properly creating a User document along with a password value. The password is stored in the document within a special object named credentials that is encrypted and cannot be retrieved by any FQL function. As a result, the password is securely saved in the database, making it impossible to read from either the FQL or the GraphQL APIs. The password will be used later for authenticating a User through a dedicated FQL function named Login, as explained next.
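As an aside, the UDF can also be exercised directly from application code through the FaunaDB JavaScript driver, without going through GraphQL at all. Below is a minimal sketch of that idea; the faunadb npm package and its Call/Function helpers are real, but the FAUNA_SECRET variable and the test values are assumptions for illustration:

import faunadb from "faunadb";

const q = faunadb.query;

// Assumption: a server secret for this database is available in FAUNA_SECRET.
const client = new faunadb.Client({ secret: process.env.FAUNA_SECRET! });

async function createTestUser() {
  // Call the create_user UDF with the same input object the GraphQL mutation passes to it.
  const user = await client.query(
    q.Call(q.Function("create_user"), {
      username: "test.user",
      password: "s3cret",
      role: "EMPLOYEE",
    })
  );
  console.log(user);
}

createTestUser().catch(console.error);

The GraphQL mutations shown a bit further down accomplish the same thing through the API itself, which is the path this article follows.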

Now, let’s add the proper implementation for the UDF backing the loginUser field resolver through the following FQL query:

Update(Function("login_user"), {
  "body": Query(
    Lambda(["input"],
      Select(
        "secret",
        Login(
          Match(Index("unique_User_username"), Select("username", Var("input"))), 
          { password: Select("password", Var("input")) }
        )
      )
    )
  )
});

Copy the query listed above and paste it into the shell’s command panel, and click the “Run Query” button.

The login_user UDF will attempt to authenticate a User with the given username and password credentials. As mentioned before, it does so via the Login function. The Login function verifies that the given password matches the one stored along with the User document being authenticated. Note that the password stored in the database is not output at any point during the login process. Finally, if the credentials are valid, the login_user UDF returns an authorization token called a secret, which can be used in subsequent requests for validating the User’s identity.

With the resolvers in place, we will continue with creating some sample data. This will let us try out our use case and help us better understand how the access rules are defined later on.

Creating sample data

First, we are going to create a manager user. Select the GraphQL tab from the left sidebar, copy the following mutation into the GraphQL Playground, and click the “Play” button:

mutation CreateManagerUser {
  createUser(input: {
    username: "bill.lumbergh"
    password: "123456"
    role: MANAGER
  }) {
    username
    role
  }
}


Next, let’s create an employee user by running the following mutation through the GraphQL Playground editor:

mutation CreateEmployeeUser {
  createUser(input: {
    username: "peter.gibbons"
    password: "abcdef"
    role: EMPLOYEE
  }) {
    username
    role
  }
}


Now, let’s create a confidential file by running the following mutation:

mutation CreateConfidentialFile {
  createFile(data: {
    content: "This is a confidential file!"
    confidential: true
  }) {
    content
    confidential
  }
}


And lastly, create a public file with the following mutation:

mutation CreatePublicFile {
  createFile(data: {
    content: "This is a public file!"
    confidential: false
  }) {
    content
    confidential
  }
}


Now that all the sample data is in place, we need access rules, since this article is about securing a GraphQL API. The access rules determine how the sample data we just created can be accessed, since by default a user can only access their own user entity. In this case, we are going to implement the following access rules:

  1. Allow employee users to read public files only.
  2. Allow manager users to read both public files and, only during weekdays, confidential files.

As you might have already noticed, these access rules are highly specific. We will see, however, that the ABAC system is powerful enough to express very complex rules without getting in the way of the design of your GraphQL API.

Such access rules are not part of the GraphQL specification, so we will define them in the Fauna Query Language (FQL) and then verify that they are working as expected by executing some queries from the GraphQL API.

But what is this “ABAC” system that we just mentioned? What does it stand for, and what can it do?

What is ABAC?

ABAC stands for Attribute-Based Access Control. As its name indicates, it’s an authorization model that establishes access policies based on attributes. In simple terms, it means that you can write security rules that involve any of the attributes of your data. If our data contains users, we could use the role, department, and clearance level to grant or refuse access to specific data. Or we could use the current time, day of the week, or location of the user to decide whether they can access a specific resource.

In essence, ABAC allows the definition of fine-grained access control policies based on environmental properties and your data. Now that we know what it can do, let’s define some access rules to give you concrete examples.

Defining the access rules

In FaunaDB, access rules are defined in the form of roles. A role consists of the following data:

  • name — the name that identifies the role
  • privileges — specific actions that can be executed on specific resources
  • membership — specific identities that should have the specified privileges

Roles are created through the CreateRole FQL function, as shown in the following example snippet:

CreateRole({
  name: "role_name",
  membership: [
    // ...
  ],
  privileges: [
    // ...
  ]
})

You can see two important concepts in this role: membership and privileges. Membership defines who receives the privileges of the role, and privileges defines what those permissions are. Let’s write a simple example rule to start with: “Any user can read all files.”

Since the rule applies to all users, we would define the membership like this:

membership: {
  resource: Collection("User")
}

Simple, right? We then continue by defining the “Can read all files” privilege for all of these users.

privileges: [
  {
    resource: Collection("File"),
    actions: { read: true }
  }
]

The direct effect of this is that any token you receive by logging in a user via our loginUser GraphQL mutation can now be used to access all files.

This is the simplest rule that we can write, but in our example we want to limit access to some confidential files. To do that, we can replace the {read: true} syntax with a function. Since we have defined that the resource of the privilege is the “File” collection, this function will take each file that would be accessed by a query as its first parameter. You can then write rules such as: “A user can only access a file if it is not confidential”. In FaunaDB’s FQL, such a function is written by using Query(Lambda("x", … <logic that uses Var("x")>)).

Below is the privilege that would only provide read access to non-confidential files: 

privileges: [
  {
    resource: Collection("File"),
    actions: {
      // Read and establish rule based on action attribute
      read: Query(
        // Read and establish rule based on resource attribute
        Lambda("fileRef",
          Not(Select(["data", "confidential"], Get(Var("fileRef"))))
        )
      )
    }
  }
]

This directly uses properties of the “File” resource we are trying to access. Since it’s just a function, we could also take into account environmental properties like the current time. For example, let’s write a rule that only allows access on weekdays. 

privileges: [
    {
      resource: Collection("File"),
      actions: {
        read: Query(
          Lambda("fileRef",
            Let(
              {
                dayOfWeek: DayOfWeek(Now())
              },
              And(GTE(Var("dayOfWeek"), 1), LTE(Var("dayOfWeek"), 5))  
            )
          )
        )
      }
    }
]

As mentioned in our rules, confidential files should only be accessible by managers. Managers are also users, so we need a rule that applies to a specific segment of our collection of users. Luckily, we can also define the membership as a function; for example, the following Lambda only considers users who have the MANAGER role to be part of the role membership. 

membership: {
  resource: Collection("User"),
  // Read and establish rule based on user attribute
  predicate: Query(
    Lambda("userRef",
      Equals(Select(["data", "role"], Get(Var("userRef"))), "MANAGER")
    )
  )
}

In sum, FaunaDB roles are very flexible entities that allow defining access rules based on all of the system elements’ attributes, with different levels of granularity. The place where the rules are defined — privileges or membership — determines their granularity and the attributes that are available, and will differ with each particular use case.

Now that we have covered the basics of how roles work, let’s continue by creating the access rules for our example use case!

In order to keep things neat and tidy, we’re going to create two roles: one for each of the access rules. This will allow us to extend the roles with further rules in an organized way if required later. Nonetheless, be aware that all of the rules could also have been defined together within just one role if needed.

Let’s implement the first rule: 

“Allow employee users to read public files only.”

In order to create a role meeting these conditions, we are going to use the following query:

CreateRole({
  name: "employee_role",
  membership: {
    resource: Collection("User"),
    predicate: Query( 
      Lambda("userRef",
        // User attribute based rule:
        // It grants access only if the User has EMPLOYEE role.
        // If so, further rules specified in the privileges
        // section are applied next.        
        Equals(Select(["data", "role"], Get(Var("userRef"))), "EMPLOYEE")
      )
    )
  },
  privileges: [
    {
      // Note: 'allFiles' Index is used to retrieve the 
      // documents from the File collection. Therefore, 
      // read access to the Index is required here as well.
      resource: Index("allFiles"),
      actions: { read: true } 
    },
    {
      resource: Collection("File"),
      actions: {
        // Action attribute based rule:
        // It grants read access to the File collection.
        read: Query(
          Lambda("fileRef",
            Let(
              {
                file: Get(Var("fileRef")),
              },
              // Resource attribute based rule:
              // It grants access to public files only.
              Not(Select(["data", "confidential"], Var("file")))
            )
          )
        )
      }
    }
  ]
})

Select the Shell tab from the left sidebar, copy the above query into the command panel, and click the “Run Query” button.

Next, let’s implement the second access rule:

“Allow manager users to read both public files and, only during weekdays, confidential files.”

In this case, we are going to use the following query:

CreateRole({
  name: "manager_role",
  membership: {
    resource: Collection("User"),
    predicate: Query(
      Lambda("userRef", 
        // User attribute based rule:
        // It grants access only if the User has MANAGER role.
        // If so, further rules specified in the privileges
        // section are applied next.
        Equals(Select(["data", "role"], Get(Var("userRef"))), "MANAGER")
      )
    )
  },
  privileges: [
    {
      // Note: 'allFiles' Index is used to retrieve
      // documents from the File collection. Therefore, 
      // read access to the Index is required here as well.
      resource: Index("allFiles"),
      actions: { read: true } 
    },
    {
      resource: Collection("File"),
      actions: {
        // Action attribute based rule:
        // It grants read access to the File collection.
        read: Query(
          Lambda("fileRef",
            Let(
              {
                file: Get(Var("fileRef")),
                dayOfWeek: DayOfWeek(Now())
              },
              Or(
                // Resource attribute based rule:
                // It grants access to public files.
                Not(Select(["data", "confidential"], Var("file"))),
                // Resource and environmental attribute based rule:
                // It grants access to confidential files only on weekdays.
                And(
                  Select(["data", "confidential"], Var("file")),
                  And(GTE(Var("dayOfWeek"), 1), LTE(Var("dayOfWeek"), 5))  
                )
              )
            )
          )
        )
      }
    }
  ]
})

Copy the query into the command panel, and click the “Run Query” button.

At this point, we have created all of the necessary elements for implementing and trying out our example use case! Let’s continue with verifying that the access rules we just created are working as expected…

Putting everything in action

Let’s start by checking the first rule: 

“Allow employee users to read public files only.”

The first thing we need to do is log in as an employee user so that we can verify which files can be read on their behalf. In order to do so, execute the following mutation from the GraphQL Playground console:

mutation LoginEmployeeUser {
  loginUser(input: {
    username: "peter.gibbons"
    password: "abcdef"
  })
}

As a response, you should get a secret access token, which indicates that the user has been authenticated successfully.

At this point, it’s important to remember that the access rules we defined earlier are not directly associated with the secret that is generated as a result of the login process. Unlike in other authorization models, the secret token does not contain any authorization information on its own; it is simply an authenticated reference to a given document.

As explained before, access rules are stored in roles, and roles are associated with documents through their membership configuration. After authentication, the secret token can be used in subsequent requests to prove the caller’s identity and determine which roles are associated with it. This means that access rules are effectively verified in every subsequent request and not only during authentication. This model enables us to modify access rules dynamically without requiring users to authenticate again.

Now, we will use the secret issued in the previous step to validate the identity of the caller in our next query. In order to do so, we need to include the secret as a Bearer Token as part of the request. To achieve this, we have to modify the Authorization header value set by the GraphQL Playground. Since we don’t want to lose the admin secret that is used by default, we’re going to do this in a new tab.

Click the plus (+) button to create a new tab, and select the HTTP HEADERS panel on the bottom left corner of the GraphQL Playground editor. Then, modify the value of the Authorization header to include the secret obtained earlier, as shown in the following example. Make sure to change the scheme value from Basic to Bearer as well:

{
  "authorization": "Bearer fnEDdByZ5JACFANyg5uLcAISAtUY6TKlIIb2JnZhkjU-SWEaino"
}

With the secret properly set in the request, let’s try to read all of the files on behalf of the employee user. Run the following query from the GraphQL Playground: 

query ReadFiles {
  allFiles {
    data {
      content
      confidential
    }
  }
}

In the response, you should see the public file only.

Since the role we defined for employee users does not allow them to read confidential files, they have been correctly filtered out from the response!
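The same check does not have to happen inside the Playground: any HTTP client can send the secret as a Bearer token. Below is a minimal sketch using fetch against FaunaDB’s GraphQL endpoint; the endpoint URL reflects the default at the time of writing, and the EMPLOYEE_SECRET environment variable is an assumption for illustration:

// Assumptions: Node 18+ (built-in fetch) and the employee's secret stored in EMPLOYEE_SECRET.
const FAUNA_GRAPHQL_URL = "https://graphql.fauna.com/graphql";

async function readFilesAs(secret: string) {
  const response = await fetch(FAUNA_GRAPHQL_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // The secret identifies the caller; the ABAC roles decide what that caller may read.
      Authorization: `Bearer ${secret}`,
    },
    body: JSON.stringify({
      query: "query ReadFiles { allFiles { data { content confidential } } }",
    }),
  });
  return response.json();
}

readFilesAs(process.env.EMPLOYEE_SECRET!).then((result) =>
  console.log(JSON.stringify(result, null, 2))
);

Run with the employee secret, this should return only the public file, just as the Playground did.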

Let’s move on now to verifying our second rule:

“Allow manager users to read both public files and, only during weekdays, confidential files.”

This time, we are going to log in as the manager user. Since the login mutation requires the admin secret token, we first have to go back to the original tab containing the default authorization configuration. Once there, run the following query:

mutation LoginManagerUser {
  loginUser(input: {
    username: "bill.lumbergh"
    password: "123456"
  })
}

You should get a new secret as a response.

Copy the secret, create a new tab, and modify the Authorization header to include the secret as a Bearer Token as we did before. Then, run the following query in order to read all of the files on behalf of the manager user:

query ReadFiles {
  allFiles {
    data {
      content
      confidential
    }
  }
}

As long as you’re running this query on a weekday (if not, feel free to update the rule to include weekends), you should get both the public and the confidential files in the response.

And, finally, we have verified that all of the access rules are working successfully from the GraphQL API!

Conclusion

In this post, we have learned how a comprehensive authorization model can be implemented on top of the FaunaDB GraphQL API using FaunaDB’s built-in ABAC features. We have also reviewed ABAC’s distinctive capabilities, which allow defining complex access rules based on the attributes of each system component.

While access rules can only be defined through the FQL API at the moment, they are effectively verified for every request executed against the FaunaDB GraphQL API. Providing support for specifying access rules as part of the GraphQL schema definition is already planned for the future.

In short, FaunaDB provides a powerful mechanism for defining complex access rules on top of the GraphQL API covering most common use cases without the need for third-party services.



Source link