
Whatz the Good Word: PWA With Flask


Introduction

Though I had not planned it initially, I purchased a domain name for the cloud server that runs the two Rails applications described in my previous DZone articles (first and second).

Some months ago, I had written a word game as a Flask web application. The code was sitting on my laptop, and I decided to put it out in public. To give it a mobile app feel, I decided to make it a Progressive Web Application (PWA).

The first requirement for a PWA is that it be served over HTTPS. My cloud server runs Ubuntu, which comes with snap pre-installed, so the easiest path to HTTPS was to use snap to install the certbot utility and run certbot to install Let's Encrypt certificates and configure Nginx. But Let's Encrypt certificates can only be issued for domain names, not for IP addresses. A quick search for domain names turned up mahboob.xyz as the cheapest one available, which was perfect for hosting my side projects, so I bought it.

The commands I ran as root for enabling HTTPS are as follows:
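The original post showed these as a screenshot, so they are not reproduced in the text. Based on the description above (snap, certbot, Nginx), they were most likely along these lines; treat this as a reconstruction, not the author's exact session:

```shell
# Reconstructed from the description above; run as root.
snap install core && snap refresh core
snap install --classic certbot
ln -s /snap/bin/certbot /usr/bin/certbot
# Obtain a Let's Encrypt certificate and rewrite the Nginx config for HTTPS.
certbot --nginx -d mahboob.xyz
```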

For the IP-address-to-domain-name mapping, all I had to do was go into my domain registrar account and enter the name servers of my hosting provider. In the hosting account control panel, I went to the Domains section, and in just a few clicks the mapping was done.

PWA

As John Price explained in his article, How to Turn Your Website into a PWA, the advantages of a PWA are:

  • Offline capable
  • Installable
  • Improved performance
  • App-like experience
  • Push notifications
  • Discoverable
  • Linkable
  • Always up to date
  • Safe
  • Cost-effective

The real kicker is the last point. You don’t have to invest time and money to write any mobile application code, whether native or cross-platform. Whatever you have used for your responsive web application is good to go; with minimal changes, it gets the features and feel of a mobile application.

My application is a simple word game. The user gets a clue and has to guess the word. On a button click, they are told whether they got the answer right. If the user wants to know the answer, it is shown to them, and if they want a different word, they get a new clue.

The clues and answers are stored in a pipe-delimited flat file on the server. A couple of lines from the file are shown in the following screenshot.

Screenshot of a couple of lines from the file.

When the Flask application starts, it reads the file contents into a data array. The root route maps to the index action, which reads a random element from the array, splits it on the pipe character, and sends the first part (the clue) and the element's index to the game page. The clue is displayed as text, and the index is kept in a hidden variable.
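The startup and clue-selection logic can be sketched in plain Python; the function names here are illustrative, not taken from the actual codebase:

```python
import random

def load_data(path):
    """Read the pipe-delimited clue file into a list of lines at startup."""
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

def random_clue(data):
    """Pick a random element and return its clue (first field) plus its index.

    In the Flask app, the index view would pass both values to the game
    template, where the index lands in a hidden form field.
    """
    idx = random.randrange(len(data))
    clue = data[idx].split("|")[0]
    return clue, idx
```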

Clue display.

Buttons Functionality

  • Check: Invokes the action method check, sending the index and the answer the user entered in the text field. The action retrieves the element from the data array using the index, splits it on the pipe character, and compares the second part (the correct answer) with the user’s answer. If they are the same, the method returns “You got it right!”; otherwise it returns “Wrong Answer! Please try Again!!”. If the answer is correct, this button and the “Show Answer” button are hidden.
  • Show Answer: Invokes the action show, sending the index. The action method retrieves the element from the data array using the index, splits it on the pipe character, and sends the correct answer back to the game page. After receiving the response from the server, the game page hides the input text field and the Check and Show Answer buttons.
  • New Word Clue: This button behaves like the index action. It invokes the action new, which reads a random element from the array, splits it on the pipe character, and sends the first part (the clue) and the element’s index to the page. The answer text field is cleared, and the Check and Show Answer buttons are explicitly unhidden by calling the show method.

All the buttons make AJAX GET calls via jQuery.
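Server-side, the check and show actions described above boil down to an array lookup and a string comparison. This is a plain-Python sketch with assumed function names, not the actual Flask view code:

```python
def check_answer(data, idx, user_answer):
    """Mirror the check action: compare the stored answer with the user's."""
    correct = data[idx].split("|")[1]
    # The article describes a straight equality check; the real app may
    # normalize case or whitespace before comparing.
    if user_answer == correct:
        return "You got it right!"
    return "Wrong Answer! Please try Again!!"

def show_answer(data, idx):
    """Mirror the show action: return the correct answer (second field)."""
    return data[idx].split("|")[1]
```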

PWA Requirements

In order to convert a web application to a PWA, there are three main requirements.

  1. Run it over HTTPS.
  2. Create and serve a manifest file in JSON format.
  3. Create and serve a JavaScript file to be registered as a service worker file.

My service worker JavaScript file is called serviceworker.js. The game page registers it with the following code:
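The registration snippet itself did not survive the page extraction. A typical registration for this setup would look roughly like the following (the path matches the static-folder layout described below; the rest is standard boilerplate, not the author's exact code):

```javascript
// Hypothetical reconstruction of the registration code on the game page.
function registerServiceWorker() {
  // Bail out where service workers are unavailable (old browsers, plain HTTP).
  if (typeof navigator === "undefined" || !("serviceWorker" in navigator)) {
    return false;
  }
  navigator.serviceWorker
    .register("/static/serviceworker.js")
    .then((reg) => console.log("Service worker registered with scope:", reg.scope))
    .catch((err) => console.error("Service worker registration failed:", err));
  return true;
}

registerServiceWorker();
```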

I used the online Web App Manifest Generator to create manifest.json. The manifest.json and serviceworker.js files are placed in the static folder, from which Flask serves public assets without requiring server-side routing. 

The serviceworker.js file has event listeners for installing itself, opening the cache, activating the cache, and adding/fetching URLs and responses to/from the cache. It also handles two custom features in the fetch event handler.

  1. Getting a new word clue should never be cached; otherwise the same clue would be served from the cache again and again. This is prevented by checking for the URL “https://mahboob.xyz/wtgw/new”: if it matches, the code returns from the function, passing the request through to the server without consulting the cache.
  2. If the requested URL has not been cached and the user is not connected to the internet, the cache lookup returns the response “You seem to be offline, please try after you’re online.”

The fetch event handler lives in /static/serviceworker.js.
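The handler itself is not reproduced in this extract. A sketch consistent with the two behaviors above might look like this (the cache-matching details are assumptions; the bypass URL and offline message come straight from the article):

```javascript
// URL that must always hit the server so a fresh clue is served every time.
const BYPASS_URL = "https://mahboob.xyz/wtgw/new";

function shouldBypassCache(url) {
  return url === BYPASS_URL;
}

const OFFLINE_MESSAGE = "You seem to be offline, please try after you're online.";

// Only register the listener inside a real service worker context.
if (typeof self !== "undefined" && typeof self.addEventListener === "function") {
  self.addEventListener("fetch", (event) => {
    // Returning early passes the request through to the network untouched.
    if (shouldBypassCache(event.request.url)) return;
    event.respondWith(
      caches
        .match(event.request)
        .then((cached) => cached || fetch(event.request))
        .catch(() => new Response(OFFLINE_MESSAGE))
    );
  });
}
```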

Deployment

On mahboob.xyz, the application is run by gunicorn, which requires the file wsgi.py, and is set up as a systemd service. It is co-located with the two Rails applications. The application service and Nginx configuration are given below: /etc/systemd/system/wtgw.service
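The unit file was shown as an image in the original post. A gunicorn service for this layout would typically look something like this; paths, user, and worker count are guesses:

```ini
# /etc/systemd/system/wtgw.service (hypothetical sketch)
[Unit]
Description=Gunicorn instance serving the word game
After=network.target

[Service]
User=www-data
WorkingDirectory=/var/www/wtgw
ExecStart=/var/www/wtgw/venv/bin/gunicorn --workers 3 --bind unix:wtgw.sock wsgi:app
Restart=always

[Install]
WantedBy=multi-user.target
```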

/etc/nginx/sites-enabled/mh_sites 
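Likewise, the Nginx configuration is not reproduced here. The wtgw-relevant part of a shared sites file would plausibly be a location block proxying to the gunicorn socket (socket path assumed):

```nginx
# Fragment of /etc/nginx/sites-enabled/mh_sites (hypothetical)
location /wtgw/ {
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass http://unix:/var/www/wtgw/wtgw.sock;
}
```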

You can grab the code from the GitHub repository and play the game here. Have fun, and sharpen your vocabulary.





Design v18 | CSS-Tricks


I redesigned the site! I can never think about the word redesign without also thinking about realigning, from Cameron Moll’s seminal article. I did not start from nothing. This design wasn’t a blank design canvas and empty code editor thing. I doubt any future redesign will be either. I started with what we already had and pushed some things around. But I pushed so much around, touching almost every single file, that it’s worthy of drawing a line and saying this is v18.

I keep a very incomplete design history here.

Getting Started

I always tend to start by poking around in a design tool. After 3 or 4 passes in Figma (then coming back after I started building to flesh out the footer design), this is where I left off.

Once I’m relatively happy with what is happening visually, I jump ship and start coding, making all the final decisions there. The final product isn’t 1000 miles different than this, but it has quite a few differences (and required 10× more decisions).

Simplicity

It may not look like it at first glance, but to me as I worked on it, the core theme was simplification. Not drastic, just like, 20%.

The header in v17 had a special mobile version and dealt with open/closed state. The v18 header is just a handful of links that fall down to the next line on small screens. I tossed in a “back to top” link in the footer that shows up once you’ve scrolled away from the top to help get you back to the nav. That scroll detection (IntersectionObserver based) is what I use to “spin the star” on the way back up also.
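That detection can be sketched with a sentinel element near the top of the page; the selectors and class names here are invented, not the site's actual code:

```javascript
// Decide the link's visibility from the sentinel's intersection state.
// Kept as a pure function so the decision is easy to reason about (and test).
function backToTopState(sentinelIsVisible) {
  return sentinelIsVisible ? "hidden" : "visible";
}

if (typeof IntersectionObserver !== "undefined") {
  const sentinel = document.querySelector("#top-sentinel");
  const link = document.querySelector(".back-to-top");
  new IntersectionObserver(([entry]) => {
    link.dataset.state = backToTopState(entry.isIntersecting);
    // The same callback could also trigger the star-spinning animation.
  }).observe(sentinel);
}
```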

I can already tell that the site header will be one of the things that evolves significantly in v18 as there is more polish to be found there.

The search form in v17 also had open/closed states, and special templates for the results page. I’m all-in on Jetpack Search now, so I do nothing but open that when you click the search icon.

This search is JavaScript-powered, so to make it more resilient, it’s also a valid hyperlink to Google search results:

<a 
  href="https://www.google.com/search?q=site:css-tricks.com%20layout"
  class="jetpack-search-filter__link"
>
  <span class="screen-reader-text">Search</span>
  <svg> ... </svg>
</a>

There were a variety of layouts in v17 (e.g. sidebar on the left or right) and header styles (e.g. video in the header). Now there is largely just one of each.

The footer in v17 became quite sprawling, with whole sections for the newsletter form, social media, related sites, and more. I’ve compacted it all into a more traditional footer, if there is such a thing.

There is one look for “cards” now, whether that is an article, video, guide, etc. There are slight variations depending on if the author is relevant, if it has tags, a call-to-action, etc, but it’s all the same base (and template). The main variation is a “mini” card, which is now used mostly-consistently across popular articles, the monthly mixup, and in-article related-article cards.

The newsletter area is simplified quite a bit. In v17, the /newsletters/ URL was kind of a “landing page” for the newsletter, and you could view the latest in a sidebar.

Now that URL just redirects you to the latest newsletter so you can read it like any other content easily, as well as navigate to past issues.

WordPress has the concept of one featured image per article. You don’t have to use it, but we do. I like how it’s integrated naturally into other things. Like it becomes the image for social media integration automatically. We used it in v17 as a subtle background-image thing.

Maybe in a perfect world, a perfect site would have a perfect content strategy such that every single article has a perfect featured image. A matching color scheme, exact dimensions, very predictable. But this is no perfect world. I prefer systems that allow for sloppiness. The design around our featured images accepts just about anything.

  • A site-branded gradient is laid over top and mix-blend-mode’d onto it, making them all feel related.
  • They are sized/cropped as needed for each context.
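In CSS terms, the gradient overlay is likely something along these lines (selector and colors invented for illustration):

```css
/* Hypothetical sketch of the branded overlay on featured images. */
.card__image {
  position: relative;
}
.card__image::after {
  content: "";
  position: absolute;
  inset: 0;
  background: linear-gradient(135deg, #ff8a00, #e52e71);
  mix-blend-mode: multiply; /* ties any source image to the site palette */
}
```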

With that known, our featured images are used in lots of contexts:

  • A large, featured article on the homepage (if vertical space is limited, via a height @media query, the featured image height is reduced)
  • Article headers, which use a very faded/enlarged version as part of a layered background

CSS Stats

Looking only at the CSS between the two versions (Project Wallace helps here):

Minified and gzipped, the main stylesheet is 16.4 kB. Perhaps not as small as an all-utility stylesheet could be, but that’s not a size I’ll ever worry about, especially since, without really trying, the size trended heavily downward.

Not Exactly a Speed Demon

There are quite a few resources in use on CSS-Tricks. If speed was my #1 priority, the first thing I’d do is start chopping away at the resources in use. In my opinion, it would make the site far less fun, but probably wouldn’t harm the content all that much. I just don’t want to. I’d rather find ways to keep the site relatively fast while still keeping it visually rich. Maybe down the road I can explore some of this stuff to allow for a much lighter-weight version of the site that is opt-in in a standards-based way.

About those resources…

  • Images are the biggest weight. Almost every page has quite a few of them (10+). I try to serve them from a CDN in an optimized format sized with the responsive images syntax. There is more I can do, but I’ve got a good start already.
  • There is still ~180 kB of JavaScript. The Jetpack Search feature is powered by it and is the weightiest module. A polyfill gets loaded (probably by that), which I should see if I can remove. I’m still using jQuery, which I’ll definitely look into removing in the next round. Nothing against jQuery; I’m just not using it all that much, and most of what I’m doing is written in vanilla JavaScript anyway. Google Analytics is in there, and the rest is little baby scripts (ironically) for performance things or advertising.
  • The fonts weigh in at ~163 kB and they aren’t loaded in any particularly fancy way.

All three of those things are targets for speed improvements.

And yet, hey, the Desktop Lighthouse report ain’t bad:

Those results are from the homepage, which because of the big grids of content, is one of the heavier pages. There’s still plenty of attempts at performance best practices here:

  • Everything is served from global HTTP/2 CDNs and cached
  • Assets optimized/minified/combined where possible
  • Assets/ads lazy-loaded where possible
  • Premium hosting
  • HTML over the wire + instant.page

My hope is that as you click around the site and come back in subsequent visits, it feels pretty snappy.

Type

It’s Hoefler&Co. across the board again.

I left the bulk of the article typography alone, as that was one of the last design sprints I did in v17 and I kinda like where it left off. Now that clamp() is here though, I’m using that to do fluid typography for much of the site. For example, headers:

font-size: clamp(2rem, calc(2rem + 1.2vw), 3rem);

aXe

I used the axe DevTools plugin to test pages before launch, and did find a handful of things to get fixed up. Not exactly a deep dive into accessibility, but also, this wasn’t a full re-write, so I don’t expect terribly much has changed in terms of accessibility. I’m particularly interested in fixing any problems here, so don’t hold back on me!

Bugs

I’m sure there are some. I’d rather not use this comment thread for bugs. If you’ve run across one, please hit us at [email protected]. 🧡





6 Things to Do When Inheriting Legacy Rails Apps


Introduction

Rails version 1.0 is approaching its 15th anniversary, and there’s reason to celebrate the framework’s progress.

There have been hundreds of amazing products built with Rails since its creation.

Streaming service heavyweight Hulu, project management leader Basecamp, and hospitality service Airbnb were all built using Rails.

But for every successful project, there are dozens, if not hundreds, of others that are overloaded with technical debt, difficult to manage, or a cumbersome nightmare to improve.

Full-stack engineering companies are often tasked with taking over these kinds of legacy Rails apps that have been abandoned by their previous development teams.

Here’s a step-by-step guide that we use when starting one of these projects. We hope this can be a roadmap for successfully taking ownership of a legacy Rails project.

1. Code Review and Local Setup

Documentation Overview

When starting with one of these abandoned Rails projects, the first thing we do is to try and understand the state of the app, namely the code complexity and the level of technical debt.

If you’re unfamiliar with the term “technical debt,” it is the result of development teams taking shortcuts to expedite the delivery of a feature or a project. If you’ve ever blown through something “just to get it out the door” and felt somewhat uncomfortable with it, you’ve felt the pang of “technical debt”.

A developer frustrated with tech debt.

Every project inevitably takes on some amount of technical debt during its lifetime, so taking stock of code complexity and gauging the level of technical debt is a critical first step.

First, scan high and low for any written documentation about the project, whether it be a README, a wiki attached to the git repo, or some other documentation for the project.

Next, check the Gemfile and Gemfile.lock for the number of dependencies and Rails/Ruby versions being used. While doing this, be on the lookout for any gems you aren’t familiar with and make sure to research what they do. We usually write a quick note next to every gem to avoid having to visit rubygems constantly for reminders.

Test Review

After reading any available documentation, the best starting point for a code review is by looking at the tests in the spec/ or test/ folder.

Tests are the backbone for any Rails project and they are even more important when upgrading or refactoring a project. If the project has no tests or if it has low test coverage, then that’ll be the first place we start writing code after finishing the code review. (We’ll cover getting to 100% test coverage later in this post.)

Review Routes and Database Structure

Next, check the config/routes.rb file and the db/schema.rb file.

These files give us an idea of how the data is organized and how the app flows. We make a note inside the schema definition of any “god objects” (i.e. database tables with ~30 or more columns) that could be targets to be divided up down the road.

In the routes file, we make notes of any controllers that have large numbers of custom actions outside of the standard REST actions.

Another aspect of the routes we note is the number of only: or except: modifiers on resource definitions. If we don’t see only or except anywhere in the routes file, it most likely means we’re dealing with lots of unused routes that we’ll want to prune.
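For reference, that pruning looks like this in the routes file (resource names invented):

```ruby
# config/routes.rb (illustrative)
Rails.application.routes.draw do
  # Generate only the routes the app actually serves...
  resources :articles, only: [:index, :show]
  # ...or exclude the handful that are unused.
  resources :comments, except: [:destroy]
end
```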

Review Remaining Folders

After looking in these three areas, we quickly scan the files in the app folder.

If the app has any complexity to it at all, it will be unrealistic to develop a full mental model of the project at this point, so we’re not going to do a line-for-line review of every controller, model, or view.

Instead, we’re looking at general code quality, making notes of any giant files of 200 lines or longer, and any other red flags such as active record callbacks with lots of implicit behavior, no strong parameters inside controllers, etc.

There are many automated tools that can help with this; we often use the rails_best_practices gem to get an idea of the overall code quality.

2. Write Tests Up to (Ideally) 100% Test Coverage

At this point, you should have a decent understanding of the project. You probably won’t have a complete mental picture of the project but you’ve made some progress, so we’ll circle back to the tests and the test coverage.

In our experience, a high-quality test suite is extremely rare in the realm of inherited projects. The norm will most likely be a project with no tests at all or, at the very best, test coverage between 10% and 20%.

When given a low quality test suite, the first thing to do is just start adding as many tests as possible to get as close to 100% test coverage as you can. This is obviously easier said than done, and we often spend several weeks at this stage. However, without an adequate test suite, making any code changes later on will be incredibly dangerous. We don’t try to get fancy with our test suite: we write simple unit tests for model and service object methods, add integration tests for every controller route, and typically write 5-10 feature tests of the app’s core functionality to make sure all the pieces work together.

There’s a lot of discussion in the Rails community about whether 100% test coverage should be a target for a test suite. For this kind of special case, we do believe that 100% coverage is important, because it helps catch many small edge-case bugs when refactoring down the road.

If you need help visualizing test coverage, look into the simplecov gem, which creates a nice HTML report showing which code paths your tests are taking.
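Getting simplecov running is a two-line change at the top of the test helper (per the gem's README; "rails" is its built-in profile):

```ruby
# spec/spec_helper.rb (or test/test_helper.rb) — these must be the first
# lines, before any application code loads, so every file gets instrumented.
require "simplecov"
SimpleCov.start "rails"
```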

3. Review Deployment and Setup a Staging Server

With the code reviewed and a reliable test suite backing us up, the last thing we review is the deployment process.

In the best case, the README or the client has a step-by-step guide for how to deploy to production and to staging.

Realistically, we don’t live in this world. You’ll rarely encounter intensely documented README files, so the deployment practices you encounter will probably be unclear in most scenarios.

Or there’s nothing and you have no idea. That happens.

Mapping out the deployment process in cases without a lot of direction mostly involves relying on your experience and a healthy dose of guess work. As you record each deployment step, make notes on possible errors that could happen and develop a strategy of how you would revert the step or handle the error. It’ll take a little work.

Want to get some help mapping out the deployment process and get some small amount of reassurance for future production deployment?

Do this: Spin up a free Heroku dyno and push the project up there to check for errors and how the app performs.

4. Lint the Codebase With Rubocop and Prettier

With the review phase done, we can now start moving into code changes.

Start off low stakes with some automated linting tools that should cause no behavioral changes to the code, specifically rubocop and prettier. Even with these popular libraries there’s always a chance of bugs, which is why we only start making these changes when we’re confident in our test coverage.

While there are some variations, our standard linting process has three steps.

We start by running rubocop --auto-correct which automatically corrects any safe formatting warnings.

Then we use the prettier gem and run rbprettier --write '**/*.rb' to fix other remaining formatting issues, specifically any line length warnings.

Finally, we run rubocop --auto-gen-config, which creates a .rubocop_todo.yml file recording all remaining rubocop warnings, which we can begin fixing right away or later on.

Having this .rubocop_todo.yml file makes sure we’re not introducing new formatting errors when we make other changes in later steps.

As a side note, I’m a big fan of using the git blame command when trying to understand what’s happening with specific code sections, and having large formatting changes like the ones generated by rubocop and prettier can make git blame less useful.

However, git has an ignore-rev flag for ignoring exactly these kinds of commits, and we recommend adding formatting commits to a .git-blame-ignore-revs file and following a guide like this to automatically ignore these commits.
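Concretely, the workflow is two commands, run right after committing the formatting changes (the guide linked above covers the details):

```shell
# Record the formatting commit's full hash...
git rev-parse HEAD >> .git-blame-ignore-revs
# ...and tell git blame to skip it automatically (git 2.23+).
git config blame.ignoreRevsFile .git-blame-ignore-revs
```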

5. Deploy to Staging and Production

As we noted above, the formatting changes we made in the previous section should (theoretically) not affect behavior at all, so we can take this opportunity to test our deployment process.

The first production deploy is by far the most dangerous step of the entire process, but we eventually have to take it, so we try and get it out of the way now.

If the project has no staging environment, now’s the time to set one up. It’s important to have a staging environment that mirrors how the production environment is deployed.

For example, if the project uses capistrano for production deploys, spin up a new AWS server and test a capistrano deploy to that endpoint. You should always test reverting for different possible errors on this staging server and test to make sure the website itself is working correctly after it’s launched.

At the end of the day, there’s only so much to prepare for your first push to production, so always aim for a low traffic time, and then pull the trigger with all hands on deck in case of emergency.

6. Upgrade Rails, Ruby, and Gem Versions

Once our first deployment to production has gone smoothly, we begin with the (potentially) breaking changes.

This can take different forms depending on the project. In some cases, you’ll have product owners that will ask for upgrades to a specific Rails version or Ruby version. In other cases, clients leave it up to your best judgement.

Upgrading both Ruby and Rails versions are entire blog posts in and of themselves, but to offer a bit of direction, just take it one minor version at a time and rely on your tests for guidance if something breaks.

Since Rails 5.0, there have been significantly fewer breaking changes, but upgrading even a minor version between Rails 3.0 and Rails 5.0 can require major code changes, so slow and steady is the key to avoiding new bugs.

More Refactoring or New Feature Development

With our target Rails, Ruby, and gem versions safely deployed to production, you’ll have finished the foundational work. Well done.

Depending on your client, this may be the end of the project — all that’s required is an upgrade to their gem versions. Most of the time though, there’s a list of features to go along with the changes.

With these six steps finished, you’re now in a much better position to start this feature development. Happy development!





Simulating Drop Shadows with the CSS Paint API


Ask a hundred front-end developers, and most, if not all, of them will have used the box-shadow property in their careers. Shadows are enduringly popular, and can add an elegant, subtle effect if used properly. But shadows occupy a strange place in the CSS box model. They have no effect on an element’s width and height, and are readily clipped if overflow on a parent (or grandparent) element is hidden.

We can work around this with standard CSS in a few different ways. But, now that some of the CSS Houdini specifications are being implemented in browsers, there are tantalizing new options. The CSS Paint API, for example, allows developers to generate images programmatically at run time. Let’s look at how we can use this to paint a complex shadow within a border image.

A quick primer on Houdini

You may have heard of some newfangled CSS tech hitting the platform with the catchy name of Houdini. Houdini promises to deliver greater access to how the browser paints the page. As MDN states, it is “a set of low-level APIs that exposes parts of the CSS engine, giving developers the power to extend CSS by hooking into the styling and layout process of a browser’s rendering engine.”

The CSS Paint API

The CSS Paint API is one of the first of these APIs to hit browsers. It is a W3C candidate recommendation. This is the stage when specifications start to see implementation. It is currently available for general use in Chrome and Edge, while Safari has it behind a flag and Firefox lists it as “worth prototyping”. There is a polyfill available for unsupported browsers, though it will not run in IE11.

While the CSS Paint API is enabled in Chromium, passing arguments to the paint() function is still behind a flag. You’ll need to enable experimental web platform features for the time being. These examples may not, unfortunately, work in your browser of choice at the moment. Consider them an example of things to come, and not yet ready for production.

The approach

We’re going to generate an image with a shadow, and then use it for a border-image… huh? Well, let’s take a deeper look.

As mentioned above, shadows don’t add any width or height to an element, but spread out from its bounding box. In most cases, this isn’t a problem, but those shadows are vulnerable to clipping. A common workaround is to create some sort of offset with either padding or margin.

What we’re going to do is build the shadow right into the element by painting it in to the border-image area. This has a few key advantages:

  1. border-width adds to the overall element width
  2. Content won’t spill into the border area and overlap the shadow
  3. Padding won’t need any extra width to accommodate the shadow and content
  4. Margins around the element won’t interfere with that element’s siblings

For that aforementioned group of one hundred developers who’ve used box-shadow, it’s likely only a few of them have used border-image. It’s a funky property. Essentially, it takes an image and slices it into nine pieces, then places them in the four corners, sides and (optionally) the center. You can read more about how all this works in Nora Brown’s article.

The CSS Paint API will handle the heavy lifting of generating the image. We’re going to create a module for it that tells it how to layer a series of shadows on top of each other. That image will then get used by border-image.

These are the steps we’ll take:

  1. Set up the HTML and CSS for the element we want to paint in
  2. Create a module that draws the image
  3. Load the module into a paint worklet
  4. Call the worklet in CSS with the new paint() function

Setting up the canvas

You’re going to hear the term canvas a few times here, and in other CSS Paint API resources. If that term sounds familiar, you’re right. The API works in a similar way to the HTML <canvas> element.

First, we have to set up the canvas on which the API will paint. This area will have the same dimensions as the element that calls the paint function. Let’s make a 300×300 div.

<section>
  <div class="foo"></div>
</section>

And the styles:

.foo {
  border: 15px solid #efefef;
  box-sizing: border-box;
  height: 300px;
  width: 300px;
}

Creating the paint class

HTTPS is required for any JavaScript worklet, including paint worklets. You won’t be able to use it at all if you’re serving your content over HTTP.

The second step is to create the module that is loaded into the worklet — a simple file with the registerPaint() function. This function takes two arguments: the name of the worklet and a class that has the painting logic. To stay tidy, we’ll use an anonymous class.

registerPaint(
  "shadow",
  class {}
);

In our case, the class needs two attributes, inputProperties and inputArguments, and a method, paint().

registerPaint(
  "shadow",
  class {
    static get inputProperties() {
      return [];
    }
    static get inputArguments() {
      return [];
    }
    paint(context, size, props, args) {}
  }
);

inputProperties and inputArguments are optional, but necessary to pass data into the class.

Adding input properties

We need to tell the worklet which CSS properties to pull from the target element with inputProperties. It’s a getter that returns an array of strings.

In this array, we list both the custom and standard properties the class needs: --shadow-colors, background-color, and border-top-width. Pay particular attention to how we use non-shorthand properties.

static get inputProperties() {
  return ["--shadow-colors", "background-color", "border-top-width"];
}

For simplicity, we’re assuming here that the border is even on all sides.

Adding arguments

Currently, inputArguments are still behind a flag, hence enabling experimental features. Without them, use inputProperties and custom properties instead.

We also pass arguments to the paint module with inputArguments. At first glance, they may seem superfluous to inputProperties, but there are subtle differences in how the two are used.

When the paint function is called in the stylesheet, inputArguments are explicitly passed in the paint() call. This gives them an advantage over inputProperties, which might be listening for properties that could be modified by other scripts or styles. For example, if you’re using a custom property set on :root that changes, it may filter down and affect the output.

The second important difference for inputArguments, which is not intuitive, is that they are not named. Instead, they are referenced as items in an array within the paint method. When we tell inputArguments what it’s receiving, we are actually giving it the type of the argument.

The shadow class is going to need three arguments: one for X positions, one for Y positions, and one for blurs. We’ll set that up as three space-separated lists of integers.

Anyone who has registered a custom property may recognize the syntax. In our case, the <integer> keyword means any whole number, while + denotes a space-separated list.

static get inputArguments() {
  return ["<integer>+", "<integer>+", "<integer>+"];
}

To use inputProperties in place of inputArguments, you could set custom properties directly on the element and listen for them. Namespacing would be critical to ensure inherited custom properties from elsewhere don’t leak in.

Adding the paint method

Now that we have the inputs, it’s time to set up the paint method.

A key concept for paint() is the context object. It is similar to, and works much like, the HTML <canvas> element context, albeit with a few small differences. Currently, you cannot read pixels back from the canvas (for security reasons), or render text (there’s a brief explanation why in this GitHub thread).

The paint() method has four implicit parameters:

  1. The context object
  2. Geometry (an object with width and height)
  3. Properties (a map from inputProperties)
  4. Arguments (the arguments passed from the stylesheet)
paint(ctx, geom, props, args) {}

Getting the dimensions

The geometry object knows how big the element is, but we need to subtract the border from each side. With our 15px border, that is 30 pixels in total on each axis:

const width = (geom.width - borderWidth * 2);
const height = (geom.height - borderWidth * 2);
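As plain arithmetic, the same calculation can be wrapped in a helper (a sketch for illustration only; the name innerSize is hypothetical):

```javascript
// Sketch: the drawable area is the element size minus the border on
// both sides of each axis.
function innerSize(geomWidth, geomHeight, borderWidth) {
  return {
    width: geomWidth - borderWidth * 2,
    height: geomHeight - borderWidth * 2,
  };
}
```

For a 300x300 element with a 15px border, that leaves a 270x270 area to paint in.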

Using properties and arguments

Properties and arguments hold the resolved data from inputProperties and inputArguments. Properties come in as a map-like object, and we can pull values out with get() and getAll():

const borderWidth = props.get("border-top-width").value;
const shadowColors = props.getAll("--shadow-colors");

get() returns a single value, while getAll() returns an array.

--shadow-colors will be a space-separated list of colors which can be pulled into an array. We’ll register this with the browser later so it knows what to expect.

We also need to specify what color to fill the rectangle with. It will use the same background color as the element:

ctx.fillStyle = props.get("background-color").toString();

As mentioned earlier, arguments come into the module as an array, and we reference them by index. They’re of the type CSSStyleValue right now — let’s make it easier to iterate through them:

  1. Convert the CSSStyleValue into a string with its toString() method
  2. Split the result on spaces with a regex
const xArray = args[0].toString().split(/\s+/);
const yArray = args[1].toString().split(/\s+/);
const blurArray = args[2].toString().split(/\s+/);
// e.g. '1 2 3' -> ['1', '2', '3']
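Outside a worklet, the same conversion can be sketched with a plain object standing in for the real CSSStyleValue (the fakeArg literal below is a stand-in, not the actual type):

```javascript
// Sketch: converting a CSSStyleValue-like argument into an array.
// fakeArg stands in for what the worklet receives in args[].
const fakeArg = { toString: () => "8 2 1" };

const xArray = fakeArg.toString().trim().split(/\s+/);
// xArray is now ["8", "2", "1"]; Number() converts entries when needed
const xNumbers = xArray.map(Number);
```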

Drawing the shadows

Now that we have the dimensions and properties, it’s time to draw something! Since we need a shadow for each item in shadowColors, we’ll loop through them. Start with a forEach() loop:

shadowColors.forEach((shadowColor, index) => { 
});

With the index of the array, we’ll grab the matching values from the X, Y, and blur arguments:

shadowColors.forEach((shadowColor, index) => {
  ctx.shadowOffsetX = xArray[index];
  ctx.shadowOffsetY = yArray[index];
  ctx.shadowBlur = blurArray[index];
  ctx.shadowColor = shadowColor.toString();
});

Finally, we’ll use the fillRect() method to draw in the canvas. It takes four arguments: X position, Y position, width, and height. For the position values, we’ll use border-width from inputProperties; this way, the border-image is clipped to contain just the shadow around the rectangle.

shadowColors.forEach((shadowColor, index) => {
  ctx.shadowOffsetX = xArray[index];
  ctx.shadowOffsetY = yArray[index];
  ctx.shadowBlur = blurArray[index];
  ctx.shadowColor = shadowColor.toString();

  ctx.fillRect(borderWidth, borderWidth, width, height);
});
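Assembled in one place, the module looks roughly like this (a sketch: the class name is an assumption, and registerPaint() only exists inside a paint worklet, so it appears as a comment):

```javascript
// Sketch of the complete paint module, combining the pieces above.
class ShadowPainter {
  static get inputProperties() {
    return ["--shadow-colors", "background-color", "border-top-width"];
  }

  static get inputArguments() {
    return ["<integer>+", "<integer>+", "<integer>+"];
  }

  paint(ctx, geom, props, args) {
    // Resolved properties from inputProperties
    const borderWidth = props.get("border-top-width").value;
    const shadowColors = props.getAll("--shadow-colors");

    // Drawable area inside the border
    const width = geom.width - borderWidth * 2;
    const height = geom.height - borderWidth * 2;

    // Arguments from the paint() call, split into arrays
    const xArray = args[0].toString().split(/\s+/);
    const yArray = args[1].toString().split(/\s+/);
    const blurArray = args[2].toString().split(/\s+/);

    ctx.fillStyle = props.get("background-color").toString();

    // One filled rectangle per shadow color
    shadowColors.forEach((shadowColor, index) => {
      ctx.shadowOffsetX = xArray[index];
      ctx.shadowOffsetY = yArray[index];
      ctx.shadowBlur = blurArray[index];
      ctx.shadowColor = shadowColor.toString();
      ctx.fillRect(borderWidth, borderWidth, width, height);
    });
  }
}

// In the worklet file: registerPaint("shadow", ShadowPainter);
```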

This technique can also be done using a canvas drop-shadow filter and a single rectangle. It’s supported in Chrome, Edge, and Firefox, but not Safari. See a finished example on CodePen.

Almost there! There are just a few more steps to wire things up.

Registering the paint module

We first need to register our module as a paint worklet with the browser. This is done back in our main JavaScript file:

CSS.paintWorklet.addModule("https://codepen.io/steve_fulghum/pen/bGevbzm.js");
Demo: https://codepen.io/steve_fulghum/pen/BazexJX

Registering custom properties

Something else we should do, though it isn’t strictly necessary, is to tell the browser a little more about our custom properties by registering them.

Registering properties gives them a type. We want the browser to know that --shadow-colors is a list of actual colors, not just a string.

If you need to target browsers that don’t support the Properties and Values API, don’t despair! Custom properties can still be read by the paint module, even if not registered. However, they will be treated as unparsed values, which are effectively strings. You’ll need to add your own parsing logic.
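For example, if --shadow-colors isn’t registered, the module receives a string-like unparsed value, and the parsing could be sketched like this (parseColorList is a hypothetical helper name, not from the article):

```javascript
// Sketch: manual parsing for an unregistered custom property, which
// arrives in the paint module as an unparsed, string-like value.
function parseColorList(unparsedValue) {
  return unparsedValue.toString().trim().split(/\s+/);
}
```

Calling parseColorList(" red green blue ") gives ["red", "green", "blue"], which can then stand in for the array getAll() would have produced for a registered property.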

Like addModule(), this is added to the main JavaScript file:

CSS.registerProperty({
  name: "--shadow-colors",
  syntax: "<color>+",
  initialValue: "black",
  inherits: false
});

You can also use @property in your stylesheet to register properties. You can read a brief explanation on MDN.

Applying this to border-image

Our worklet is now registered with the browser, and we can call the paint method in our main CSS file to take the place of an image URL:

border-image-source: paint(shadow, 0 0 0, 8 2 1, 8 5 3) 15;
border-image-slice: 15;

These are unitless values. Since we’re drawing a 1:1 image, they equate to pixels.

Adapting to display ratios

We’re almost done, but there’s one more problem to tackle.

For some of you, things might not look quite as expected. I’ll bet you sprung for the fancy, high DPI monitor, didn’t you? We’ve encountered an issue with the device pixel ratio. The dimensions that have been passed to the paint worklet haven’t been scaled to match.

Rather than go through and scale each value manually, a simple solution is to multiply the border-image-slice value. Here’s how to do it for proper cross-environment display.

First, let’s register a new custom property for CSS that exposes window.devicePixelRatio:

CSS.registerProperty({
  name: "--device-pixel-ratio",
  syntax: "<number>",
  initialValue: window.devicePixelRatio,
  inherits: true
});

Since we’re registering the property and giving it an initial value, we don’t need to set it on :root; inherits: true passes it down to all elements.

And, last, we’ll multiply our value for border-image-slice with calc():

.foo {
  border-image-slice: calc(15 * var(--device-pixel-ratio));
}

It’s important to note that paint worklets also have access to the devicePixelRatio value by default. You can simply reference it in the class, e.g. console.log(devicePixelRatio).
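As a variation (my own sketch, not the article’s approach), scaling could also happen inside the worklet by multiplying each value by that ratio:

```javascript
// Sketch: scale values by the device pixel ratio. Inside a paint
// worklet, devicePixelRatio is available as a global; the typeof
// guard falls back to 1 so the helper also runs outside a worklet.
function scaleForDisplay(values, ratio = typeof devicePixelRatio !== "undefined" ? devicePixelRatio : 1) {
  return values.map((value) => Number(value) * ratio);
}
```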

Finished

Whew! We should now have a properly scaled image being painted in the confines of the border area!

Live demo (best viewed in Chrome and Edge)

Bonus: Apply this to a background image

I’d be remiss not to also demonstrate a solution that uses background-image instead of border-image. It’s easy to do with just a few modifications to the module we just wrote.

Since there isn’t a border-width value to use, we’ll make that a custom property:

CSS.registerProperty({
  name: "--shadow-area-width",
  syntax: "<integer>",
  initialValue: "0",
  inherits: false
});

We’ll also have to control the background color with a custom property. Since we’re drawing inside the content box, setting an actual background-color would still show behind the background image.

CSS.registerProperty({
  name: "--shadow-rectangle-fill",
  syntax: "<color>",
  initialValue: "#fff",
  inherits: false
});

Then set them on .foo:

.foo {
  --shadow-area-width: 15;
  --shadow-rectangle-fill: #efefef;
}

This time around, paint() gets set on background-image, using the same arguments as we did for border-image:

.foo {
  --shadow-area-width: 15;
  --shadow-rectangle-fill: #efefef;
  background-image: paint(shadow, 0 0 0, 8 2 1, 8 5 3);
}

As expected, this will paint the shadow in the background. However, since background images extend into the padding box, we’ll need to adjust padding so that text doesn’t overlap:

.foo {
  --shadow-area-width: 15;
  --shadow-rectangle-fill: #efefef;
  background-image: paint(shadow, 0 0 0, 8 2 1, 8 5 3);
  padding: 15px;
}

Fallbacks

As we all know, we don’t live in a world where everyone uses the same browser, or has access to the latest and greatest. To make sure those users don’t end up with a busted layout, let’s consider some fallbacks.

Padding fix

Padding on the parent element will condense the content box to make room for shadows that extend from its children:

section.parent {
  padding: 6px; /* size of shadow on child */
}

Margin fix

Margins on child elements can be used for spacing, to keep shadows away from their clipping parents:

div.child {
  margin: 6px; /* size of shadow on self */
}

Combining border-image with a radial gradient

This is a little more off the beaten path than padding or margins, but it’s got great browser support. CSS allows gradients to be used in place of images, so we can use one within a border-image, just like how we did with paint(). This may be a great option as a fallback for the Paint API solution, as long as the design doesn’t require exactly the same shadow:

Gradients can be finicky and tricky to get right, but Geoff Graham has a great article on using them.

div {
  border: 6px solid;
  border-image: radial-gradient(
    white,
    #aaa 0%,
    #fff 80%,
    transparent 100%
  )
  25%;
}

An offset pseudo-element

If you don’t mind some extra markup and CSS positioning, and need an exact shadow, you can also use an inset pseudo-element. Beware the z-index! Depending on the context, it may need to be adjusted.

.foo {
  box-sizing: border-box;
  position: relative;
  width: 300px;
  height: 300px;
  padding: 15px;
}

.foo::before {
  background: #fff;
  bottom: 15px;
  box-shadow: 0px 2px 8px 2px #333;
  content: "";
  display: block;
  left: 15px;
  position: absolute;
  right: 15px;
  top: 15px;
  z-index: -1;
}

Final thoughts

And that, folks, is how you can use the CSS Paint API to paint just the image you need. Is it the first thing to reach for in your next project? Well, that’s for you to decide. Browser support is still limited, but it is improving.

In all fairness, it may add far more complexity than a simple problem calls for. However, if you’ve got a situation that calls for pixels put right where you want them, the CSS Paint API is a powerful tool to have.

What’s most exciting though, is the opportunity it provides for designers and developers. Drawing shadows is only a small example of what the API can do. With some imagination and ingenuity, all sorts of new designs and interactions are possible.


Chapter 6: Web Design | CSS-Tricks


Alec Pollak was little more than a junior art director cranking out print ads when he got a call that would change the path of his career. He worked at the advertising agency Grey Entertainment, later called Grey Group. The agency had spent decades acquiring some of the biggest clients in the industry.

Pollak spent most of his days in the New York office, mocking up designs for magazines and newspapers. Thanks to a knack for computers, a hobby of his, he would get the odd digital assignment or two, working on a multimedia offshoot for an ad campaign. Pollak had been on the Internet since the days of BBS. But when he saw the World Wide Web, the pixels brought to life on his screen by the Mosaic browser, he found a calling.

Sometime in early 1995, he got that phone call. “It was Len Fogge, President of the agency, calling little-old, Junior-Art-Director me,” Pollak would later recall. “He’d heard I was one of the few people in the agency who had an email address.” Fogge was calling because a particularly forward-thinking client (later co-founder of Warner Bros Online Donald Buckley) wanted a website for the upcoming film Batman Forever. The movie’s key demographic — tech-fluent, generally well-to-do comic book aficionados — made it perfect for a web experiment. Fogge was calling Pollak to see if that’s something he could do, build a website. Pollak never had. He knew little about the web other than how to browse it. The offer, however, was too good to pass up. He said, yes, he absolutely could build a website.

Art director Steve McCarron was assigned the project. Pollak had only managed to convince one other employee at Grey of the web’s potential, copywriter Jeffrey Zeldman. McCarron brought the two of them in to work on the site. With little in the way of examples, the trio locked themselves in a room and began to work out what they thought a website should look and feel like. Partnering with a creative team at Grey, and a Perl programmer, they emerged three months later with something cutting edge. The Batman Forever website launched in May of 1995.

The Batman Forever website

When you first came to the site, a moving bat (scripted in Perl by programmer Douglas Rice) flew towards your screen, revealing behind it the website’s home page. It was filled with short, punchy copy and edgy visuals that played on the film’s gothic motifs. The site featured a message board where fans could gather and discuss the film. It had a gallery of videos and images available for download, tiny low-resolution clips and stills from the film. It was packed edge-to-edge with content and easter eggs.

It was hugely successful and influential. At the time, it was visited by just about anyone with a web connection and a browser, Batman fan or not.

Over the next couple of years — a full generation in Internet time — this is how design would work on the web. It would not be a deliberate, top-down process. The web design field would form from blurry edges brought into focus a little at a time. The practice would be taken up not by polished professionals but by junior art directors and designers fresh out of college, amateurs with little to lose at the beginning of their careers. In other words, just as outsiders built the web, outsiders would design it.

Interest in the early web required tenacity and personal drive, so it sometimes came from unusual places. Like when Gina Blaber recruited a team inside of O’Reilly nimble and adventurous enough to design GNN from scratch. Or when Walter Isaacson looked for help with Pathfinder and found Chan Suh toiling away at websites deeply embedded in the marketing arm of a Time Warner publication. These weren’t the usual suspects. These were web designers.


Jamie Levy was certainly an outsider, with a massive influence on the practice of design on the web. A product of the San Fernando Valley punk scene, Levy came to New York to attend NYU’s Interactive Telecommunications Program. Even at NYU, a school which had produced some of the most influential artists and filmmakers of the time, Levy stood out. She had a brash attitude and a sharp wit, balanced by an incredible ability to self-motivate and adapt to new technology, and, most importantly, an explosive and immediately recognizable aesthetic.

Levy’s initial dismissal of computers as glorified calculators for shut-ins dropped once she saw what they could do with graphics. After graduating from NYU, Levy brought her experience designing zines in the punk scene — zines she had designed, printed, and distributed herself — to her multimedia work. One of her first projects was a digital magazine called Electric Hollywood, built with HyperCard and distributed on floppy disks. Levy mixed bold colors and grungy zine-inspired artistry with a clickable, navigable hypertext interface. Years before the web, Levy was building multimedia that felt a lot like what it would become.

Electric Hollywood was enough to cultivate a following. Levy was featured in magazines and in interviews. She also caught the eye of Billy Idol, who recruited her to create graphical, interactive liner notes for his latest album, Cyberpunk, distributed on floppy disk alongside the CD. The album was a critical and commercial failure, but Levy’s reputation among a growing clique of digital designers was cemented.

Still, nothing compared to the first time she saw the web. Levy experienced the World Wide Web — which author Claire Evans describes in her book, Broad Band — “as a conversion.” “Once the browser came out,” Levy would later recall, “I was like, ‘I’m not making fixed-format anymore. I’m learning HTML,’ and that was it.” Levy’s style, which brought the user in to experience her designs on their own terms, was a perfect fit for the web. She began moving her attention to this new medium.

People naturally gravitated towards Levy. She was a fixture in Silicon Alley, the media’s name for the new tech and web scene concentrated in New York City. Within a few years, they would be the ushers of the dot-com boom. In the early ’90s, they were little more than a scrappy collection of digital designers and programmers and writers; “true believers” in the web, as they called themselves.

Levy was one of their acolytes. She became well known for her Cyber Slacker parties; late-night hangouts where she packed her apartment with a ragtag group of hackers and artists (often with appearances by DJ Spooky). Designers looked to her for inspiration. Many would emulate her work in their own designs. She even had some mainstream appeal. Whenever she graced the covers of major magazines like Esquire and Newsweek, she always had a skateboard or a keyboard in her hands.

It was her near mythic status that brought IT company Icon CMT calling about their new web project, a magazine called Word. The magazine would be Levy’s most ambitious project to date, and where she left her greatest influence on web design. Word would soon become a proving ground for her most impressive design ideas.

Word Magazine

Levy was put in charge of assembling a team. Her first recruit was Marisa Bowe, whom she had met on the Echo messaging board (BBS) run by Stacy Horn, based in New York. Bowe was originally brought on as a managing editor. But when editor in chief Jonathan Van Meter left before the project even got off the ground, Bowe was put in charge of the site’s editorial vision.

Levy found a spiritual partner in Bowe, having come to the web with a similar ethos and passion. Bowe would become a large part of defining the voice and tone that was so integral to the webzine revolution of the ’90s. She had a knack for locating authentic stories, and Word’s content was often, as Bowe called it, “first-person memoirs.” People would take stories from their lives and relate them to the cultural topics of the day. And Bowe’s writing and editorial style — edgy, sarcastic, and conversational — would be backed by the radical design choices of Levy.

Articles that appeared on Word were one-of-a-kind, with images, backgrounds, and colors chosen to help facilitate the tone of each piece. These art-directed posts pulled from Levy’s signature style, a blend of 8-bit graphics and off-kilter layouts, with the chaotic bricolage of punk rock zines. Pages came alive, representing through design the personality of the post’s author.

Word also became known for experimenting with new technologies almost as soon as they were released. Browsers were still rudimentary in terms of design possibilities, but the team didn’t shy away from stretching those possibilities as far as they could go. It was one of the first magazines to use music, carefully paired with the content of the articles. When Levy first encountered what HTML tables could do to create grid-based layouts, she needed to use them immediately. “Everyone said, ‘Oh my God, this is going to change everything,’” she later recalled in an interview, “And I went back to Word.com and I’d say, ‘We’ve got to do an artistic piece with tables in it.’ Every week there was some new HTML code to exploit.”

The duo was understandably cocky about their work, and with good reason. It would be years before others would catch up to what they did on Word. “Nobody is doing anything as interesting as Word, I wish someone would try and kick our ass,” Levy once bragged. Bowe echoed the sentiment, describing the rest of the web as “like frosting with no cake.” Still, for a lot of designers, their work would serve as inspiration and a template for what was possible. The whole point was to show off a bit.

Levy’s design was inspired by her work in the print world, but it was something separate and new. When she added some audio to a page, or painted a background with garish colors, she did so to augment its content. The artistry was the point. Things might have been a bit hard to find, a bit confusing, on Word. But that was ok. The joy of the site was discovering its design. Levy left the project before its first anniversary, but the pop art style would continue on the site under new creative director Yoshi Sodeoka. And as the years went on, others would try to capture the same radical spirit.

A couple of years later, Ben Benjamin would step away from his more humdrum work at CNet to create a more personal venture known as Superbad, where a mix of offbeat, banal content and charged visuals created a place of exploration. There was no central navigation or anchor to the experience. One could simply click and discover whatever came next.

The early web also saw its most avant-garde movement in the form of Net.art, a loose community of digital artists pushing their experiments into cyberspace. Net artists exploited digital artifacts to create interactive works of art. For instance, Olia Lialina created visual narratives that used hypertext to glue together animated panels and prose. The collective Jodi.org, on the other hand, made a website that looked like complete gibberish, hiding its true content in the source code of the page itself.

These were the extreme examples. But they helped create a version of the web that felt unrefined. Web work, therefore, was handed to newcomers and subordinates to figure out.

And so the web came to be defined by a class of people who were willing to experiment. It was twenty-somethings fresh out of college, in Silicon Valley, Silicon Alley, and everywhere in between, who wrote the very first rules of web design. Some, like Levy and the team at Grey, pulled from their graphic design roots. Others tried something completely new.

There was no canvas, only the blaring white screen of a blank code editor. There was no guide, only bits of data streaming around the world.

But not for long.


In January of 1996, two web design books were published. The first was called Designing for the Web, by Jennifer Robbins, one of the original designers on the GNN team. Robbins had compiled months of notes about web design into a how-to guide for newbies. The second, Designing Web Graphics, was written by Lynda Weinman, by then already the owner of the eponymous web tutorial site Lynda.com. Weinman brought her experience in film and animation to her practical guide to the web, fusing abstract thoughts on a new medium with concrete tips for new designers.

At the time, there were technical manuals and code guides, but few publications truly dedicated to design. Robbins and Weinman provided a much needed foundation.

Six months later, a third book was published: Creating Killer Websites, written by Dave Siegel. It was a very different kind of book. It began with a thesis: the newest generation of websites, what Siegel referred to as third-generation sites, needed to guide visitors through their experiences. They needed to be interactive, familiar, and engaging. Achieving that level of interactivity, Siegel argued, required more than what the web platform could provide. What follows from this thesis is a book of programming hacks, ways to use HTML for things it wasn’t strictly made for. Siegel popularized techniques that would soon become a de facto standard: using HTML tables and spacer GIFs to create advanced layouts, and using images to display heading fonts and visual backgrounds.

The publishing cadence of 1996 makes a good case study for the state and future of web design. The themes and messages of the books illustrate two points very well.

The first is the maturity of web design as a practice. The books published at the beginning of the year drew on their authors’ earlier careers — Robbins’s time as a print designer, Weinman’s work in animation — to help contextualize and codify the emerging field of web design. Six months later, that codification was already being expanded and made repeatable by writers like Siegel.

The second point it illustrates is a tension that was beginning to form. In the next few years, designers would begin to hone their craft. The basic layouts and structures of a page would become standardized. New best practices would be catalogued in dozens of new books. Web design would become a more mature practice, an industry all of its own.

But browsers were imperfect and HTML was limited. Coding the intricate designs of Word or Superbad required a bit of creative thinking. Alongside the growing sophistication of the web design field came a string of techniques and tools aimed at working around browser limitations. These would cause problems later, but in the moment, they gave designers freedom. The history of web design is interwoven with this push and pull between freedom and constraint.


In March of 1995, Netscape introduced a new feature to version 1.1 of Netscape Navigator. It was called server push and it could be used to stream data back and forth between a server and a browser, updated dynamically. Its most common use was thought to be real-time data without refreshes, like a moving stock ticker or an updating news widget. But it could also be used for animation.

On the day that server push was released, there were two websites that used it. The first was the Netscape homepage. The second was a site with a single, animated bouncing blue dot, which gave the site its name: TheBlueDot.com.

TheBlueDot.com

The animation, and the site, were created by Craig Kanarick, who had worked long into the night the day before Netscape’s update release to have it ready for Day One. Designer Clay Shirky would later describe the first time he saw Kanarick’s animation: “We were sitting around looking at it and were just […] up until that point, in our minds, we had been absolutely cock of the walk. We knew of no one else who was doing design as well as Agency. The Blue Dot came up, and we wanted to hate it, but we looked at it and said, ‘Wow, this is really good.’”

Kanarick would soon be elevated from a disciple of Silicon Alley to a dot-com legend. Along with his childhood friend Jeff Dachis, Kanarick created Razorfish, one of the earliest examples of a digital agency. Some of the web’s most influential early designers would begin their careers at Razorfish. As more sites came online, clients would come to Razorfish for fresh takes on design. The agency responded with a distinct style and mindset that permeated through all of their projects.

Jonathan Nelson, on the other hand, had only a vague idea for a nightclub when he moved to San Francisco. Nelson worked with a high school friend, Jonathan Steuer, on a way to fuse an online community with a brick-and-mortar club. They were soon joined by Brian Behlendorf, a recent Berkeley grad who brought a mailing list of San Francisco rave-goers, and unique experience with a still very new and untested World Wide Web.

Steuer’s day job was at Wired. He got Nelson and Behlendorf jobs there, working on the digital infrastructure of the magazine, while they worked out their idea for their club. By the time the idea for HotWired began to circulate, Behlendorf had earned himself a promotion. He worked as chief engineer on the project, directly under Steuer.

Nelson was getting restless. The nightclub idea was ill-defined and getting no traction. The web was beginning to pass him by, and he wanted to be part of it. Nelson was joined by his brother and programmer Cliff Skolnick to create an agency of their own. One that would build websites for money. Behlendorf agreed to join as well, splitting his time between HotWired and this new company.

Nelson leased an office one floor above Wired and the newly formed Organic Online began to try and recruit their first clients.

When HotWired eventually launched, it had sold advertising to half a dozen partners. Advertisers were willing to pay a few bucks to have proximity to the brand of cool that HotWired was peddling. None of them, however, had websites. HotWired needed people to build the ads that would be displayed on their site, but they also needed to build the actual websites the ads would link to. For the ads, they used Razorfish. For the brand microsites, they used Organic Online. And suddenly, there were web design experts.


Within the next few years, the practice of web design would go through radical changes. The amateurs and upstarts who had built the web with their fresh perspective and newcomer instincts would soon consolidate into formal enterprises. They created agencies like Organic and Razorfish, but also Agency.com, Modem Media, CKS, Site Specific, and dozens of others. These agencies had little influence on the advertising industry as a whole, at least initially. Even CKS, maybe the most popular agency in Silicon Valley, earned, as one writer noted, the equivalent of “in one year what Madison Avenue’s best-known ad slingers collect in just seven days.”

On the other end, the web design community was soon filled by freelancers and smaller agencies. The multi-million dollar dot-com contracts might have gone to the trendy digital agencies, but there were plenty of businesses that needed a website for a lot less.

These needs were met by a cottage industry of designers, developers, web hosts, and strategists. Many of them collected web experience the same way Kanarick and Levy and Nelson and Behlendorf had — on their own and through trial and error. But ad-hoc experimentation could only go so far. It didn’t make sense for each designer to have to re-learn web design. Shortcuts and techniques were shared. Rules were written. And web design trod on more familiar territory.

The Blue Dot launched in 1995. That’s the same year that Word and the Batman Forever sites launched. They were joined that same year by Amazon and eBay, a realization of the commercial potential of the web. By the end of the year, more traditional corporations planted their flag on the web. Websites for Disney and Apple and Coca Cola were followed by hundreds and then thousands of brands and businesses from around the world.

Levy had the freedom to design her pages with an idiosyncratic brush. She used the language of the web to communicate meaning and reinforce her magazine’s editorial style. New websites, however, had a different value proposition. In most cases, they were there for customers. To sell something, sometimes directly or sometimes indirectly through marketing and advertising. In either case, they needed a website that was clear. Simple. Familiar. To accommodate the needs of business, commerce, and marketing online, the web design industry turned to recognizable techniques.

Starting in 1996, design practice somewhat standardized around common features. The primary elements on a page — the navigation and header — smoothed out from site to site. The stylistic flourishes in layout, color, and use of images from the early web were replaced by best practices and common structure. Designers drew on the work of one another and began to create repeatable patterns. The result was a web that, though less visually distinct, was easier to navigate. Like signposts alongside a road, the patterns of the web became familiar to those that used it.

In 1997, a couple of years after the launch of Batman Forever, Jeffrey Zeldman created the mailing list (and later website) A List Apart to begin circulating web design tutorials and topics. It was just one of a growing number of resources that rushed to fill the vacuum of knowledge surrounding web design. Web design tutorials blanketed the proto-blogosphere of mailing lists and websites. A near limitless hypertext library of techniques and tips and code examples was available to anyone who looked hard enough for it. Through that blanket distribution of expertise came new web design methodologies.

Writing a decade after the launch of A List Apart, in 2007, Zeldman defined web design as “the creation of digital environments that facilitate and encourage human activity; reflect or adapt to individual voices and content; and change gracefully over time while always retaining their identity.” Zeldman here advocates for merging a familiar interface with brand identity to create predictable, but still stylized, experiences. It’s a shift in thinking from the website as an expression of its creator’s aesthetic, to a utility centered on the user.

This philosophical shift was balanced by a technical one. The makers of the two largest browsers, Microsoft and Netscape, vied for market control. They often introduced new capabilities — customizations to colors or backgrounds or fonts or layouts unique to a single browser. That made it hard for designers to create websites that looked the same in both browsers. Designers were forced to resort to fragile code (one could never be too sure if it would work the same the next day), or to turn to tools to smooth out these differences.

Visual editors, Microsoft FrontPage and Macromedia Dreamweaver and a few others, were the first to try to right the ship of design. They gave designers a way to create websites without any code at all. Websites could be built with just the movement of a mouse. In the same way you might use a paintbrush or a drawing tool in Photoshop or MS Paint, one could drag and drop a website into being. The process even got an acronym: WYSIWYG, or “What You See Is What You Get.”

The web, a dynamic medium in its best incarnation, required more frequent updates than designers were sometimes able to deliver. Writers wanted greater control over the content of their sites, but they were often forced to call the site administrator to make updates. Developers worked out a way to separate the content from how it was output to the screen and store it in a separate database. This led to the development of the first Content Management Systems, or CMS. Using a CMS, an editor or writer could log into a special section of their website and use simple form fields to update the content of the site. There were even rudimentary WYSIWYG tools baked right in.

Without the CMS, the web would never have been able to keep pace with the blogging revolution or the democratization of publishing that was somewhat borne out in the following decade. But database-rendered content and WYSIWYG editors introduced uniformity out of necessity. There were only so many options that could be given to designers. Content in a CMS was inserted into pre-fabricated layouts and templates. Visual editors focused on delivering the most useful and common patterns designers used in their websites.


In 1998, PBS Online unveiled a brand new version of its website. At the center of it all was a new section, “TeacherSource”: a repository of supplemental materials custom-made for educators to use in their classrooms. In the time since PBS first launched its website three years earlier, it had created a thriving online destination — especially for kids and educators. The site had tens of thousands of pages worth of content. Two million visitors streamed through it each day. It had won at the newly created Webby Awards two years in a row. TeacherSource was simply the latest in a long list of digital-only content that enhanced PBS’s other media offerings.

The PBS TeacherSource website

Before they began working on TeacherSource, PBS had run focus groups with teachers to understand where to put their attention. The teachers were asked about the site’s design and content. They didn’t comment much about the way that images were being used, or the creative use of layouts, or the designer’s choice of colors. The number one complaint that PBS heard was that it was hard to find things. The menu was confusing, and there was no place to search.

This latest version of PBS had a renewed design, with special attention given to its navigation. In an announcement about the site’s redesign, Cindy Johanson referred to the design’s more understandable navigation menu and in-site search as a “new front door and lots of side doors.”

It’s a useful metaphor; one that designers would often return to. However, it also doubles as a unique indicator of where web design was headed. The visual design of the page was beginning to recede into the background in favor of clarity and understanding.

The more refined — and predictable — practice of design benefited the most important part of a website: the visitor. The surfing habits of web users were becoming more varied. There were simply more websites to browse. A common language, common designs, helped make it easier for visitors to orient themselves as they bounced from one site to the next. What the web lost in visual flourish it gained back in usability. By the next major change in design, this would go by the name User Experience. But not before one final burst of creative expression.


The second version of MONOcrafts.com, launched in 1998, was a revelation. A muted palette and plain photography belied a deeper construction and design. As you navigated the site, its elements danced on the page, text folding out from the side to reveal more information, pages transitioning smoothly from one to the next. One writer described the site as “orderly and monochromatic, geometric and spare. But present, too, is a strikingly lyrical component.”

The MONOcrafts website

There was the slightest bit of friction to the experience, where the menu would move away from your mouse or you would need to wait for a transition to complete before moving from one page to the next. It was a website that was meditative, precise, and technically complex. A website that, for all its splendor, contained little more than a description of its purpose and a brief biography of its creator, Yugo Nakamura.

Nakamura studied civil engineering and architecture at Tokyo University and began his career as a civil engineer. After working several years in the field, he found himself drawn to the screen. The physical world posed too many limitations. He would later state, “I found the simple fact that every experience was determined by the relationship between me and my surroundings, and I realised that I wanted to design the form of that relationship abstractly. That’s why I got into the web.” Drawing on the influences of notable web artists, Nakamura began to create elaborately designed websites under the moniker yugop, both for hire and as a personal passion.

yugop became famous for his expertise in a tool that gave him the freedom of composition and interactivity that had been denied to him in real-world engineering. A tool called Flash.

Flash had three separate lives before it entered the web design community. It began as software created for the pen computing market, a doomed venture which failed before it even got off the ground. From there, it was adapted to the screen as a drawing tool, and finally transformed, in 1996, into a keyframe animation package known as FutureSplash Animator. The software was paired with a new file format and embeddable player, a quirk of the software that would affirm its later success.

Through a combination of good fortune and careful planning, the FutureSplash player was added to browsers. The software’s creator, Jonathan Gay, first turned to Netscape Navigator, adapting the browser’s new plugin architecture to add widespread support for his file format player. A stroke of luck came when Microsoft’s web portal, MSN, had a need to embed streaming videos on its site, a feature for which the FutureSplash player was well suited. To make sure it could be viewed by everyone, Microsoft baked the player directly into Internet Explorer. Within the span of a few months, FutureSplash went from just another animation tool to a ubiquitous file format playable in 99% of web browsers. By the end of 1996, Macromedia purchased FutureSplash Animator and rebranded it as Flash.

Flash was an animation tool. De facto support in major browsers made it adaptable enough to be a web design tool as well. Designers learned how to recreate the functionality of websites inside of Flash. Rather than relegating the Flash player to a tiny corner of a webpage, some practitioners expanded the player to fill the whole screen, creating the very first Flash websites. Before long, Flash had captivated the web design community. Resources and techniques sprung up to meet the demand. Designers new to the web were met with tutorials and guides on how to build their websites in Flash.

The appeal to designers was its visual interface: drag and drop drawing tools that could be used to create animated navigation, transitions, and audiovisual interactivity the web couldn’t support natively. Web design practitioners had been looking for that level of precision and control since HTML tables were introduced. Flash made it not only possible but, compared to HTML, nearly effortless. With your mouse and your imagination — and very little, if any, code — you could produce sophisticated designs.

Even amid the saturation that the new Flash community would soon produce, MONOcrafts stood out. Its use of Flash was playful, but with a definitive structure and flow.

Flash 4 had been released just before Nakamura began working on his site. It included a new scripting language known as ActionScript, which gave designers a way to programmatically add new interactive elements to the page. Nakamura used ActionScript, combined with the other capabilities of Flash, to create elements that would soon be seen on every website (and now feel like ancient relics of a forgotten past).

MONOcrafts was the first time that many web designers saw an animated intro bring them into a site. In the hands of yugop and other Flash experts, it was an elegant (and importantly, brief) introduction to the style and tone of a website. Before long, intros would become interminable, pervasive, and bothersome. So much so that designers would frequently add a “Skip Intro” button to the bottom of their sites. Clicking that button as soon as it appeared became almost a reflex for users of the late-90’s, Flash-dominated web.

Nakamura also made sophisticated use of audio, something made possible by ActionScript. Digitally compressed tones and clicks gave the site a natural feel, bringing users directly into the experience. Before long, sounds would be everywhere, music playing in the background wherever you went. After that, audio elements would become an all but extinct design practice.

And MONOcrafts used transitions, animations, and navigation that truly made it shine. Nakamura, and other Flash experts, created new approaches to transitions and animations, carefully handled and deliberately placed, that would be retooled by designers in thousands of incarnations.

Designers turned to Flash, in part, because they had no other choice. They were the collateral damage of the so-called “Browser Wars” being played out by Netscape and Microsoft. Inconsistent implementations of web technologies like HTML and CSS made them difficult tools to rely on. Flash offered consistency.

This was matched by a rise in demand from commercial clients. Companies with commercial or marketing needs wanted a way to stand out. In the era of Flash design, even e-commerce shopping carts zoomed across the page, animated as if in a video game. But the (sometimes excessive) embellishment was the point. There were many designers who felt they were being boxed in by the new rules of design. The outsiders who created the field of web design had graduated to senior positions at the agencies that they had often founded. Some left the industry altogether. They were replaced by a new freshman class as eager to define a new medium as the last. Many of these designers turned to Flash as their creative outlet.

The results were punchy designs applied to the largest brands. “In contrast to the web’s modern, business-like aesthetic, there is something bizarre, almost sentimental, about billion-dollar multinationals producing websites in line with Flash’s worst excess: long loading times, gaudy cartoonish graphics, intrusive sound and incomprehensible purpose,” notes writer Will Bedingfield. For some, Flash design represented the summit of possibility for the web, its full potential realized. For others, it was a gaudy nuisance. Its influence, however, is unquestionable.

Following the rise of Flash in the late 90’s and early 2000’s, the web would see a reset of sorts, one that came back to the foundational web technologies that it began with.


In April of 2000, as a new millennium was solidifying the stakes of the information age, John Allsopp wrote a post for A List Apart entitled “A Dao of Web Design.” It was written at the end of the first era of web design, and at the beginning of a new transformation of the web from a stylistic artifact of its print predecessors to a truly distinct design medium. “What I sense is a real tension between the web as we know it, and the web as it would be. It’s the tension between an existing medium, the printed page, and its child, the web,” Allsopp wrote. “And it’s time to really understand the relationship between the parent and the child, and to let the child go its own way in the world.”

In the post, Allsopp draws on the teachings of Daoism to sketch out ideas around a fluid and flexible web. Designers, for too long, had attempted to assert control over the web medium. It is why they turned to HTML hacks, and later, to Flash. But the web’s fluidity is also its strength, and when embraced, it opens up the possibilities for new designs.

Allsopp dedicates the second half of the post to outlining several techniques that can aid designers in embracing this new medium. In so doing, he set the stage for concepts that would be essential to web design over the next decade. He talks about accessibility, web standards, and the separation of content and appearance. Five years before the article was written, those concepts were whispered by a barely known community. Ten years earlier, they didn’t even exist. It’s a great illustration of just how far things had come in such a short time.

Allsopp puts a fine point on the struggle and tension that had defined the web for the past decade, even as he looked to the future. From this tension, however, came a new practice entirely: the practice of web design.

Enjoy learning about web history with stories just like this? Jay is telling the full story of the web, with new stories every 2 weeks. Sign up for his newsletter to catch up on the latest… of what’s past.


