r/graphic_design - How to vectorize from a Procreate image


I am a beginning graphic designer with experience in Procreate (iPad). I created a horse drawing for a client who then requested a vector format. I have tried following tutorials using Ai, Capture, Vectornator, etc., but the final vector always shows something similar to what's seen in the screengrab: either there are white holes between colors, or brown bleeds onto the other colors. I am not trained enough to know what is causing this. Any advice would be appreciated! Is it something with the original Procreate drawing (.png format)? Or some setting in Ai/Capture?



Deploying a Simple Golang Web App on Heroku


The Go programming language, often referred to as “Golang,” has gained a lot of well-deserved traction in the DevOps community. Many of the most popular tools such as Docker, Kubernetes, and Terraform are written in Go, but it can also be a great choice for building web applications and APIs. 

Go provides you with all the speed and performance of a compiled language, but feels like coding in an interpreted language. This comes down to the great tooling that you get out of the box with a compiler, runner, test suite, and code formatter provided by default. Add to that the robust and easily comprehensible approach to concurrency, to take maximum advantage of today’s multi-core or multi-CPU execution environments, and it’s clear why the Go community has grown so rapidly.

Go feels like it was created with specific goals in mind, leading to a deceptively simple language design that’s straightforward to learn, without compromising on capabilities.

In this post, I’m going to show you how easy it is to develop a simple web application in Go, package it as a lightweight Docker image, and deploy it to Heroku. I’ll also show a fairly new feature of Go: built-in package management.

Go Modules

I’m going to use Go’s built-in module support for this article.

Go 1.0 was released in March 2012. Up until version 1.11 (released in August 2018), developing Go applications involved managing a GOPATH for each “workspace,” analogous to Java’s JAVA_HOME, and all of your Go source code and any third-party libraries were stored below the GOPATH.

I always found this a bit off-putting, compared to developing code in languages like Ruby or JavaScript where I could have a simpler directory structure isolating each project. In both of those languages, a single file (Gemfile for Ruby, package.json for JavaScript) lists all the external libraries, and the package manager keeps track of managing and installing dependencies for me.

I’m not saying you can’t manage the GOPATH environment variable to isolate projects from one another. I just find the package manager approach easier.

Thankfully, Go now has excellent package management built in, so this is no longer a problem. However, you might find GOPATH mentioned in many older blog posts and articles, which can be a little confusing.

Hello, World!

Let’s get started on our web application. As usual, this is going to be a very simple “Hello, World!” app, because I want to focus on the development and deployment process, and keep this article to a reasonable length.


You’ll need: Go installed (version 1.11 or later, for module support), Git, Docker, and a free Heroku account with the Heroku CLI installed.

Go mod init

To create our new project, we need to create a directory for it, and use the go mod init command to initialize it as a Go module.

It’s common practice to use your GitHub username to keep your project names globally unique, and avoid name conflicts with any of your project dependencies, but you can use any name you like.
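For example, with a placeholder username and project name (substitute your own):

```shell
mkdir helloworld
cd helloworld
go mod init github.com/myuser/helloworld
```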

You’ll see a go.mod file in the directory now. This is where Go will track any project dependencies. If you look at the contents of the file, they should look something like this:
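With a placeholder module path like github.com/myuser/helloworld (yours will differ, as will the Go version line, which reflects your installed toolchain):

```
module github.com/myuser/helloworld

go 1.15
```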

Let’s start committing our changes:
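That is, initialize a repository and make the first commit (the commit message is up to you):

```shell
git init
git add go.mod
git commit -m "Initial commit"
```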


We’re going to use Gin for our web application. Gin is a lightweight web framework, similar to Sinatra for Ruby, Express.js for JavaScript, or Flask for Python.

Create a file called hello.go containing this code:
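Here’s a minimal version, consistent with the walkthrough that follows (your exact code may differ):

```go
package main

import (
	"net/http"

	"github.com/gin-gonic/gin"
)

func main() {
	// Create a router with Gin's default middleware (logger and recovery)
	r := gin.Default()

	// Return "Hello, World!" with a 200 status for GET /hello
	r.GET("/hello", func(c *gin.Context) {
		c.String(http.StatusOK, "Hello, World!")
	})

	// Start the server on port 3000
	r.Run(":3000")
}
```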

Let’s break this down a little:

This creates a router object, r, using the built-in defaults that come with Gin.

Then, we assign a handler function to be called for any HTTP GET requests to the path /hello, and to return the string “Hello, World!” and a 200 (HTTP OK) status code:

Finally, we start our web server and tell it to listen on port 3000:

To run this code, execute:
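```shell
go run hello.go
```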

You should see output like this:

Now if you visit http://localhost:3000/hello in your web browser, you should see the message “Hello, World!”

Notice that we didn’t have to install Gin separately, or even edit our go.mod file to declare it as a dependency. Go figures that out and makes the necessary changes for us, which is what’s happening when we see these lines in the output:

If you look at the go.mod file, you’ll see it now contains this:
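Something like the following, with Gin pinned to whatever version was current when you ran the app (the module path and version shown here are only illustrative):

```
module github.com/myuser/helloworld

go 1.15

require github.com/gin-gonic/gin v1.6.3
```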

You will also see a go.sum file now. This is a text file containing references to the specific versions of all the package dependencies, and their dependencies, along with a cryptographic hash of the contents of that version of the relevant module.

The go.sum file serves a similar function to package-lock.json for a JavaScript project, or Gemfile.lock in a Ruby project, and you should always check it into version control along with your source code.

Let’s do that now:
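For example (the commit message is just a suggestion):

```shell
git add go.mod go.sum hello.go
git commit -m "Add Gin and a hello endpoint"
```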

Serving HTML and JSON

I’m not going very far into what you can build with Gin, but I do want to demonstrate a little more of its functionality. In particular, sending JSON responses and serving static files.

Let’s look at JSON responses first. Add the following code to your hello.go file, right after the r.GET block:
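A sketch of that route group, using Gin’s Group and JSON helpers (the exact response payload is up to you; a classic ping/pong message is used here):

```go
// Group routes under /api, then answer GET /api/ping with JSON
api := r.Group("/api")
api.GET("/ping", func(c *gin.Context) {
	c.JSON(http.StatusOK, gin.H{"message": "pong"})
})
```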

Here we’re creating a “group” of routes behind the path /api with a single path /ping which will return a JSON response.

With this code in place, run the server with go run and then hit the new API endpoint:
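```shell
curl http://localhost:3000/api/ping
```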

You should get the response:
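Assuming the /api/ping handler returns a ping/pong payload:

```json
{"message":"pong"}
```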

Finally, let’s make our web server serve static files. Gin has an additional library for this.

Change the import block at the top of the hello.go file to this:
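The static file middleware lives in the gin-contrib/static package, so the imports become:

```go
import (
	"net/http"

	"github.com/gin-contrib/static"
	"github.com/gin-gonic/gin"
)
```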

The most popular code editors have Golang support packages you can install which will take care of the import declarations for you automatically, updating them for you whenever you use a new module in your code.

Then, add this line inside the main function:
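```go
// Serve files from ./views at the site root; "true" enables index.html
r.Use(static.Serve("/", static.LocalFile("./views", true)))
```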

The full code for our web application now looks like this:
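Putting the pieces together, it should look something like this:

```go
package main

import (
	"net/http"

	"github.com/gin-contrib/static"
	"github.com/gin-gonic/gin"
)

func main() {
	r := gin.Default()

	// Serve static files from the views directory
	r.Use(static.Serve("/", static.LocalFile("./views", true)))

	// Plain-text endpoint
	r.GET("/hello", func(c *gin.Context) {
		c.String(http.StatusOK, "Hello, World!")
	})

	// JSON API endpoint
	api := r.Group("/api")
	api.GET("/ping", func(c *gin.Context) {
		c.JSON(http.StatusOK, gin.H{"message": "pong"})
	})

	r.Run(":3000")
}
```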


The r.Use(static.Serve... line enables our web server to serve any static files from the views directory, so let’s add a few:
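For example, a simple views/index.html (the contents here are purely illustrative):

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <link rel="stylesheet" href="styles.css">
  <title>Hello</title>
</head>
<body>
  <h1>Hello, World!</h1>
</body>
</html>
```

plus a views/styles.css with whatever rules you like, so there’s some styling to see.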



Now restart the web server using go run hello.go and visit http://localhost:3000 and you should see the styled message:


We’ve written our Go web application; now let’s package it up as a Docker image. We could deploy it using a Heroku buildpack, but one of the nice features of Go is that you can distribute your software as a single binary file. This is an area where Go really shines, and using a Docker-based Heroku deployment lets us take advantage of that. Also, this technique isn’t limited to Go applications: you can use Docker-based deployment to Heroku for projects in any language. So, it’s a good technique to understand.

So far, we’ve been running our code with the go run command. To compile it into a single, executable binary, we simply run:
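```shell
go build
```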

This will compile all our Go source code and create a single file. By default, the output file will be named according to the module name, so in our case it will be called helloworld.

We can run this:
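```shell
./helloworld
```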

And we can hit the same HTTP endpoints as before, either with curl or our web browser.

The static files are not compiled into the binary, so if you put the helloworld file in a different directory, it will not find the views directory to serve our HTML and CSS content.

That’s all we need to do to create a binary for whatever platform we’re developing on (in my case, my Mac laptop). However, to run inside a Docker container (for eventual deployment to Heroku) we need to compile a binary for whatever architecture our Docker container will run on.

I’m going to use Alpine Linux, so let’s build our binary on that OS. Create a Dockerfile with the following content:
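A minimal sketch (the base image tag and paths are up to you):

```dockerfile
FROM golang:alpine
WORKDIR /app
# Copy the module files and source code, then compile
COPY . .
RUN go build
CMD ["/app/helloworld"]
```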

In this image, we start with the golang base image, add our source code, and run go build to create our helloworld binary.

We can build our Docker image like this:
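```shell
docker build -t helloworld .
```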

Don’t forget the trailing . at the end of that command. It tells Docker we want to use the current directory as the build context.

This creates a Docker image with our helloworld binary in it, but it also contains all the Go tools needed to compile our code, and we don’t want any of that in our final image for deployment, because it makes the image unnecessarily large. It can also be a security risk to install unnecessary executables on your Docker images.

We can see the size of our Docker image like this:
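```shell
docker images helloworld
```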

For comparison, the alpine image (a lightweight Linux distribution, often used as a base for Docker images) is much smaller:
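After a `docker pull alpine`, you can compare:

```shell
docker images alpine
```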

On my Mac, the helloworld binary is around 14MB, so the Golang image is much bigger than it needs to be.

What we want to do is use this Dockerfile to build our helloworld binary to run on Alpine Linux, then copy the compiled binary into an Alpine base image, without all the extra Golang tools.

We can do this using a “multistage” Docker build. Change the Dockerfile to look like this:
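A sketch of the multistage version (stage names and paths are illustrative):

```dockerfile
# Stage 1: compile the binary using the full Go toolchain
FROM golang:alpine AS builder
WORKDIR /app
COPY . .
RUN go build

# Stage 2: start fresh from a small base image
FROM alpine
WORKDIR /app
# Copy only the compiled binary (and the static files it serves)
COPY --from=builder /app/helloworld .
COPY views/ ./views/
CMD ["/app/helloworld"]
```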

On the first line, we label our initial Docker image AS builder.

Later, we switch to a different base image FROM alpine and then copy the helloworld binary from our builder image like this:
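Assuming the builder stage compiled the binary into /app, that copy step is:

```dockerfile
COPY --from=builder /app/helloworld .
```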

Build the new Docker image:
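```shell
docker build -t helloworld .
```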

Now, it’s the size you would expect for a base Alpine image plus our helloworld binary:

We can run our web server from the Docker image like this. (If you have another version running using go run hello.go or ./helloworld, you’ll need to stop that one first, to free up port 3000.)

docker run --rm -p 3000:3000 helloworld

The dockerized webserver should behave just like the go run hello.go and ./helloworld versions except that it has its own copies of the static files. So, if you change any of the files in views/ you won’t see the changes until you rebuild the Docker image and restart the container.

Deploy to Heroku

Now that we have our dockerized web application, let’s deploy it to Heroku. Heroku is a PaaS provider that makes it simple to deploy and host an application. You can set up and deploy your application through the Heroku UI, or through the Heroku CLI. For this example, we’ll use the Heroku command-line application.

Getting PORT From an Environment Variable

We’ve hard-coded our web server to run on port 3000, but that won’t work on Heroku. Instead, we need to alter it to run on whichever port number is specified in the PORT environment variable, which Heroku will supply automatically.

To do this, alter the r.Run line near the bottom of our hello.go file, and remove the ":3000" string value so the line becomes:
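```go
r.Run()
```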

The default behavior of Gin is to run on whatever port is in the PORT environment variable (or port 8080 if nothing is specified). This is exactly the behavior Heroku needs.

Setting up Our Heroku app

First, log into Heroku:
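```shell
heroku login
```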

Now, create an app:
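With no arguments, Heroku will generate an app name for you:

```shell
heroku create
```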

Tell Heroku we want to build this project using a Dockerfile, rather than a buildpack:
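```shell
heroku stack:set container
```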

To do this, we also need to create a heroku.yml file like this:
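```yaml
build:
  docker:
    web: Dockerfile
```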

The heroku.yml file is a manifest that defines our app and allows us to specify add-ons and config vars to use during app provisioning. 

Next, use git to add and commit these files, then push to Heroku to deploy:
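For example (the commit message is just a suggestion):

```shell
git add Dockerfile heroku.yml hello.go
git commit -m "Add Docker-based Heroku deployment"
git push heroku main
```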

My git configuration uses main as the default branch. If your default branch is called master, then run git push heroku master instead.

You should see Heroku building the image from your Dockerfile, and pushing it to the Heroku Docker registry. Once the command completes, you can view the deployed application in your browser by running:
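```shell
heroku open
```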


To recap, here’s a summary of what we covered today:

  • Creating a Golang web application, using Go modules and the Gin web framework to serve strings, JSON, and static files.
  • Using a multistage Dockerfile to create a lightweight Docker image.
  • Deploying a Docker-based application to Heroku using heroku stack:set container and a heroku.yml file.

I’ve only scratched the surface in this article on how to use Go to build web applications and APIs. I hope this gives you enough to get started on your own Golang web applications.


r/graphic_design - Logo Designs


Hey! I’m rebranding the business I launched in a panic last year. I can’t afford a graphic designer and I’ve never done this kind of work (I’m using Photoshop and Illustrator, and YouTube), so I’m sure I have a lot to learn. I’m not afraid of critique; I’d love to hear it!!

For sake of time (and ease) I downloaded some templates off creative market, because starting from a blank page was not working.

Brand Identity: I have a wellness business that focuses on redefining self care; away from a commodity driven practice and toward a practice of deepening self awareness through heart based, energetic practices. I use tools like astrology and tarot and I was trying to bring those elements into this design.

The heart is designed to look like its expanding, the border/layout is meant to evoke a playing card/tarot card kind of feeling.

As I said above, any advice is welcome, or a vote on your favorite would be so helpful. I’m not super attached to any of the designs, but I like the direction it is heading (away from a blank page 😉).

Thanks for your time!

EDIT: I know none of these are perfect; they all need to be centered and the arrangements need edits. But I’m trying to decide which idea to move forward with, and I’d like opinions on typography, overall design, placement of objects, etc.



r/webdev - A FOSS, Clean and Quick Way to Learn Tables (Drilling/Flash Cards)


My little sister had difficulty learning multiplication tables, and we realised that drilling (asking multiple questions rapidly, and moving on quickly) was giving the fastest results. Most other websites or apps that do this are cluttered with ads, so I thought I would build a clean, no-bullshit app that does this.

It’s still a work in progress, and the settings don’t work yet. I was thinking of adding accounts so you can track your progress. Let me know of other suggestions. Hope you guys find it useful 🙂

App: https://www.tabill.tk

Repo: https://github.com/d4mr/tabill



Netlify Edge Handlers | CSS-Tricks


Netlify Edge Handlers are in Early Access (you can request it), but they are super cool and I think they are worth wrapping your brain around now. I think they change the nature of what Jamstack is and can be.

You know about CDNs. They are global. They host assets geographically close to people so that websites are faster. Netlify does this for everything. The more you can put on a CDN, the better. Jamstack promotes the concept that assets, as well as pre-rendered content, should be on a global CDN. Speed is a major benefit of that.

The mental math with Jamstack and CDNs has traditionally gone like this: I’m making tradeoffs. I’m doing more at build time, rather than at render time, because I want to be on that global CDN for the speed. But in doing so, I’m losing a bit of the dynamic power of using a server. Or, I’m still doing dynamic things, but I’m doing them at render time on the client because I have to.

That math is changing. What Edge Handlers are saying is: you don’t have to make that trade off. You can do dynamic server-y things and stay on the global CDN. Here’s an example.

  1. You have an area of your site at /blog and you’d like it to return recent blog posts which are in a cloud database somewhere. This Edge Handler only needs to run at /blog, so you configure the Edge Handler only to run at that URL.
  2. You write the code to fetch those posts in a JavaScript file and put it at: /edge-handlers/getBlogPosts.js.
  3. Now, when you build and deploy, that code will run — only at that URL — and do its job.
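During Early Access, that per-URL configuration lived in netlify.toml; the syntax may have changed since, but it looked roughly like this (path and handler names taken from the example above):

```toml
[[edge_handlers]]
path = "/blog"
handler = "getBlogPosts"
```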

So what kind of JavaScript are you writing? It’s pretty focused. I’d think 95% of the time you’re outright replacing the original response. Like, maybe the HTML for /blog on your site is literally this:

<!DOCTYPE html>
<html lang="en">
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Test a Netlify Edge Function</title>
  <div id="blog-posts"></div>

With an Edge Handler, it’s not particularly difficult to get that original response, make the cloud data call, and replace the guts with blog posts.

export function onRequest(event) {
  event.replaceResponse(async () => {
    // Get the original response HTML
    const originalRequest = await fetch(event.request);
    const originalBody = await originalRequest.text();

    // Get the data
    // (illustrative endpoint; the response shape used below matches
    // a WordPress REST API posts response)
    const cloudRequest = await fetch(
      "https://css-tricks.com/wp-json/wp/v2/posts"
    );
    const data = await cloudRequest.json();

    // Replace the empty div with content
    // Maybe you could use Cheerio or something for more robustness
    const manipulatedResponse = originalBody.replace(
      `<div id="blog-posts"></div>`,
      `<div id="blog-posts">
        <a href="https://css-tricks.com/${data[0].link}">${data[0].title.rendered}</a>
      </div>`
    );

    let response = new Response(manipulatedResponse, {
      headers: {
        "content-type": "text/html",
      },
      status: 200,
    });

    return response;
  });
}

(I’m hitting this site’s REST API as an example cloud data store.)

It’s a lot like a client-side fetch, except instead of manipulating the DOM after request for some data, this is happening before the response even makes it to the browser for the very first time. It’s code running on the CDN itself (“the edge”).

So, this must be slower than pre-rendered CDN content then, because it needs to make an additional network request before responding, right? Well, there is some overhead, but it’s faster than you probably think. The network request is happening on the network itself, so smokin’ fast computers on smokin’ fast networks. Likely, it’ll be a few milliseconds. They are only allowed 50ms of execution time anyway.

I was able to get this all up and running on my account that was granted access. It’s very nice that you can test them locally with:

netlify dev --trafficMesh

…which worked great both in development and deployed.

Anything you console.log() you’ll be able to see in the Netlify dashboard as well:

Here’s a repo with my functioning edge handler.


On Type Patterns and Style Guides


Over the last six years or so, I’ve been using these things I’ve been calling “type patterns” in my web design work, and they’ve worked out pretty well for me. I’ll dig into what they are and how they can make their way into CSS, and maybe the idea will click with you too and help with your day-to-day typography needs.

If you’ve used print design desktop software like QuarkXPress, Adobe InDesign, or CorelDraw, then imagine this idea is an HTML/CSS translation of “paragraph styles.”

When designing a book (that spans hundreds of pages), you might want to change something about the heading typography across the whole book (on the fly). You would define how a certain piece of typography behaves in one central place to be applied across the entire project (a book, in our example). You need control of the patterns.

Most programs use this naming style, but their user interfaces vary a little.

When you pull up the pane, there’s usually a “base” paragraph style that all default text belongs to. From there, you can create as many as you want. Paragraph styles are for “block” level-like elements, and character styles are for “inline” level-like elements, such as bold or unique spans.

The user interface specifics shouldn’t matter — but you can see that there are a lot of controls to define how this text behaves visually. Under the hood, it’s just key: value pairs, which is just like CSS property: value pairs:

h1 {
  font-family: "Helvetica Neue", sans-serif; 
  font-size: 20px;
  font-weight: bold;
  color: fuchsia;
}

Once defined, the styles can be applied to any piece of text. The little + (next to the paragraph style name below) in this case, means that the style definition has changed.

If you want to apply those changes to everything with that paragraph style, then you can “redefine” the style and it will apply project-wide.

When I say it like that, it might make you think: that’s what a CSS class is.

But things are a little more complicated for a website. You never know what size your website will be displayed at (it could be small, like on a mobile device, or giant, like on a desktop monitor, or even on a monochrome tablet, who knows), so we need to be contextual with those classes and have them change based on their context.

Styles change as the context changes.

The bare minimum of typographic control

In your very earliest days as a developer, you may have been introduced to semantic HTML, like this:

<h1>Here's some HTML stuff. I'm a heading level 1</h1>
<p>And some more. I'm a paragraph.</p>

<h2>This is a heading level 2</h2>
<p>And some more paragraph stuff.</p>

And that pairs with CSS that targets those elements and applies styles, like this:

h1 {
  font-size: 50px; /* key: value pairs */
  color: #ff0066;
}

h2 {
  font-size: 32px;
  color: rgba(0,0,0,.8);
}

p {
  font-size: 16px;
  color: deepskyblue;
  line-height: 1.5;
}
This works!

You can write rules that target the headers and style them in descending order, so they are biggest > big > medium, and so on.

Headers also already have some styles that we accept as normal, thanks to User Agent styles (i.e. default styles applied to HTML by the browser itself). They are meant to be helpful. They add things like font-weight and margin to the headers, as well as collapsing margins. That way — without any CSS at all — we can rest assured we get at least some basic styling in place to establish a hierarchy. This is beginner-friendly, fallback-friendly… and a good thing!

As you build more complex sites, things get more complicated

You add more pages. More modules. More components. Things start to get more complex. You might start out by adding unique classes and styles for every single little thing, but it’ll catch up to you.

First, you start having classes for special situations:

<h1 class="page-title">
  Be <span class='super-ultra-magic-rainbow'>excellent</span> to each other
</h1>

<p class="special-mantra">Party on, <em>dudes</em>.</p>

<p>And yeah. I'm just a regular paragraph.</p>

Then, you start having classes everywhere (most CSS methodologies even encourage this):

<header class="site-header">
  <h1 class="page-title">
    Be <span class='ultra-magic-rainbow'>excellent</span> to each other
  </h1>
</header>

<main class="page-content">
  <section class="welcome">
    <h2 class="special-mantra">Party on <em>dudes</em></h2>

    <p class="regular-paragraph">And yeah. I'm just regular</p>
  </section>
</main>

Newcomers will try and work around the default font sizes and collapsing margins if they don’t feel confident resetting them.

This is where people start trying out margin-top: -20px… and then stuff gets whacky. As you keep writing more rules to wrangle things in, it will feel like you are “fixing” things instead of declaring how you actually want them to work. It can quickly feel like you are “fighting” the CSS cascade when you’re unaware of the browser’s User Agent styles.

A real-world example

Imagine you’re at your real job and your boss (or the visual designer) gives you this “pixel perfect” Adobe Photoshop document. There are a bunch of colors, layout, and typography.

You open up Photoshop and start to poke around, but there are so many pages and so many different type styles that you’ll need to take inventory and gather what styles can be combined or used in combination.

Would you believe that there are 12 different sizes of type on this page? There’s possibly even more if you also take the line-height into consideration.

It feels great to finish a visual layout and hand it off. However, I can’t tell you how many full days I’ve spent trying to figure out what the heck is happening in a Photoshop document. For example, sometimes small screens aren’t taken into consideration at all; and when they are, the patterns you find aren’t always shared by each group as they change for different screen types. Some fonts start at 16px and go up to 18px, while others go up to 19px and become italic. How can you spot context changes in a static mockup?

Sometimes this is with fervent purpose; other times the visual designer is just going on feel and is happy to round things up and down to create reusable patterns. You’ll have to talk to them about it. But this article is advocating that we talk about it much earlier in the process.

You might get a style guide to reference. But even that might not be specific enough to identify contexts.

Let’s zoom in on one of those guidelines:

We get more content direction than we do behavior of the content in different contexts.

You may even get a formal, but generic, style guide with no pixel sizes or notes on varying screen sizes at all!

Don’t get me wrong: this sort of thing is definitely a nice thought, and it might even be useful for others, like in some client meeting or something. But, as far as front-end development goes, it’s more confusing than helpful. I’ve received very thorough style guides that looked nice and gave lots of excellent guidelines for things like font sizes, but were completely mismatched with the accompanying Photoshop document. On the other end of the spectrum, there are style guides that have unholy amounts of specifics for every type of heading and combination you could ever imagine — and more.

It’s hard to parse this stuff, even with the best of intentions!

Early in your development career, you’d probably assume it’s your job to “figure it out” and get to work, writing down all the pixels and trying your best to make sense of it. Go getem!

But, as you start coding all the specifics, things can get a little overwhelming with the amount of duplication going on. Just look at all the repeated properties going on here:

.blog article p {
  font-family: 'Georgia', serif;
  font-size: 17px;
  line-height: 1.4;
  letter-spacing: 0.02em;
  margin-bottom: 10px;
}

.welcome .main-message {
  font-family: 'Georgia', serif;
  font-size: 17px;
  line-height: 1.4;
  letter-spacing: 0.02em;
  margin-bottom: 20px;
}

@media (min-width: 700px) {
  .welcome .main-message {
    font-size: 18px;
  }
}

.welcome .other-thing {
  font-family: 'Georgia', serif;
  font-size: 17px;
  line-height: 1.4;
  letter-spacing: 0.02em;
  margin-bottom: 20px;
}

.site-footer .link-list a {
  font-family: 'Georgia', serif;
  font-size: 17px;
  line-height: 1.4;
  letter-spacing: 0.02em;
  margin-bottom: 20px;
}

You might take the common declarations and apply them to the body instead. In smaller projects, that might even be a good way to go. There are ways to use the cascade to your advantage, and others that seem to tie too many things together. Just like in an Object Oriented programming language, you don’t necessarily want everything inheriting everything.

body {
  font-family: 'Georgia', serif;
  font-size: 17px;
  line-height: 1.4;
  letter-spacing: 0.02em;
}

Things will work out OK. Most of the web was built like this. We’re just looking for something even better.

Dealing with design revisions

One day, there will be a meeting. In that meeting, you’ll find out that the client and the visual designer decided to change a bunch of the typography. Now you need to go back and change it in 293 places in the CSS file. If you get paid by the hour, that might be great!

As you begin to adjust the rules, things will start to conflict. That rule that worked for two things now only works for the one. Or you’ll notice patterns that could now be used in many more places than before. You may even be tempted to just totally delete the CSS and start over! Yikes!

I won’t write it out here, but you’ll try a bunch of different things over time, and people usually come to the conclusion that you can create a class — and add it to the element instead of duplicating rules/declarations for every element. You’ll go even further to try and pull patterns out of the visual designer’s documents. (You might even round a few 19px down to 18px without telling them…)

.standard-text { /* or something */
  font-family: serif;
  font-size: 16px; /* px: up for debate */
  line-height: 1.4; /* no unit: so it's relative to the font-size */
  letter-spacing: 0.02em; /* em: so it's relative to the font-size */
}

.heading-1 {
  font-family: sans-serif;
  font-size: 30px;
  line-height: 1.5;
  letter-spacing: 0.03em;
}

.medium-heading {
  font-family: sans-serif;
  font-size: 24px;
  line-height: 1.3;
  letter-spacing: 0.04em;
}

Then you’d apply the class to anything that needs it.

<header class="site-header">
  <h1 class="page-title heading-1">
    Be <mark>excellent</mark> to each other
  </h1>
</header>

<main class="page-content">
  <section class="welcome">
    <h2 class="medium-heading">Party on <em>dudes</em></h2>

    <p class="standard-text">And yeah. I'm just regular</p>
  </section>
</main>

This way of doing things can be really helpful for teams that have people of all skill levels changing the HTML. You can plug and play with these CSS classes to get the style you want, even if you’re the new intern.

It’s really helpful to separate the idea of “layout” elements (structure/parents) and the idea of “modules” or “components.” In this way, we’re treating the pieces of text as lower level components.

The key is to keep the typography separate from the layout. You don’t want all .medium-heading elements to have the same margins or colors. It’s going to depend on where they are. This way you are styling based on context. You aren’t ‘using’ the cascade necessarily, but you also aren’t fighting it because you keep the techniques separate.

.site-header {
  padding: 20px 0;
}

.welcome .medium-heading { /* the context — not the type-pattern */
  margin-bottom: 10px;
}

This is keeping things reasonable and tidy. This technique is used all over the web.

Working with a CMS

Great, but what about situations where you can’t change the HTML?

You might just be typing up a quick CodePen or a business-card site. In that case, these concerns are going to seem like overkill. On the other hand, you might be working with a CMS where you aren’t sure what is going to happen. You might need to plan for anything and everything that could get thrown at you. In those cases, you’re unable to simply add classes to individual elements. You’re likely to get a dump of HTML from some templating language.

<?php echo getContent()?>

So, if you can’t work with the HTML what can you do?

<article class="post cms-blog-dump">
  <h1>Talking type-patterns on CSS-tricks</h1>
  <p>Introduction paragraph - and we'd like to style this with a slightly different size font than the next (normal) paragraphs</p>
  <h2>Some headings</h2>
  <h2>And maybe someone accidentally puts 2 headings in a row</h2>
  <ul>
    <li>and some <strong>list</strong></li>
    <li>and here</li>
  </ul>

  <p>Or if a blog post is too boring - then think of a list of bands on an event site. You never know how many there will be or which ones are headlining, so you have to write rules that will handle whatever happens.</p>
</article>

You don’t have any control over this markup, so you won’t be able to add classes, meaning that the cool plug-and-play classes you made aren’t going to work! You might just copy and paste them into some larger .article { } class that defines the rules for a kitchen sink. That could work.

What other tools are there to play with?


Mixins

If you could create some reusable concept of a “type pattern” with Sass, then you could apply those in a similar way to how the classes work.

@mixin my-useful-rule { /* define the mixin */
  background-color: blue;
  color: lime;
}

.thing {
  @include my-useful-rule(); /* employ the mixin */
}

/* This compiles to: */
.thing {
  background-color: blue;
  color: lime;
}
/* (just so that we're on the same page) */

Less, Sass, Stylus and other CSS preprocessors all have their own syntax for this. I’m going to use Sass/SCSS in these examples because it is the most common at the time of writing.

@mixin standard-type() { /* define the mixin */
  font-family: Serif;
  font-size: 16px;
  line-height: 1.4;
  letter-spacing: 0.02em;
}

.context .thing {
  @include standard-type(); /* employ it in context */
}

You can use heading-1() and heading-2() and a lot of big-name style guides do this. But what if you want to use those type styles on something that isn’t a “heading”? I personally don’t end up connecting the headings with “size” and I use the patterns in all sorts of different places. Sometimes my headings are “mean” and “stout.” They can be piercing red and uppercase with the same x-height as the paragraph text.

Instead, I define the visual styles in association with how I want the “voice” of the content to come across. This also helps the team discuss “tone” and other content strategy stuff across disciplines.

For example, in Jason Santa Maria’s book, On Web Typography, he talks about “type for a moment” and “type to live with.” There’s type to grab your attention and break up the page, and then those paragraphs to settle into. Instead of .standard-text or .normal-font, I’ve been enjoying the idea of organizing styles by voice. This is all that type that a user should spend time consuming. It’s what I’d likely use for most paragraphs and lists, and I won’t set it on the body.

@mixin calm-voice() { /* define the mixin */
  font-family: Serif;
  font-size: 16px;
  line-height: 1.4;
  letter-spacing: 0.02em;
}

@mixin loud-voice() {
  font-family: Sans-Serif;
  font-size: 30px;
  line-height: 1.5;
  letter-spacing: 0.03em;
}

@mixin attention-voice() {
  font-family: Sans-Serif;
  font-size: 24px;
  line-height: 1.3;
  letter-spacing: 0.04em;
}

This idea of “voices” helps me keep things meaningful because it requires you to name things by the context. The name heading-1b, for example, doesn’t help me connect the content to any sort of storytelling or to the other people on the team.

Now to style the mystery article. I’ll be using SCSS syntax here:

article {
  padding: 20px 0;

  h1 {
    @include loud-voice();
    margin-bottom: 20px;
  }

  h2 {
    @include attention-voice();
    margin-bottom: 20px;
  }

  p {
    @include calm-voice();
    margin-bottom: 10px;
  }
}

Pretty, right?

But it’s not that easy, is it? No. It’s a little more complicated because you don’t know what might be above or below each other — or what might be left out, because articles aren’t always structured or organized the same. Those CMS authors can put whatever they want in there! Three <h3> elements in a row? You have to prepare for lots of possible outcomes.

/* Styles */
article {
  padding: 20px 0;

  h1 {
    @include loud-voice();
  }

  h2 {
    @include attention-voice();
  }

  p {
    @include calm-voice();

    &:first-of-type {
      background: lightgreen;
      padding: 1em;
    }
  }

  ol {
    @include styled-ordered-list();
  }

  * + * {
    margin-top: 1em;
  }
}
To see the regular CSS you can always “View Compiled” in CodePen.

Some people are really happy with the lobotomized owl approach (* + *) but I usually end up writing explicit rules for “any <h2> that comes after a paragraph” and getting really detailed. After all, it’s the written content that everyone wants to read… and I really only need to dial it in one time in one place.

/* Slightly more filled out pattern */
@mixin calm-voice() {
  font-family: serif;
  font-size: 16px;
  line-height: 1.4;
  letter-spacing: 0.02em;
  max-width: 80ch;

  strong {
    font-weight: 700;
  }

  em {
    font-style: italic;
  }

  mark {
    background-color: wheat;
  }

  sup {
    /* maybe? */
  }

  a {
    text-decoration: underline;
    color: $highlight;
  }

  @media (min-width: 600px) {
    font-size: 17px;
  }
}

It’s nice to think about the “ideal” workflow. What could browsers implement that would make this fun and play well with their future systems?

Here’s an example of the most stripped down preprocessor syntax (this is Stylus, where braces and semicolons are optional):

calm-voice()
  font-family: serif
  font-size: 16px
  line-height: 1.4
  letter-spacing: 0.02em


I’ll be honest… I love Stylus. I love writing it, and I love using it for examples. It keeps people on their toes. It’s so fast to type in CodePen! If you already have your own little framework of styles like this, it’s insane how fast you can build UI. But! The tooling has fallen behind and at this point, I can’t recommend that anyone use it.

I only add it here because it’s important to dream. We have what we have, but what if you could invent a new way of writing it? That’s a crucial part of the conversation too. If you can’t describe what you want, you won’t get it.

We’re here: Type Patterns

Can you see where all of this is going?

You can use these “patterns” or “mixins” or whatever you want to call them and plug and play. It’s fun. And you can totally combine it with the utility class idea too (if you must).

.calm-voice {
  @include calm-voice();
}

<p class="calm-voice">Welcome to this code snippet!</p>

Style guides

If you can start to share a common language and break down the barriers between “creatives” and “coders,” then everyone can work with these type patterns in mind from the start.

Sometimes you can simply publish a style guide as a “brand” subdomain or directly on the site, like at /style-guide. There are tons of these floating around on the web. The point is that some style guides are standalone, and others are built into the site itself. Wherever they go, they are considered “live” and they allow you to develop things in one place that take effect globally, and use the guide itself as a sort of artifact.

By building live style guides with type patterns as a core and shared concept, everyone will have more fun and save time trying to figure out what each other means. It’s not just for big companies either.

Just be mindful when looking at style guides for other organizations. Style guides serve different purposes depending on who is using them and how, so simply diving into someone else’s work could actually contribute more confusion.

Known unknowns

Sometimes you don’t know what the content will be. That means CMS stuff, but also logic. What if you have a list of bands and events and you are building a module full of conditional components? There might be one band… or five… or two co-headliners. The event might be cancelled!

When I was trying to work out some templating ideas for Ticketfly, I separated the concerns of layout and type patterns.

Variable sized font patterns

Some patterns change sizes at various breakpoints.

@mixin attention-voice() {
  font-family: Serif;
  font-size: 20px;
  line-height: 1.4;
  letter-spacing: 0.02em;

  @media (min-width: 700px) {
    font-size: 24px;
  }

  @media (min-width: 1100px) {
    font-size: 30px;
  }
}
I used to do something like this and it had a few yucky side effects. For example, what if you plug and play based on the breakpoints and there are sizes that conflict or slip through?

clamp() and vmin units to the rescue!

@mixin attention-voice() {
  font-family: Serif;
  font-size: clamp(18px, 10vmin, 100px);
  line-height: 1.4;
  letter-spacing: 0.02em;
}

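To get a feel for what `clamp(18px, 10vmin, 100px)` resolves to at various viewport sizes, the arithmetic can be checked with a few lines of plain Python (a sketch of the CSS math only, not a rendering engine; the helper names are made up):

```python
def css_clamp(minimum, preferred, maximum):
    """Mirror CSS clamp(): the preferred value, bounded below and above."""
    return max(minimum, min(preferred, maximum))

def resolved_font_size(vw, vh):
    """Resolve clamp(18px, 10vmin, 100px) for a viewport of vw x vh pixels."""
    vmin_px = min(vw, vh)  # 1vmin = 1% of the smaller viewport side
    return css_clamp(18, 0.10 * vmin_px, 100)

for viewport in [(320, 568), (768, 1024), (1440, 900), (2560, 1440)]:
    print(viewport, resolved_font_size(*viewport))
```

On a small phone the preferred 10vmin value (32px at a 320px-wide screen) already beats the 18px floor, and only on very large screens does the 100px ceiling take over, so the size scales smoothly between the two bounds.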
Now, in theory, you could even jump between the patterns with no rub.

.context {
  @include calm-voice();

  @media (min-width: 840px) {
    @include some-other-voice();
  }
}
But now you have to make sure that they both have the same properties and that they don’t conflict with or bleed through to others! And think about the new variable font options too. Sometimes you’ll want your heading to be a little less heavy on smaller screens or a little taller to work with the space.

Aren’t you duplicating style rules all over the place?

Yikes! You caught me. I care way more about making the authoring and maintaining process pleasurable than I care about CSS byte size. I’m conflicted though. I get it. But I also think that the solution should happen somewhere else in the pipeline. There’s a reason that we don’t write machine code by hand.

Sometimes you have to examine a programming pattern closely and really poke at it, trying it in every place you can to prove how useful it is. Humor me for a moment and look at how you’d use this type pattern and other mixins to style a common “card” interface.

In a way, type patterns are like the utility class style of Bootstrap or Tailwind. But these are human readable. The patterns are added in the CSS instead of the HTML. The concept is the same. I like to think that anyone could look at the living style guide and jump into a component and style it. What do you think?

It is creating more CSS though. Those kilobytes are stacking up. But I think we should work towards something ideal instead of just “possible.” We can probably build something that works this out at build time. I’ll have to learn more about the CSSOM and there are many new things coming down the pipeline that could make this possible without a preprocessor.

It’s bigger than just the CSS. It’s about the people.

Having a set of patterns for type in your project allows visual designers to focus on their magic. More and more, we are building fast and in the browser. Visual designers focus on feel and typography and color with simple frameworks, like Style Tiles. Then developers can organize the data, resource structures and layouts, and everyone can work at the same time. We can have clearer communication and shared understanding of the entire process. We are all UX designers.

When living style guides are created as a team, there’s a lot less need for pixel-pushing. Visual designers can actually spend more time thinking and trying ideas in prototypes, and less time mocking out unnecessary production art. This allows you to work on your brand as a whole and have one single source of truth. It even helps with really small sites, like this.

Gold Collective style guide

InDesign and Illustrator have always had “paragraph styles” and “character styles” for this reason, but they don’t take into account variable screen sizes.

Toss in a few padding type sizes/ratios, some colors and some line widths. It doesn’t have to really be “pixel perfect” but more of a collection of patterns that work together. Colors as variables and maybe some $thick, $thin, or $pad*2 type conventions can help streamline design patterns.

You can find your inspiration in the graphics program, then jump straight to the live style guide. People of all backgrounds can start playing with the styles in a CodePen and dial them in across devices.

In the end, you’ll decide the details on real devices — together, as a team.


r/graphic_design - How to increase the resolution of a logo?


I am working with a logo, and the client was only able to provide me with a fairly low-res file. I am trying to find a way to increase the resolution but am having trouble.

I’ve found a few tutorials explaining how to do this in either Illustrator or Photoshop, but the main issue seems to be that the logo I am working with has some really skinny fonts, and the methods they are teaching always have some kind of error with that section of the logo. I have attached a different logo that looks somewhat similar to the one I am working with (would rather not provide the actual one for privacy reasons) but the real logo is fairly similar and has both shapes and skinny text.

Does anyone know a good process for doing this or would be able to point me towards any good tutorials? Thanks!




What Is an API Gateway?

In this article, we are going to use Zato in its capacity as a multi-protocol Python API gateway – we will integrate a few popular technologies, accepting requests sent over protocols commonly used in frontend systems, enriching and passing them to backend systems, and returning responses to the API clients using their preferred data formats. But first, let’s define what an API gateway is.

Clearing up the Terminology

Although we will be focusing on complex API integrations later, to understand the term API gateway we first need to give proper consideration to the very term gateway.

What comes to mind when we hear the word “gateway,” and what is indeed correct etymologically, is an opening in an otherwise impassable barrier. We use a gateway to access that which is in other circumstances inaccessible for various reasons. We use it to leave such a place too.

In fact, both “gate” and the verb “to go” stem from the same basic root, and that, again, brings to mind a notion of passing through space specifically set aside for the purpose of granting access to what normally would be unavailable. And, once more, when we depart from such an area, we use a gateway too.

From the perspective of its true intended purpose, a gateway letting everyone in and out as they are would amount to little more than a hole in a wall. In other words, a gateway without a gate is not the whole story.

Yes, there is undoubtedly an immense aesthetic gratification to be drawn from being close to marvels of architecture that virtually all medieval or Renaissance gates and gateways represent, but we know that, contemporarily, they do not function to the fullest of their capacities as originally intended.

Rather, we can intuitively say that a gateway is in service as a means of entry and departure if it lets its operators achieve the following, though not necessarily all at the same time, depending on one’s particular needs:

  • Telling arrivals where they are, including projection of might and self-confidence.
  • Confirming that arrivals are who they say they are.
  • Checking if their port of origin is friendly or not.
  • Checking if they are allowed to enter that which is protected.
  • Directing them to specific areas behind the gateway.
  • Keeping a long term and short term log of arrivals.
  • Answering potential questions right by the gate, if answers are known to gatekeepers.
  • Cooperating with translators and coordinators that let arrivals make use of what is required during their stay.

We can now recognize that a gateway operates on the border of what is internal and external and in itself, it is a relatively narrow, though possibly deep, piece of architecture. It is narrow because it is only through the gateway that entry is possible but it may be deeper or not, depending on how much it should offer to arrivals.

We also keep in mind that there may very well be more than a single gateway in existence at a time, each potentially dedicated to different purposes, some overlapping, some not.

Finally, it is crucial to remember that gateways are structural, architectural elements – what a gateway should do and how it should do it is a decision left to architects.

With all of that in mind, it is easy to transfer our understanding of what a physical gateway is into what an API one should be.

  • API clients should be presented with clear information that they are entering a restricted area.
  • Source IP addresses or their equivalents should be checked and requests rejected if an IP address or equivalent information is not among the allowed ones.
  • Usernames, passwords, API keys, and similar representations of what they are should be checked by the gateway.
  • Permissions to access backend systems should be checked, as not every API client should have access to everything.
  • Requests should be dispatched to relevant backend systems.
  • Requests and responses should be logged in various formats, some meant to be read by programs and applications, some by human operators.
  • If applicable, responses can be served from the gateway’s cache, taking the burden off the shoulders of the backend systems.
  • Requests and responses can be transformed or enriched which potentially means contacting multiple backend systems before an API caller receives a response.

We can now define an API gateway as an element of a systems architecture that is certainly related to security, permissions, and granting or rejecting access to backend systems, applications, and data sources. On top of it, it may provide audit, data transformation, and caching services. The definition will be always fluid to a degree, depending on an architect’s vision, but this is what can be expected from it nevertheless.
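The checks described above can be condensed into a small sketch. All of the IP addresses, keys, and backend names below are invented for illustration; the point is only the order in which a gateway applies its checks:

```python
ALLOWED_IPS = {"10.0.0.5"}                        # hypothetical allow-list
API_KEYS = {"secret-key-1": "client-a"}           # hypothetical credentials
PERMISSIONS = {"client-a": {"billing", "users"}}  # per-client backend access

def handle_request(source_ip, api_key, backend, payload):
    """Run a request through the gateway checks, in order."""
    # 1. Check whether the port of origin is friendly
    if source_ip not in ALLOWED_IPS:
        return {"status": 403, "error": "IP not allowed"}
    # 2. Confirm the caller is who they say they are
    client = API_KEYS.get(api_key)
    if client is None:
        return {"status": 401, "error": "unknown API key"}
    # 3. Check permission for the requested backend system
    if backend not in PERMISSIONS.get(client, set()):
        return {"status": 403, "error": "no access to backend"}
    # 4. Log the arrival, then dispatch to the backend (stubbed out here)
    print(f"{client} -> {backend}: {payload}")
    return {"status": 200, "backend": backend}
```

Caching, enrichment, and transformation would slot in around step 4, but the narrow entry point stays the same.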

Having defined what an API gateway is, let’s create one in Zato and Python.

Clients and Backend Systems

In this article, we will integrate two frontend systems and one backend application. The frontend ones will use REST and WebSockets, whereas the backend one will use AMQP. Zato will act as an API gateway between them all.

Zato as a Python API gateway architecture.

Not granting frontend API clients direct access to backend systems is usually a good idea because the dynamics involved in the creation of systems on either side are typically very different. But, they still need to communicate and hence the usage of Zato as an API gateway.

Python Code

First, let’s show the Python code that is needed to integrate the systems into our architecture.

# -*- coding: utf-8 -*-

# Zato
from zato.server.service import Service

class APIGateway(Service):
    """ Dispatches requests to backend systems, enriching them along the way.
    """
    name = 'api.gateway'

    def handle(self):

        # Enrich incoming request with metadata ..
        self.request.payload['_receiver'] = self.name
        self.request.payload['_correlation_id'] = self.cid
        self.request.payload['_date_received'] = self.time.utcnow()

        # .. AMQP configuration ..
        outconn = 'My Backend'
        exchange = '/incoming'
        routing_key = 'api'

        # .. publish the message to an AMQP broker ..
        self.out.amqp.send(self.request.payload, outconn, exchange, routing_key)

        # .. and return a response to our API client.
        self.response.payload = {'result': 'OK, data accepted'}

There are a couple of points of interest including:

  • The gateway service enriches incoming requests with metadata but it could very well enrich it with business data too, e.g. it could communicate with yet another system to obtain required information and only then pass the request to the final backend system(s).
  • In its current form, we send all the information to AMQP brokers only but we could just as well send it to other systems, possibly modifying the requests along the way.
  • The code is very abstract and all of its current configurations could be moved to a config file, Redis, or another data source to make it even more high-level.
  • Security configuration and other details are not declared directly in the body of the gateway service but they need to exist somewhere – we will describe it in the next section.
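To illustrate the third point, the hard-coded AMQP details could be moved out of the service body, e.g. into an INI-style file read with Python's standard configparser (the section and key names below are invented for the example):

```python
import configparser

# In a real deployment this text would live in a file, e.g. gateway.conf
CONFIG_TEXT = """
[amqp]
outconn = My Backend
exchange = /incoming
routing_key = api
"""

config = configparser.ConfigParser()
config.read_string(CONFIG_TEXT)

# The service would look these up instead of hard-coding them
amqp = config["amqp"]
print(amqp["outconn"], amqp["exchange"], amqp["routing_key"])
```

The same values could equally come from Redis or any other data source; the service code stays unchanged either way.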


Channels

In Zato, API clients access the platform’s services using channels. Let’s create a channel for REST and WebSockets then.

First, REST:

REST screenshot.

Now, WebSockets:

Websocket screenshot.

We create a new outgoing AMQP connection in a similar way:

Outgoing AMQP connection screenshot.

Using the API Gateway

At this point, the gateway is ready; you can invoke it from REST or WebSockets, and any JSON data it receives will be processed by the gateway service, the AMQP broker will receive it, and API clients will have replies from the gateway as JSON responses.

Let’s use curl to invoke the REST channel with JSON payload on input.

  $ curl http://api:<password-here>@localhost:11223/api/v1/user --data-binary @request.json ; echo
  {"result": "OK, data accepted"}

Taken together, the channels and the service allowed us to achieve the following:

  • Multiple API clients can access the backend AMQP systems, each client using its own preferred technology.
  • Client credentials are checked on input before the service starts to process requests (authentication).
  • It is possible to assign RBAC roles to clients, in this way ensuring they have access only to selected parts of the backend API (authorization).
  • Message logs keep track of data incoming and outgoing.
  • Responses from channels can be cached which lessens the burden put on the shoulders of backend systems.
  • Services accepting requests are free to modify, enrich, and transform the data in any way required by business logic. E.g., in the code above we only add metadata but we could as well reach out to other applications before requests are sent to the intended recipients.

We can take it further. For instance, the gateway service is currently completely oblivious to the actual content of the requests.

But, since we just have a regular Python dict in self.request.payload, we can with no effort modify the service to dispatch requests to different backend systems, depending on what the request contains, or possibly what other backend systems decide the destination should be.
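As a toy illustration of that idea (the real logic is environment-specific), the dispatch decision could hinge on a single field of the payload. The routing table, field name, and connection names below are all hypothetical:

```python
# Map a field of the incoming payload to an outgoing AMQP connection name
ROUTES = {
    "invoice": "Billing Backend",  # hypothetical connection names
    "signup": "CRM Backend",
}
DEFAULT_ROUTE = "My Backend"

def pick_outconn(payload):
    """Choose the backend based on the request's 'type' field."""
    return ROUTES.get(payload.get("type"), DEFAULT_ROUTE)

print(pick_outconn({"type": "invoice", "amount": 100}))  # Billing Backend
print(pick_outconn({"user": "alice"}))                   # My Backend
```

The gateway service would simply call such a function before publishing, instead of using a fixed connection name.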

Such additional logic is specific to each environment or project which is why it is not shown here, and this is also why we end the article at this point, but the central part of it all is already done. The rest is only a matter of customization and plugging in more channels for API clients or outgoing connections for backend systems.

Finally, it is perfectly fine to split access to systems among multiple gateways; each may handle requests from selected technologies on the one hand, but on the other hand, each may use different caching or rate-limiting policies. If there is more than one, it may be easier to configure such details on a per-gateway basis.

Learn More

If you are interested in building scalable and reusable API systems, you can start now by visiting the Zato main page, familiarizing yourself with the extensive documentation, or by going straight to the first part of the tutorial.

Be sure to visit our Twitter, GitHub, and Gitter communities too!


What Is Application Modernization? - DZone Web Dev


The world is undergoing significant digitization today. Almost every business, irrespective of its industry, is digitizing its entire business model by shifting it to apps. Multiple sectors are undergoing digital transformation to efficiently handle their businesses and make themselves more accessible to their clients. According to Statista, global spending on digital transformation will reach USD 2.3 trillion by 2023.

Cloud computing plays a considerable role in the digital business transformation happening at such a large scale, especially in the pandemic-hit 2020. It has provided people with cost-effective solutions to manage everything electronically without worrying about the basic infrastructure of their applications and the responsibilities that come with it. According to research, the cloud computing market grew to USD 371.4 billion in 2020 and is expected to reach USD 832.1 billion by 2025.

The statistics show how people are rapidly moving towards modern business solutions with cloud computing. However, there are still many businesses out there working on legacy systems and facing many problems with their functioning and maintenance. Hence, organizations are also looking for ways to modernize their existing applications and software to meet today’s high-tech standards. This is where application modernization steps in.

In 2020, the app modernization market’s value was estimated at USD 11.4 billion, and it is expected to grow to USD 24.8 billion by 2025. But what exactly is app modernization? Why is it so important and why are people opting for it in such high numbers?

In this article, we will discuss app modernization thoroughly by answering the following questions:

  1. What is application modernization?
  2. Why is application modernization important?
  3. What are the benefits of application modernization?
  4. What are the various approaches to application modernization?
  5. What are some of the key technologies for application modernization?
  6. How can you ensure successful application modernization?

Let’s begin by learning the meaning of application modernization.

What Is Application Modernization?

The process of modernizing existing legacy applications is known as application modernization. A legacy application is any outdated application. It still works but is incompatible with the current operating systems, information technology infrastructures, and browsers. Application modernization involves modernizing:

  • The internal architecture of applications
  • The app’s platform infrastructure
  • The app’s existing features

App modernization mainly deals with shifting on-premises applications to a cloud architecture. It also works on bringing legacy applications into modern release patterns such as microservices and DevOps. The old applications can be re-hosted and new features can be added to them. It is essential to choose the right strategies and approaches to modernize outdated applications. Successful application modernization is only possible when the features planned for an application are actually suitable for it.

App modernization offers several benefits, which make it all the more important to modernize applications. Let’s take a look at them.

Why Is Application Modernization Important?

Legacy app modernization is vital because legacy applications are complicated to update and expensive to scale. Also known as monolithic applications, their architecture makes it difficult to add any new features while adding complexity to the scaling process. The components are not independent of each other; hence, to scale a single component, the entire app needs to be scaled. This approach demands a lot of added cost and unnecessary effort.

Benefits of Application Modernization

Along with allowing developers to scale and deploy the app’s components independently, application modernization offers many benefits that make it even more important to perform:

  • Cost-effective
  • Business agility and increased staff productivity
  • Enhanced customer experience
  • New revenue streams
  • Better security
  • Meeting compliances
  • Automation

Let’s discuss them in detail.


Cost-Effective

Maintaining legacy applications is not easy. Along with additional efforts and labor, a lot of money is also invested regularly in order to maintain the applications. The on-site data centers, regular IT maintenance, usage of old systems, and capital expenditures on licensing lead to many added expenses for an organization.

Application modernization helps organizations by reducing operational costs required to update applications. App modernization requires monolithic apps to shift to a cloud infrastructure, eliminating the requirement of on-site data centers. Everything is stored and maintained on the cloud. Hence, the added IT maintenance expenses and licensing costs are also cut down.

Business Agility and Increased Staff Productivity

Legacy systems and monolithic applications have a vast set of disadvantages that slow down business operations and cause employees to spend most of their time dealing with those issues. Organizations tend to lose many clients due to the lack of agility and productivity, which root back to incompetent systems most of the time.

Business agility is an expected outcome when the system is upgraded. Employees perform better with a modern application that is more consistent and reliable. Modernized cloud-based applications take the employees’ minds off the maintenance issues and facilitate remote working capabilities. Hence, more productivity and business agility are achieved.

Enhanced Customer Experience

Good customer experience is a critical factor in determining every business’ success. However, providing a good customer experience while maintaining an old legacy system is very difficult. 

By shifting your applications to a cloud infrastructure, the burden of maintaining old applications and guiding clients through them is lifted off. Businesses can quickly retrieve any records and data they require. Customers can also easily navigate their way around modern applications. Hence, customer relationships and support are also improved along with their experience.

New Revenue Streams

Often, businesses aren’t able to provide new functionalities and services to their clients as their systems aren’t flexible. However, app modernization may allow you to create new services that provide value to your clients due to the system’s flexibility and cost-effectiveness. Hence, new revenue streams can be generated by application modernization.

Better Security

Cybersecurity is a very prevalent concern today, especially with outdated systems. The importance of systems with high-security measures is increasing day by day. However, with legacy applications, cybersecurity concerns grow as the implemented security measures need continuous monitoring and repeated updates.

With application modernization, you can:

  • Use the newest libraries and strategies to tighten security
  • Seamlessly integrate security measures
  • Avail complete advantage of cloud-based security
  • Minimize cybersecurity threats
  • Avoid continuous monitoring
  • Save time as regular updates to security features are automatically done

Hence, application modernization offers much better security as compared to legacy applications.

Meeting Compliances

There are several compliances which every business has to follow, depending on the industry they work in. Not meeting compliances properly can cost a company massive amounts of money. However, with legacy systems, it is very tough and time-consuming to ensure compliances are met as everything has to be done manually.

App modernization can ease the process of meeting compliances as the processes are automated and regular updates ensure that all compliances are seamlessly met. In case of any breaches, it is easier to deal with them if the system is modernized and not monolithic.


Automation

Application modernization usually facilitates automation. Seamless integrations, APIs, meeting compliances, enhanced customer experiences, saving costs, etc., are all possible because of automated procedures. Automation is very costly and tough to achieve in legacy systems.

Now that you understand the benefits of application modernization, let’s look at various approaches that can be adopted to achieve it.

What Are the Various Approaches to Application Modernization?

Multiple approaches can be adopted for the process of application modernization, such as:

  1. The lift and shift approach
  2. Refactoring
  3. The strangler pattern
  4. Replatforming
  5. API integration

Let’s discuss them in detail.

The Lift and Shift Approach

Moving an existing legacy application to a new infrastructure like a public cloud platform is known as the “lift and shift” approach. It is also known as “rehosting,” as the application is being moved to a new infrastructure without any changes to its underlying code or architecture. The approach is not very intensive. However, the suitability of this approach depends on the application being rehosted.


Refactoring

Refactoring, also known as “rewriting” or “restructuring,” is the approach that retools significant parts of the monolithic application’s underlying code. It involves considerable restructuring and rewriting of the existing codebase. The approach is adopted to ensure that the application runs better in a new environment, preferably a cloud infrastructure.

Developers choose this approach if they want to break up the legacy application into smaller pieces, commonly known as microservices. Microservices are used to maximize the cloud infrastructure's benefits. However, microservices may not be fully independent from the perspective of end-to-end delivery. Developers may also use this approach by decoupling modernization paths for individual macroservices and then enabling the strangler pattern.

The Strangler Pattern

The strangler pattern is an approach used to transform a legacy application into microservices incrementally by replacing a specific functionality with a new service.


Replatforming

The replatforming approach acts as the middle ground between the "lift and shift" approach and the "refactoring" approach. Without requiring any significant code changes, replatforming involves complementary updates that allow the monolithic application to benefit from a modern cloud platform. These complementary updates may consist of replacing or modifying the app's backend database.

API Integration

API stands for "application programming interface." APIs suit applications that are difficult to move to cloud infrastructure: by externalizing them through APIs, new applications can access their capabilities. This approach involves exposing an OpenAPI-specified REST interface so that the interface can be discovered and managed. Implementing API integration early in the application modernization lifecycle can save a lot of time, effort, and money that would otherwise be invested in migrating the application to cloud infrastructure.

Now that you know about the different approaches to app modernization, let's look at some of the key technologies required for the application modernization process.

What Are Some of the Key Technologies for Application Modernization?

Some key technologies required for application modernization are:

  • Cloud computing
  • Kubernetes
  • Containers

Let’s discuss them in detail.

Cloud Computing

Cloud computing is the fundamental technology used for application modernization, as the process mainly refers to the shift of a monolithic application to a modern cloud environment. It involves:

  • Public clouds
  • Private clouds
  • Hybrid clouds

Even though the public cloud is an essential part of any modernization strategy, private and hybrid/multicloud strategies are also crucial for security, latency, and architectural reasons. If an organization is not ready to move to a public cloud, the other cloud models can help solve the issues and meet the organization’s requirements.


Containers

Containers are used to package, deploy, and operate applications in a cloud-centric way. Containerization packages an app consistently and in a lightweight manner so it runs reliably across desktop, cloud, or on-premises environments. It offers multiple benefits, such as:

  • Scalability
  • Portability
  • Operational efficiency
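A typical way to containerize an application is a multi-stage Dockerfile like the sketch below. It is illustrative only: the base images and the `./cmd/app` path are assumptions, not details from this article.

```dockerfile
# Stage 1: build the binary in a full Go toolchain image.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/app

# Stage 2: ship only the static binary in a minimal runtime image,
# which keeps the container small and portable.
FROM gcr.io/distroless/static
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

The resulting image runs identically on a developer laptop, on-premises, or in any cloud, which is the portability benefit listed above.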


Kubernetes

Also known as "kube" or "k8s," Kubernetes is an open-source platform for container orchestration. It automates the deployment, management, and scaling of containerized applications.

Containers and Kubernetes have emerged as key enablers of hybrid cloud and application modernization strategies.
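A minimal Kubernetes Deployment manifest shows what "automated deployment, management, and scaling" looks like in practice. The service name, image reference, and port below are illustrative assumptions.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service            # illustrative service name
spec:
  replicas: 3                     # Kubernetes keeps three copies running
  selector:
    matchLabels:
      app: orders-service
  template:
    metadata:
      labels:
        app: orders-service
    spec:
      containers:
        - name: orders-service
          image: registry.example.com/orders-service:1.0  # assumed image
          ports:
            - containerPort: 8080
```

If a container crashes or a node fails, Kubernetes replaces the missing replica automatically, and scaling is a one-line change to `replicas`.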

Now, let’s see how you can ensure successful application modernization.

How Can You Ensure Successful Application Modernization?

Successful app modernization can be ensured by taking the following steps:

  1. Figure out your end goals and KPIs
  2. Analyze thoroughly
  3. Choose a fitting modernization approach
  4. Embrace new technologies
  5. Address technical debt
  6. Strengthen the test automation
  7. Hire the right team

Let’s discuss them in detail.

Figure Out Your End Goals and KPIs

In order to successfully modernize applications, you need to know what you are expecting out of the process. Hence, you must first figure out your end goals and KPIs (key performance indicators). While your end goals define why you want to modernize your apps, the KPIs act as a checklist that helps you ensure that modernization enables you to achieve your goals.

For example, if your end goal is to provide enhanced customer experiences, then your KPIs would include application error rates, application response time, uptime, etc.

Thorough Analysis

You should thoroughly know and analyze your legacy systems and their interdependencies before you modernize them so that you don’t lose any critical data in the process. You must know how your application performs presently so that you can plan out how to modernize and make data-driven decisions accordingly.

A thorough analysis will help you approach application modernization systematically without wasting time and resources. It will also help you identify loopholes you didn't know about and prioritize approaches according to your requirements. You should thoroughly analyze the following:

  • Application infrastructure
  • Application quality and performance
  • The impact of your application’s performance on your business

Choose a Fitting Modernization Approach

Each approach has trade-offs: lift and shift is fast but changes little, refactoring is powerful but expensive and slow, and replatforming sits in between. Based on your analysis, choose the approach (or combination of approaches) that best fits your application's architecture, your budget, and your timeline, rather than applying the same strategy to every application.

Embrace New Technologies

Application modernization gives you the opportunity to embrace new technologies while improving the application’s infrastructure. You can build frameworks that support artificial intelligence (AI), machine learning, robotic process automation (RPA), internet of things (IoT), speech recognition systems, etc.

Address Technical Debt

In simple terms, technical debt is the implied cost of additional rework incurred because you chose the easier, less time-consuming option instead of the ideal approach, which might have taken a little longer. Pressure to ship applications immediately often leads teams to deploy software that isn't ideal.

While proceeding with app modernization, it is important that you address the existing technical debt and don’t carry it forward.

Strengthen Test Automation

Ensuring high-level test automation helps baseline the current functionality. It also supports the regression testing required after modernization and minimizes the risk introduced by changes, while significantly reducing the overall testing effort.
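One common way to "baseline the current functionality" is a characterization test: capture what the legacy code does today, then hold the modernized rewrite to the same behavior. The Go sketch below is illustrative; `NormalizeCustomerName` and its normalization rule are made-up stand-ins for a piece of legacy business logic.

```go
package main

import (
	"fmt"
	"strings"
)

// NormalizeCustomerName stands in for a small piece of legacy business
// logic (the rule itself is a made-up example): collapse whitespace and
// uppercase the name. A characterization test pins down what this code
// does *today*, so a rewritten version can be verified against the same
// baseline after modernization.
func NormalizeCustomerName(s string) string {
	return strings.ToUpper(strings.Join(strings.Fields(s), " "))
}

func main() {
	// Input/output pairs like this one, captured from the current system,
	// become the automated regression suite that de-risks the rewrite.
	fmt.Println(NormalizeCustomerName("  ada   lovelace "))
}
```

Accumulating such input/output pairs before any refactoring begins is what makes the post-modernization regression run meaningful.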

Hire the Right Team

App modernization is a complicated process that requires precision and expertise, and even a tiny mistake can cost you a lot of money. Hence, always ensure that you hire an experienced software development team to take over the process and help you with application modernization.


Conclusion

Application modernization is a complicated yet necessary step toward optimizing your business. Considering all the benefits that application modernization offers, you should modernize your monolithic application as soon as possible.
