Strategy

Measuring Core Web Vitals with Sentry


Chris made a few notes about Core Web Vitals the other day, explaining why measuring these performance metrics is so gosh darn important:

I still think the Google-devised Core Web Vitals are smart. When I first got into caring about performance, it was all: reduce requests! cache things! Make stuff smaller! And while those are all very related to web performance, they are abstractly related. Actual web performance to users are things like how long did I have to wait to see the content on the page? How long until I can actually interact with the page, like type in a form or click a link? Did things obnoxiously jump around while I was trying to do something? That’s why Core Web Vitals are smart: they measure those things.

There are certainly a lot of tools out there that help you measure these extremely important metrics. Chris’ post was timely because at Sentry, where I work, we’re launching our own take on this. My favorite once-a-year-blogger and mentor at Sentry, Dora Chan, explained why using real user data is important when it comes to measuring Web Vitals and how our own product is approaching it:

Google Web Vitals can be viewed using the Google Chrome Lighthouse extension. You may be asking yourself, “Why would I need Sentry to see what these are if I already have Google?” Good question. Google provides synthetic lab data, where you can directly control your environment, device, and network speed. This is good and all, but it only paints part of the picture. Paint. Get it?

Sentry gathers aggregate field data on what your users actually experience when they engage with your application. This includes context on variable network speed, browser, device, region, and so forth. With Web Vitals, we can help you understand what’s happening in the wild, how frequently it’s happening, why, and what else is connected to the behavior.

Sentry breaks down all these transactions into the most important metrics so you can see how your customers are experiencing performance problems. Perhaps 42% of the time a transaction has an input delay of more than 301ms. Sentry would show that problem and how it correlates with other performance issues.

A screenshot of the Sentry interface shows a two-column table with five rows that show the performance metrics in the left column and a brightly colored bar chart illustrating the metric in the right column. The metrics include FP, FCP, LCP, FID and CLS.

I think this is the power of tying Core Web Vitals with user data — or what Dora calls “field data” — because some of our users are experiencing a fast app! They have great wifi! All the wifis! That’s great and all, but there are still users on the other side who put up with a more miserable experience, and having a visual based on actual user data allows us to see which specific actions are slowing things down. This information is what gives us the confidence to hop into the codebase and then fix the problem, but it also helps prioritize these problems in the first place. That’s something we don’t really talk about when it comes to performance.

What’s the most important performance problem with your app right now? This is a trickier question than we might like to admit. Perhaps a First Paint of five seconds isn’t a dealbreaker on the settings page of your app but three seconds on the checkout page is unbearable for the business and customers alike.

So, yeah, performance problems are weird like that; the same result of a metric can mean different things based on the context. And some metrics are more important than others depending on that context.

That’s really why I’m so excited about all these tools. Viewing how users are experiencing an app in real time and then, subsequently, visualizing how the metrics change over time — that’s just magic. Lighthouse scores are fine and dandy, and, don’t get me wrong, they are very useful. They’re just not an extremely accurate measure of how users actually use your app based on your data.

This is where another Sentry feature comes into play. After you’ve signed up and configured everything, head to the Performance section and you’ll see which transactions are getting better over time and which have regressed, or gotten slower:

A screenshot of the Sentry Performance dashboard. There is a dark purple sidebar to the left that acts as the app’s navigation and the Performance link is active. The screen displays two charts side-by-side in separate cards that measure most improved transactions and most regressed transactions. Both are line charts with time on the Y-axis and date on the X-axis. A list of highest and lowest performers is displayed beneath each chart respectively.

Tony Xiao is an engineer at Sentry and he wrote about how he used this feature to investigate a front-end problem. That’s right: we use Sentry to measure our Sentry work (whoa, inception). By looking at the Most Regressed Transactions report, Tony was able to dig into the specific transaction that triggered a negative result and identify the problem right then and there. Here’s how he described it:

To a fault, code is loyal to its author. It’s why communicating with your code is critical. And it’s why trends in performance monitoring are so valuable: they not only help you understand the ups and downs, but they can point you in the right direction.

Anyway, I’m not really trying to sell you on Sentry here. I’m more interested in how the field of front-end development is changing and I think it’s super exciting that all of these tools in the industry are coming together at this moment in time. It feels like our understanding of performance problems is getting better — the language, the tools, the techniques are all evolving and a tide is turning in our industry.

And that’s something to celebrate.




Strategy

How to Convert HTML Tables into Beautiful PDFs


Web apps that contain tables, charts, and graphs often include an option to export the data as a PDF. Have you ever wondered, as a user, what’s going on under the hood when you click that button?

And as a developer, how do you get the PDF output to look professional? Most free PDF exporters online essentially just convert the HTML content into a PDF without doing any extra formatting, which can make the data hard to read. What if you could also add things like page headers and footers, page numbers, or repeating table column headers? Small touches like these can go a long way toward turning an amateur-looking document into an elegant one.

Recently, I explored several solutions for generating PDFs and built this demo app to showcase the results. All of the code is also available here on GitHub. Let’s get started!

Overview of Demo App

Demo app

Our demo app contains a lengthy styled table and four buttons to export the table as a PDF. The app is built with basic HTML, CSS and vanilla JavaScript, but you could easily create the same output using your UI framework or library of choice.

Each export button generates the PDF using a different approach. Viewing from right to left, the first uses the native browser print functionality. The second uses an open-source library called jsPDF. The third uses another open-source library called pdfmake. And finally, the fourth uses a paid service called DocRaptor.

Let’s dig into each solution one by one.

Native Browser Print Functionality

First off, let’s consider exporting the PDF using the browser’s built-in tools. When viewing any web page, you can easily print the page by right-clicking anywhere and then choosing the Print option from the menu. This opens a dialog for you to choose your print settings. But, you don’t actually have to print the document. The dialog also gives you the option to save the document as a PDF, which is what we’ll do. In JavaScript, the window object exposes a print method, so we can write a simple JavaScript function and attach it to one of our buttons like this:
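// a minimal sketch; the button's ID is an assumption
function exportPDF() {
  // window.print() opens the browser's print dialog,
  // where the user can choose "Save as PDF"
  window.print();
}

document.querySelector('#print-button').addEventListener('click', exportPDF);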

Here’s the output from Google’s Chrome browser:

PDF exported using the built-in print functionality and the Chrome browser

I was pleasantly surprised by the output here. Though it isn’t flashy – the content is just in black and white – the main table styles were kept intact. What’s more, each of the seven pages includes the table column headers and footer, which I assume the browser intelligently picks up due to my choice of semantic HTML in building a properly structured table.

However, I don’t love the extra page metadata that the browser includes in the PDF. Near the top, we see the date and HTML page title. At the bottom of the page we have the website from which this was printed as well as the page number.

If my only goal in saving this document is to see the data, then Chrome does a pretty good job. But, the extra lines of text at the top and bottom of the document, while useful, don’t make it look very professional.

The other thing to note is that the browser’s native print functionality is different from browser to browser. What if we printed this same document using the Safari browser?

Here’s the output:

PDF exported using the built-in print functionality and the Safari browser

You’ll notice that the table looks roughly the same, and so does the page header and footer content. However, the table column headers and table footer are not repeated! This is somewhat unhelpful since you’d need to refer back to the first page any time you forgot what data any given column contains. The bottom of the table on the first page is also a little cut off, as the browser tries to squeeze in as much content as it can before creating the next page.

So it seems that the browser output isn’t ideal and can vary depending on what browser the user has chosen.

jsPDF

Let’s next consider an open-source library called jsPDF. This library has been around for at least five years and is consistently downloaded over 200,000 times from NPM each week. It’s safe to say that this is a popular and battle-proven library.

jsPDF is also fairly easy to use. You create a new instance of the jsPDF class, give it a reference to the HTML content you want to export, and then provide any other additional settings like page margin size or the document title.
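A minimal sketch of that flow, using jsPDF v2’s html() method (the element ID, margins, and filename here are assumptions):

import { jsPDF } from 'jspdf';

const doc = new jsPDF({ unit: 'pt', format: 'letter' });
doc.html(document.querySelector('#table'), {
  margin: [20, 20, 20, 20],                 // page margins, in points
  callback: (pdf) => pdf.save('table.pdf'), // fires once rendering completes
});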

Underneath the hood, jsPDF uses a library called html2canvas. As the name implies, html2canvas takes HTML content and turns it into an image stored on an HTML <canvas> element. jsPDF then takes that canvas content and saves it.

Let’s take a look at the output we get using jsPDF:

PDF exported using jsPDF

At first glance, this looks pretty good! The PDF includes our nice blue headers and striped table row background. It doesn’t contain any of the extra page metadata that the browser print method included.

However, notice what happens between page one and two. The table extends all the way down to the bottom of the first page and then just picks right back up at the top of the second page. There is no extra margin applied, and the table text content has the potential to be cut in half, which is actually what happens between pages six and seven.

The PDF also doesn’t include the repeating table column headers or table footer, which was the same problem we saw in Safari’s print functionality.

While jsPDF is a powerful library, it seems like this tool may work best when the exported content can fit on just one page.

pdfmake

Let’s take a look at our second open-source library, pdfmake. With over 300,000 weekly downloads from NPM and a seven-year lifespan, this library is even more popular and more senior than jsPDF.

While building the export functionality for my demo app, the configuration for pdfmake was considerably harder than it was for jsPDF. The reason for this is that pdfmake constructs the PDF document from scratch using data you provide it rather than converting existing HTML content on the page into a PDF. That means that rather than providing pdfmake with a reference to my HTML table, I had to provide it data for the header, footer, content, and layout of my PDF table. This led to a lot of duplication in my code; I first wrote the table in my HTML and then re-built the table for the PDF export with pdfmake.

The code looks like this:
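(A trimmed sketch; the header text, rows, and filename are stand-ins for the real table data.)

const docDefinition = {
  // repeated page header and footer
  header: { text: 'Country Data', alignment: 'center', margin: [0, 20, 0, 0] },
  footer: (currentPage, pageCount) => ({
    text: `Page ${currentPage} of ${pageCount}`,
    alignment: 'center',
  }),
  content: [{
    table: {
      headerRows: 1, // repeat the column headers on every page
      body: [
        ['Year', 'Population'],  // header row
        ['2000', '282 million'], // ...one array per data row
      ],
    },
  }],
};

pdfMake.createPdf(docDefinition).download('table.pdf');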

Before we dismiss pdfmake entirely, let’s take a look at the output:

PDF exported using pdfmake

Not too shabby! We get to include styles for our table, so we can still reproduce the blue column headers and striped table row backgrounds. We also get the repeating table column headers to make it easy to keep track of what data we’re seeing in each column on each page.

pdfmake also allowed me to include a page header and page footer, so it’s easy to add page numbers. You’ll notice though that the table content between page one and page two still isn’t separated perfectly. The page break partially splits the row for 2002 between the pages.

Overall, it seems like pdfmake’s greatest strength is in constructing PDFs from scratch. If, for example, you wanted to generate an invoice based on some order data, and you don’t actually show the invoice on the screen inside of your web app, then pdfmake would be a great choice. 

DocRaptor

The last option we’ll consider is DocRaptor. DocRaptor differs from the first three options we explored in that it is a paid service. It uses the Prince HTML-to-PDF engine underneath the hood to generate its PDF exports. This service is also used via an API, so your code is hitting an external API endpoint which then returns the PDF document.

The basic DocRaptor configuration is fairly simple. You provide it the name of your document, the type of document you want to create ('pdf' in our case), and the HTML content to use. There are hundreds of other options for various configurations depending on what you need, but the basic configuration is an excellent starting point.

Here’s what I used:
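(A sketch; the API key is a placeholder, and test mode is on, which produces free but watermarked documents.)

DocRaptor.createAndDownloadDoc('YOUR_API_KEY_HERE', {
  test: true,  // test documents are watermarked but don't count against your plan
  type: 'pdf',
  name: 'table.pdf',
  document_content: document.documentElement.outerHTML, // send the page's HTML
});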

Let’s take a look at the PDF export generated by DocRaptor:

PDF exported using DocRaptor

Now there’s a good-looking document! We get to keep our nice table styles. The table column headers and table footer are repeated on every page. The table rows don’t get cut off, and there is an appropriately sized margin on all sides of the page. The page header is repeated on every page as well, and so are the page numbers at the bottom of each page.

To create the header and footer text, DocRaptor recommends you use some CSS with the @page selector, like this:
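(A sketch using the CSS Paged Media margin boxes that the Prince engine understands; the header text is a stand-in.)

@page {
  margin: 1in 0.5in;
  @top-center {
    content: "Country Data"; /* repeated page header */
  }
  @bottom-right {
    content: counter(page);  /* page number on every page */
  }
}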

When it comes to the PDF output, DocRaptor is the clear winner.

(As an added bonus, check out what a full-bleed styled HTML header can look like!)

Conclusion

So, which option do you choose for your app? If you want the simplest solution and don’t need a professional-looking document, the native browser print functionality should be just fine. If you need more control over the PDF output, then you’ll want to use a library.

jsPDF shines when it comes to single-page content generated based on HTML shown in the UI. pdfmake works best when generating PDF content from data rather than from HTML. DocRaptor is the most powerful of them all with its simple API and its beautiful PDF output. But again, unlike the others, it is a paid service. However, if your business depends on elegant, professional document generation, DocRaptor is well worth the cost.




Strategy

Contract First Application Development With Events


Introduction

In this post I will go through my demo on how to use the contract-first methodology for creating event-driven applications. This is going to be straight-up installation instructions and code, with some explanation. I covered most of the basics in the last post.

This is a simple money transfer application, which receives transfer requests from a RESTful endpoint. The ultimate goal is to process a single transfer request into transaction records for each account, for better bookkeeping. The structure of the system is a typical event-driven implementation, where topics are responsible for accepting events, and services subscribe or publish to those topics. In the demo, we have one topic that stores the Transfer Request events and one that stores the Account Record events.

contract development workflow

Solution

Now, let’s set up the foundation of every EDA: the Kafka cluster. Go ahead and download Kafka and unzip it.

Start up the Kafka Cluster 
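For a local, single-broker setup (assuming the standard Kafka 2.x layout):

bin/zookeeper-server-start.sh config/zookeeper.properties
bin/kafka-server-start.sh config/server.properties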

And create the topics needed:
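The topic names below (webtrans for transfer requests, transrec for account records) match the routes used later in the post:

bin/kafka-topics.sh --create --topic webtrans --bootstrap-server localhost:9092
bin/kafka-topics.sh --create --topic transrec --bootstrap-server localhost:9092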

From the last post, we know each topic has a schema describing its data type, format, and serialization/deserialization mechanism. These are stored in the Apicurio Registry.

Start the Apicurio Registry locally, with Kafka as the persistence store:
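One way to run it (a sketch; the Docker image tag and environment variable are assumptions, so check them against the registry version you use):

docker run -it -p 8080:8080 \
  -e KAFKA_BOOTSTRAP_SERVERS=localhost:9092 \
  apicurio/apicurio-registry-streams:latest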

After successfully starting the Apicurio Registry, go to http://localhost:8080/ui/artifacts in your browser and upload the schemas for both topics. (Note we are only doing values this time, which is the common case, because keys often have a simple text representation.)

In the browser, upload the Protobuf schema with the name demo-protobuf:
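Something along these lines (the message fields are hypothetical; the outer class name matches the TransactionProtos.java POJO generated later):

syntax = "proto3";

option java_outer_classname = "TransactionProtos";

message Transaction {
  string sender_account = 1;    // hypothetical fields for a transfer request
  string receiver_account = 2;
  double amount = 3;
}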

Upload the Avro schema with the name demo-avro:
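And for the account records, something like this (the field names are hypothetical; the record name matches the Transaction.java POJO generated later):

{
  "type": "record",
  "name": "Transaction",
  "namespace": "demo",
  "fields": [
    { "name": "accountName", "type": "string" },
    { "name": "amount", "type": "double" }
  ]
}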

After uploading them, you will see the two schemas that describe how each topic consumes events.

apicurio registry

Creating the REST-to-Protobuf Camel route

Start by creating a new Camel project:
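Using the plain-Java Camel archetype (the group and artifact IDs for the new project are up to you):

mvn archetype:generate \
  -DarchetypeGroupId=org.apache.camel.archetypes \
  -DarchetypeArtifactId=camel-archetype-java \
  -DgroupId=demo -DartifactId=rest-to-protobuf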

Update the pom.xml file with all the needed dependencies and plugins:
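A sketch of the key dependencies (artifact IDs from memory; versions omitted on purpose):

<!-- Kafka and Protobuf support for the route -->
<dependency>
  <groupId>org.apache.camel</groupId>
  <artifactId>camel-kafka</artifactId>
</dependency>
<dependency>
  <groupId>org.apache.camel</groupId>
  <artifactId>camel-protobuf</artifactId>
</dependency>
<!-- Apicurio serializers; the registry and Protobuf Maven plugins go in <build> -->
<dependency>
  <groupId>io.apicurio</groupId>
  <artifactId>apicurio-registry-utils-serde</artifactId>
</dependency>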

A couple of things I want to highlight here:

  1. The Apicurio Registry libraries that we reference in the Kafka configuration, for serializing/deserializing data into Kafka (and for identifying the strategy of what to do with the schema).

  2. The Apicurio Registry Maven plugin for downloading the topic schema from the registry.

  3. The Protobuf Maven plugin that generates a POJO from the schema, so it’s easier to handle the data in Java.

Run
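mvn generate-sources   # the plugin goals bind to this phase; a plain "mvn compile" works too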

to download the Protobuf schema and generate the POJO; notice that a TransactionProtos.java file appears in the source folder. Now you know the schema of what the topic accepts.

Add the following route to MyRouteBuilder.java. It does three things:

  1. Accepts transfer requests on a REST endpoint.

  2. Maps the input JSON stream into Protobuf using the Camel Protobuf component.

  3. Sends the result to the webtrans Kafka topic, where the POJO is serialized using the Apicurio Protobuf libraries.
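A sketch of what that can look like (the port, endpoint names, and generated class name are assumptions):

public void configure() {
    // parse the incoming JSON body straight into the generated Protobuf POJO
    ProtobufDataFormat protobuf = new ProtobufDataFormat(
        TransactionProtos.Transaction.getDefaultInstance(), "json");

    restConfiguration().host("localhost").port(8000);

    rest("/transfer").post().to("direct:transform");

    from("direct:transform")
        .unmarshal(protobuf)   // JSON -> Protobuf POJO
        .to("kafka:webtrans"); // the Apicurio serializer below handles the wire format
}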

With configuration in the application.properties file.
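A sketch of those properties (the keys follow camel-kafka conventions and the serializer class comes from Apicurio’s 1.x serde library, both assumptions):

camel.component.kafka.brokers=localhost:9092
camel.component.kafka.value-serializer=io.apicurio.registry.utils.serde.ProtobufKafkaSerializer
camel.component.kafka.additional-properties[apicurio.registry.url]=http://localhost:8080/api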

Start the application: 
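mvn camel:run   # the run goal comes with the Camel Maven plugin set up by the archetype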

Move on to the second Camel application. This application picks up the transfer request and splits it into two account records. This time we have two contracts to satisfy.

apicurio registry

Start by creating another new Camel project (same archetype as before).

Update the pom.xml file with all the needed dependencies and plugins:
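Largely the same as the first project; the Avro additions look roughly like this (IDs from memory, versions omitted):

<dependency>
  <groupId>org.apache.camel</groupId>
  <artifactId>camel-avro</artifactId>
</dependency>
<!-- plus the org.apache.avro:avro-maven-plugin next to the registry and Protobuf plugins -->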

Highlights: 

  1. The Apicurio Registry libraries that we reference in the Kafka configuration, for serializing/deserializing data into Kafka (and for identifying the strategy of what to do with the schema).

  2. The Apicurio Registry Maven plugin for downloading topic schemas from the registry. This time we download two schemas, for two endpoints.

  3. The Protobuf Maven plugin that generates a POJO from the schema, so it’s easier to handle the data in Java.

  4. The Avro Maven plugin that generates a POJO from the schema.

Run
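mvn generate-sources   # same as before: downloads both schemas, then generates both POJOs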

to download the schemas and generate the POJOs. In the source folder, you now have the schema of what the topic sends, via the Protobuf TransactionProtos.java POJO, and the schema of what the topic accepts, via the Avro Transaction.java POJO.

Add the following route to MyRouteBuilder.java. It:

  1. Subscribes to the webtrans Kafka topic.

  2. Converts the byte stream into the POJO.

  3. Works with the POJO and fills in the values needed.

  4. Sends the records to the transrec Kafka topic, where the POJOs are serialized using the Apicurio Avro libraries.
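A sketch (the field names on the generated POJOs are assumptions):

public void configure() {
    from("kafka:webtrans")   // the Apicurio Protobuf deserializer is set in the properties
        .process(exchange -> {
            TransactionProtos.Transaction request =
                exchange.getIn().getBody(TransactionProtos.Transaction.class);
            // build a debit record and a credit record from the single transfer
            Transaction debit = Transaction.newBuilder()
                .setAccountName(request.getSenderAccount())
                .setAmount(-request.getAmount())
                .build();
            Transaction credit = Transaction.newBuilder()
                .setAccountName(request.getReceiverAccount())
                .setAmount(request.getAmount())
                .build();
            exchange.getIn().setBody(java.util.Arrays.asList(debit, credit));
        })
        .split(body())          // one message per account record
        .to("kafka:transrec");  // serialized with the Apicurio Avro serializer
}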

With configuration in the application.properties file.
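A sketch, mirroring the first application (the class names come from Apicurio’s 1.x serde library; assumptions):

camel.component.kafka.brokers=localhost:9092
camel.component.kafka.value-deserializer=io.apicurio.registry.utils.serde.ProtobufKafkaDeserializer
camel.component.kafka.value-serializer=io.apicurio.registry.utils.serde.AvroKafkaSerializer
camel.component.kafka.additional-properties[apicurio.registry.url]=http://localhost:8080/api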

Start the application: 

The third application is similar to the others: it subscribes to the account record topic and places each record into MongoDB with the account name as the key.

apicurio registry

Log into MongoDB; currently there is nothing in the database:
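# a sketch, assuming the database and collection names used in the route below
mongo
> use demo
> db.transactions.find()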

Since the steps in the POM and properties are similar, I won’t repeat them again. But you can find the pom file and configuration file here in my repo.

Take a look at the simple MyRouteBuilder.java in this application. It:

  1. Subscribes to the transrec Kafka topic, deserializing with the Avro deserializer.

  2. Converts the input stream into a String.

  3. Sends it directly to the MongoDB component in Camel, since the stream is valid JSON.
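A sketch (the connection bean, database, and collection names are assumptions):

public void configure() {
    from("kafka:transrec")            // the Avro deserializer is set in the properties
        .convertBodyTo(String.class)  // the record's JSON representation as a String
        .to("mongodb:mongoClient?database=demo&collection=transactions&operation=save");
}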

Start the application: 

At this point, you are ready to send a transfer request:
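A sketch (the port and JSON fields follow the assumptions made earlier):

curl -X POST http://localhost:8000/transfer \
  -H "Content-Type: application/json" \
  -d '{"sender_account": "alice", "receiver_account": "bob", "amount": 100.0}'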

You will be able to see the result in MongoDB.

That is it! Find the example repo here

See the demo in action.

 

For the concept, visit my previous blog: 

https://dzone.com/articles/contract-first-development-the-event-driven-way




Strategy

Jetpack Backup | CSS-Tricks


It’s no secret that CSS-Tricks is a WordPress site. And as such, we like to keep things WordPress-y, like enabling the block editor and creating some custom blocks. We also rely on Jetpack for a number of things, which, if you haven’t tried it, is definitely worth your time, as it’s become a linchpin of this site for everything from search and security scans to social integrations and its own set of awesome blocks, like the slideshow block.

But one powerful feature we haven’t talked much about is Jetpack Backup and whoo-boy is it awesome. Sure, code is pretty easy to back up — we’ve got GitHub for that. But what about your assets, like images and other files? Or, gosh, what about the database? These things are super important and losing them would be, well, devastating!

Enter Jetpack Backup. It copies all that stuff, offering two plans: one for daily backups and another for real-time backups. Most sites can probably get away with daily backups, but it’s nice to know there’s a real-time option, especially if you’re running an active site, like a forum where updates happen regularly, or an eCommerce shop where restoring lost orders is crucial.

Another thing that makes Jetpack Backup great: it’s sold à la carte. So if backups are all you want from Jetpack, then you can get just that and that alone. But if you need additional features, like all the ones we use around here, then they’re easily accessible and enabled with a few clicks.

You even get a little activity log, which is nice not just for seeing what’s happening on your site, but also as another way to pinpoint where things might have gone wrong.

Ugh, Geoff screwing everything up as per usual.

So, yeah, check it out! If you want a deep dive into how it all works, here’s Chris walking through our setup.




Strategy

Copyediting with Semantic HTML | CSS-Tricks


Tracking changes is a quintessential copyediting feature for comparing versions of content. While we’re used to tracking changes in a word processing document, we actually have HTML elements capable of that. There are a lot of elements that we can use for this process. The main ones we’ll look at are <del>, <ins> and <mark>. But, as we’ll see, pairing them with other elements — including <u>, <aside> and custom markup — we can get the same sort of visual tracking changes features as something like Word, Google Docs, or even WordPress.

Side-by-side screenshots of how Pages, Google Docs and WordPress display tracked changes.
Different apps have different ways of tracking changes.

Let’s start with the <ins> element.

The <ins> designates text that should be or has been inserted. The verb tense gets a little wonky here because while the <ins> tag is suggesting an edit, it has to have, by virtue of being in the <ins> tag, already been inserted. It’s sorta like saying, “Hey, insert this thing that’s technically already there.”
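For example (the sentence is ours, not from the demo letter):

<p>I think of you <ins>fondly</ins> whenever I eat a fine cheddar.</p>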

Notice how the browser underlines the inserted text for us. It’s nice to have that sort of visual indication, even if it could be mistaken as an underline using the <u> element, a link, or the CSS text-decoration property.

Let’s pair the insertion with the <del> element, which suggests text that should be or has been deleted.

The browser styles <del> like a strikethrough (<s>) element, but they mean different things. <del> is for content that should be removed/edited out (like that creepy seeming section above) while <s> is for content that’s no longer true or inaccurate (like the letter writer’s belief that that section would be endearing).

OK, great, so we have these semantic HTML elements and they produce some light visual indicators for content that is either inserted or deleted. But there’s something you might not know about these elements: they accept a cite attribute that can be used to annotate the change.

cite takes a properly formatted URL that points somewhere to find the reasons why the change was made. That somewhere could even be an anchor on the existing page.
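For instance, pointing at an anchor on the same page (the fragment and the copy are placeholders):

<del cite="#edit-note-1">my ranked list of preferred cheeses</del>
<ins cite="#edit-note-1">all the little things you do</ins>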

That’s cool, but one issue is that the citation URL isn’t actually visible or clickable. We could use some CSS magic to display it. But even then, it still won’t take you to the citation when clicked… nor can it be copied. 

That said, it does make semantically clear what’s part of the edit and what is not. If we wrap <ins> and <del> in a link (or even the other way around) it still is not clear whether the link is supposed to be part of the edited content or not.

But! There’s a second attribute that <ins> and <del> both share: datetime. And this is how we can indicate when an edit was made. Again, this is not immediately available to a user, but it keeps semantically clear what is part of the edit and what isn’t. 

HTML’s datetime format, as a machine readable date and time, requires precision and can thus be a bit, well, cranky. But its general tenets aren’t too hard. It’s worth noting, though, that while datetime is used on other elements, such as <time>, formatting the value in a way that doesn’t include at least a specific day, month, and year on <ins> and <del> would be problematic, obscuring the date and time of an edit rather than providing clarity.
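For example (the timestamp itself is made up):

<ins datetime="2020-11-18T14:30:00Z">all the little things you do</ins>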

We can make things clearer with a little more CSS magic. For example, we can reveal the datetime value on hover:
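/* a sketch: surface the timestamp while hovering an edit */
ins[datetime]:hover::after,
del[datetime]:hover::after {
  content: " (" attr(datetime) ")";
  font-size: 0.8em;
  color: #555;
}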

Checkboxes work too:

But good editing is far more than simply adding and deleting content. It’s asking questions and figuring out what the heck the author intended. (For me personally, it’s also about saving me from embarrassing spellling and grammar mistooks).

So, meet the <mark> element.

<mark> points out text of special interest to the reader. It usually renders as a yellow background behind the content. 

If you’re the editor and want to write a note to the writer (let’s name that person Stanley Meagher) with suggestions to make Stanley’s letter more awesome (or less creepy, at the very least) and that note is large enough to warrant flow content (i.e. block level elements), then the note can be an <aside> element.

<aside class="note">Mr. Meagher, I highly recommend you remove this list of preferred cheeses and replace it with things you love about the woman you are writing to. While I'm sure there are many people for whom your list would be interesting if not welcome, that list rarely includes a romantic interest in the midst of your profession of love. Though, honestly, if she is as perfect for you as you believe, it may be the exact thing you need to test that theory.</aside>

But often you’ll want to do something inline in order to point something out or make a comment about sentence structure or word choice. Unfortunately there’s no baked in way to do that in HTML, but with a little ingenuity and some CSS you can add a note.

<span class="note">Cheesecake isn't really a "cheese"</span>

The <u> element — long an anathema to web developers for fear of confusion with a link — does actually have a use (I know, I was surprised too). It can be used to point out a misspelling (apparently squiggly and red underlines aren’t a standard browser rendering feature). It should still not be used anywhere it might be confused with an actual link and, when used, it definitely should use a color that distinguishes it from links. Red color may be appropriate to indicate an error. 

<p>Please, <u>Lura</u> tell me your answer. Will you wear my mathlete letter jacket?</p>

As we’ve seen throughout this article, the browser’s default styles for the elements we’ve covered so far are certainly helpful but can also be confusing since they are barely distinguishable from other types of content. If a user does not know that the document is showing edits, then the styling may be misconstrued or misunderstood by the user. I’d therefore suggest some additional or alternate styles to help make it clear what’s going on.

ins {
  padding: 0 0.125em;
  text-decoration: none;
  background-color: lightgreen;
}
del {
  padding: 0 0.125em;
  text-decoration: none;
  background-color: pink;
}
mark {
  padding: 0 0.125em;
}
.note {
  padding: 0 0.125em;
  background-color: lightblue;
}
aside.note {
  padding: 0.5em 1em;
}
u {
  text-decoration: none;
  border-bottom: 3px red dashed;
}

I ask myself the same question every time I learn something new in HTML: How can I needlessly animate this?

It would be great if we could fade up the changes so that when you clicked a checkbox the edits would fade in as well.

The notes and text in <del> tags can’t be faded in with CSS the same way that background colors and padding can. Also, display: none results in no fading at all; everything pops back into place, including the backgrounds. But combining the CSS visibility property with height and width values of 0 allows the backgrounds to smoothly fade in.
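A sketch of that approach (the .hide-edits class is a stand-in for however you toggle the view):

ins, del, .note {
  transition: visibility 0.5s, opacity 0.5s;
}
.hide-edits del,
.hide-edits .note {
  visibility: hidden;    /* unlike display: none, visibility can transition */
  opacity: 0;
  display: inline-block; /* lets the zeroed box below apply */
  height: 0;
  width: 0;
  padding: 0;
  overflow: hidden;
}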


And there you have it: specifications and a few strategies for keeping track of edits on the web (plus an excellent example of how not to write a love letter, or, perhaps, how to write one so perfect that responding positively to it is a sign you’re soulmates).




Strategy

How to make this slider animation? : webdev


I’m trying to make a CSS or JS slider animation and was wondering how you guys would accomplish this. Basically, I have to reveal parts of the second slide during the animation from the first slide. The title could completely disappear until the next slide has animated in. I just can’t seem to figure out how to reveal certain blocks during the animation.

Here’s the animation and the logo that inspired it:


Slide 01


Transition (Phase 01)


Transition (Phase 02)


Transition (Phase 03)


Transition (Phase 04)

And here’s the logo that inspired it:


Logo




Strategy

I made Python API Client For Twitter’s Account Activity API …


The API client makes it easier to consume the Account Activity API and is available via pip. The latest alpha prerelease makes it easier to get started with the API and consume account activity events. It also supports multiple account subscriptions.


There’s example code that demonstrates how easy it is to use the API client on the latest alpha prerelease.


Want a full walkthrough? Here’s a video demonstration: https://vimeo.com/480115857




Strategy

100DaysofDesign 6/100 : graphic_design


I tried redoing a logo from a place called Circle City Coffee, located in downtown Indianapolis. I wanted to make something simple, but that also directly correlated to the city of Indy. (They’re permanently closed due to COVID.) Anyway, I think the font I used is a little too “curly” for the straightness of the city. I also don’t think it really reflects the strength of the city. I wanted it to feel like a professional coffee spot, in a sense.



