Need help to execute a function before an API call (React)

Hi Everyone,

I am trying to make an API call that is supposed to get data based on a specific date. The problem is I don’t know where to place the dateFunction() so that it runs before the API call.

Without the dateFunction(), the state of fromDate & toDate is empty, so the map function prints out all the data in the table, which I don’t want.

Can I put the dateFunction() in componentDidMount() so that it runs before the API call?
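For what it’s worth, here is a framework-free sketch of that ordering (dateFunction and fetchData are hypothetical stand-ins for the poster’s code): compute the dates first, then make the call with them. In a class component, the same ordering lives in componentDidMount(), using setState’s callback so the request only fires once fromDate/toDate are populated.

```javascript
// Sketch only: dateFunction() and fetchData() are illustrative stand-ins.
function dateFunction() {
  const day = 24 * 60 * 60 * 1000;
  const toDate = new Date().toISOString().slice(0, 10);
  const fromDate = new Date( - 7 * day).toISOString().slice(0, 10);
  return { fromDate, toDate };
}

function fetchData({ fromDate, toDate }) {
  // stand-in for the real fetch(...) call
  return `api/cases?from=${fromDate}&to=${toDate}`;
}

// componentDidMount() equivalent: the dates exist before the call is made
const url = fetchData(dateFunction());
```

In React terms, that would mean calling this.dateFunction() first in componentDidMount() and firing the request in setState’s callback, e.g. `this.setState({ fromDate, toDate }, () => this.fetchData());`.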

Thanks again Redditors! You guys/gals helped me so many times!!!

CSS-Tricks Chronicle XXXVIII | CSS-Tricks

Over at CodePen, we’ve been busier than ever working toward our grand vision of what CodePen can become. We have a ton of focus on things lately, despite this terrible pandemic. It’s nice to be able to stay heads down into work you find important and meaningful in the best of times, and if that can be a mental escape as well, well, I’ll take it.

We’ve been building more community-showcasing features. On our Following page there are no less than three new features: (1) a “Recent” feed, (2) a “Top” feed, and (3) follow suggestions. The Following page should be about 20× more interesting by my calculation! For example, the recent feed is the activity of all the people you follow, surfacing things you likely won’t want to miss.

You can toggle that feed from “Recent” over to “Top.” While that seems like a minor change, it’s actually an entirely different feed that we create that is like a ranked popularity feed, only scoped to people you follow.

Below that is a list of other recommended CodePen folks to follow that’s created just for you. I can testify that CodePen is a lot more fun when you follow people that create things you like, and that’s a fact we’re going to keep making more and more true.

We’re always pushing out little stuff, but while I’m focusing on big new things, the biggest is the fact that we’ve taken some steps toward “Custom Editors.” That is, Pen Editors that can do things that our normal Pen Editor can’t do. We’ve released two: Flutter and Vue Single File Components.

Real-World Effectiveness of Brotli – CSS Wizardry – Web Perf...

Written by Harry Roberts on CSS Wizardry.

Table of Contents
  1. Smaller Doesn’t Necessarily Mean Faster
    1. TCP, Packets, and Round Trips
  2. What About the Real World?
    1. Running the Tests
  3. Findings
  4. First-Party vs. Third-Party
  5. Compression Levels
  6. So What?

One of the more fundamental rules of building fast websites is to optimise your
assets, and where text content such as HTML, CSS, and JS are concerned, we’re
talking about compression.

The de facto text-compression of the web is Gzip, with around 80% of compressed
responses favouring that algorithm, and the remaining 20% using the much newer
Brotli.
Of course, this total of 100% only measures compressible responses that
actually were compressed—there are still many millions of resources that could
or should have been compressed but were not. For a more detailed breakdown of
the numbers, see the
Compression section of
the Web Almanac.

Gzip is tremendously effective. The entire works of Shakespeare weigh in at
5.3MB in plain-text format; after Gzip (compression level 6), that number comes
down to 1.9MB. That’s a 2.8× decrease in file-size with zero loss of data. Nice!

Even better for us, Gzip favours repetition—the more repeated strings found in
a text file, the more effective Gzip can be. This spells great news for the web,
where HTML, CSS, and JS have a very consistent and repetitive syntax.

But, while Gzip is highly effective, it’s old; it was released in 1992 (which
certainly helps explain its ubiquity). 21 years later, in 2013, Google launched
Brotli, a new algorithm that claims even greater improvement than Gzip! That
same 5.3MB Shakespeare compilation comes down to 1.7MB when compressed with
Brotli (compression level 6), giving a 3.1× decrease in file-size. Nicer!

Using Paul Calvano’s Gzip and Brotli Compression Level Estimator, you’re likely to
find that certain files can earn staggering savings by using Brotli over Gzip.
ReactDOM, for example, ends up 27% smaller when compressed with maximum-level
Brotli compression (11) as opposed to with maximum-level Gzip (9).

At all compression levels, Brotli always outperforms Gzip when
compressing ReactDom. At Brotli’s maximum setting, it is 27% more effective than
Gzip at its maximum.
And speaking purely anecdotally, moving a client of mine from Gzip to Brotli led
to a median file-size saving of 31%.

So, for the last several years, I, along with other performance engineers like
me, have been recommending that our clients move over from Gzip to Brotli.
Browser Support: A brief interlude. While Gzip is so widely
supported that Can I Use doesn’t even list tables for it (“This HTTP header is
supported in effectively all browsers (since IE6+, Firefox 2+, Chrome 1+)”),
Brotli currently enjoys 93.17% worldwide support at the time of writing, which is
huge! That said, if you’re a site of any reasonable size, serving uncompressed
resources to over 6% of your customers might not sit too well with you. Well,
you’re in luck. The way clients advertise their support for a particular
algorithm works in a completely progressive manner, so users who can’t accept
Brotli will simply fall back to Gzip. More on this later.
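Here is a sketch of that progressive negotiation from the server’s side (a minimal illustration, not any particular server’s implementation): pick the best algorithm the client’s accept-encoding header advertises, falling back from Brotli to Gzip to nothing.

```javascript
// Choose a response encoding from the client's accept-encoding header.
function chooseEncoding(acceptEncoding = "") {
  const accepted = acceptEncoding
    .split(",")
    .map((token) => token.trim().split(";")[0]); // drop any ;q= weights
  if (accepted.includes("br")) return "br";
  if (accepted.includes("gzip")) return "gzip";
  return "identity"; // serve uncompressed
}

chooseEncoding("gzip, deflate, br"); // a modern browser gets Brotli
chooseEncoding("gzip, deflate");     // an older browser falls back to Gzip
chooseEncoding("randomstring");      // nothing supported: uncompressed
```

Because the client lists what it accepts and the server picks from that list, enabling Brotli can never break a Gzip-only browser.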

For the most part, particularly if you’re using a CDN, enabling Brotli should
just be the flick of a switch. It’s certainly that simple in Cloudflare, who
I run CSS Wizardry through. However, a number of clients of mine in the past
couple of years haven’t been quite so lucky. They were either running their own
infrastructure and installing and deploying Brotli everywhere proved
non-trivial, or they were using a CDN who didn’t have readily available support
for the new algorithm.

In instances where we were unable to enable Brotli, we were always left
wondering What if… So, finally, I’ve decided to attempt to quantify the
question: how important is it that we move over to Brotli?

Smaller Doesn’t Necessarily Mean Faster

Usually, sure! Making a file smaller will make it arrive sooner, generally
speaking. But making a file, say, 20% smaller will not make it arrive 20%
earlier. This is because file-size is only one aspect of web performance, and
whatever the file-size is, the resource is still sat on top of a lot of other
factors and constants—latency, packet loss, etc. Put another way, file-size
savings help you to cram data into lower bandwidth, but if you’re latency-bound,
the speed at which those admittedly fewer chunks of data arrive will not change.

TCP, Packets, and Round Trips

Taking a very reductive and simplistic view of how files are transmitted from
server to client, we need to look at TCP. When we receive a file from a server,
we don’t get the whole file in one go. TCP, upon which HTTP sits, breaks the
file up into segments, or packets. Those packets are sent, in batches, in
order, to the client. They are each acknowledged before the next series of
packets is transferred until the client has all of them, none are left on the
server, and the client can reassemble them into what we might recognise as
a file. Each batch of packets gets sent in a round trip.

Each new TCP connection has no way of knowing what bandwidth it currently has
available to it, nor how reliable the connection is (i.e. packet loss). If the
server tried to send a megabyte’s worth of packets over a connection that only
has capacity for one megabit, it’s going to flood that connection and cause
congestion. Conversely, if it was to try and send one megabit of data over
a connection that had one megabyte available to it, it’s not gaining full
utilisation and lots of capacity is going to waste.

To tackle this little conundrum, TCP utilises a mechanism known as slow start.
Each new TCP connection limits itself to sending just 10 packets of data in its
first round trip. Ten TCP segments equates to roughly 14KB of data. If those ten
segments arrive successfully, the next round trip will carry 20 packets, then
40, 80, 160, and so on. This exponential growth continues until one of two
things happens:

  1. we suffer packet loss, at which point the server will halve the number of
    packets it last attempted and retry, or;
  2. we max out our bandwidth and can run at full capacity.

This simple, elegant strategy manages to balance caution with optimism, and
applies to every new TCP connection that your web application makes.

Put simply: your initial bandwidth capacity on a new TCP connection is only
about 14KB. Or roughly 11.8% of uncompressed ReactDom; 36.94% of Gzipped
ReactDom; or 42.38% of Brotlied ReactDom (both set to maximum compression).

Wait. The leap from 11.8% to 36.94% is pretty notable! But the change from
36.94% to 42.38% is much less impressive. What’s going on?

Round Trips | TCP Capacity (KB) | Cumulative Transfer (KB) | ReactDom Transferred By…
----------- | ----------------- | ------------------------ | ------------------------
1           | 14                | 14                       |
2           | 28                | 42                       | Gzip (37.904KB), Brotli (33.036KB)
3           | 56                | 98                       |
4           | 112               | 210                      | Uncompressed (118.656KB)
5           | 224               | 434                      |

Both the Gzipped and Brotlied versions of ReactDom fit into the same round-trip
bucket: just two round trips to get the full file transferred. If all round trip
times (RTT) are fairly uniform, this means there’s no difference in transfer
time between Gzip and Brotli here.

The uncompressed version, on the other hand, takes a full two round trips more
to be fully transferred, which—particularly on a high latency connection—could
be quite noticeable.
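Those round-trip buckets can be sketched as code (a simplification, like the table itself): a new TCP connection starts at roughly 14KB and doubles its capacity on each successful trip.

```javascript
// How many round trips a file of a given size needs on a fresh connection,
// under the simplified slow-start model described above.
function roundTripsToTransfer(sizeKB, initialKB = 14) {
  let capacity = initialKB;
  let transferred = 0;
  let trips = 0;
  while (transferred < sizeKB) {
    transferred += capacity;
    capacity *= 2; // slow start: exponential growth
    trips += 1;
  }
  return trips;
}

roundTripsToTransfer(33.036);  // Brotlied ReactDom: 2 round trips
roundTripsToTransfer(37.904);  // Gzipped ReactDom: also 2 round trips
roundTripsToTransfer(118.656); // uncompressed ReactDom: 4 round trips
```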

The point I’m driving at here is that it’s not just about file-size, it’s about
TCP, packets, and round trips. We don’t just want to make files smaller, we
want to make them meaningfully smaller, nudging them into lower round-trip
buckets.
This means that, in theory, for Brotli to be notably more effective than Gzip,
it will need to be able to compress files quite a lot more aggressively so as to
move them beneath round-trip thresholds. I’m not sure how well it’s going to
stack up.
It’s worth noting that this model is quite aggressively simplified, and
there are myriad more factors to take into account: is the TCP connection new or
not, is it being used for anything else, is server-side prioritisation
stop-starting transfer, do H/2 streams have exclusive access to bandwidth? This
section is a more academic assessment and should be seen as a good jump-off
point, but consider analysing the data properly in something like Wireshark, and
also read Barry Pollard’s far more forensic
analysis of the magic 14KB in his Critical Resources and the First 14 KB
– A Review.

This rule also only applies to brand new TCP connections, and any files fetched
over primed TCP connections will not be subject to slow start. This brings forth
two important points:

  1. Not that it should need repeating: you need to self-host your static
    assets. This is a great way to avoid a slow-start penalty, as using your own
    already-warmed-up origin means your packets have access to greater bandwidth,
    which brings me to point two;
  2. With exponential growth, you can see just how quickly we reach relatively
    massive bandwidths. The more we use or reuse connections, the further we can
    increase capacity until we top out. Let’s look at a continuation of the above
    table:

Round Trips | TCP Capacity (KB) | Cumulative Transfer (KB)
----------- | ----------------- | ------------------------
1           | 14                | 14
2           | 28                | 42
3           | 56                | 98
4           | 112               | 210
5           | 224               | 434
6           | 448               | 882
7           | 896               | 1,778
8           | 1,792             | 3,570
9           | 3,584             | 7,154
10          | 7,168             | 14,322
20          | 7,340,032         | 14,680,050

By the end of 10 round trips, we have a TCP capacity of 7,168KB and have already
transferred a cumulative 14,322KB. This is more than adequate for casual web
browsing (i.e. not torrenting or streaming Game of Thrones). In actual fact,
what usually happens here is that we end up loading the entire web page and all
of its subresources before we even reach the limit of our bandwidth. Put another
way, your 1Gbps fibre line won’t make your day-to-day browsing feel much
faster because most of it isn’t even getting used.

By 20 round trips, we’re theoretically hitting a capacity of 7.34GB.

What About the Real World?

Okay, yeah. That all got a little theoretical and academic. I started off this
whole train of thought because I wanted to see, realistically, what impact
Brotli might have for real websites.

The numbers so far show that the difference between no compression and Gzip is
vast, whereas the difference between Gzip and Brotli is far more modest. This
suggests that while the nothing-to-Gzip gains will be noticeable, the upgrade
from Gzip to Brotli might perhaps be less impressive.

I took a handful of example sites, trying to cover a good cross-section of sites
that were:

  • relatively well known (it’s better to use demos that people can
    contextualise), and/or;
  • relevant and suitable for the test (i.e. of a reasonable size (compression is
    more relevant) and not formed predominantly of non-compressible content
    (like, for example YouTube)), and/or;
  • not all multi-billion dollar corporations (let’s use some normal case studies).

With those requirements in place, I grabbed a selection of sites and began
testing.

I wanted to keep the test simple, so I grabbed only two metrics:

  • data transferred, and;
  • First Contentful Paint (FCP) times;

each measured three ways:

  • without compression;
  • with Gzip, and;
  • with Brotli.

FCP feels like a real-world and universal enough metric to apply to any site,
because that’s what people are there for—content. Also because Paul Calvano
said so, and he’s smart: “Brotli tends to make FCP faster in my experience,
especially when the critical CSS/JS is large.”

Running the Tests

Here’s a bit of a dirty secret. A lot of web performance case studies—not all,
but a lot—aren’t based on improvements, but are often extrapolated and inferred
from the opposite: slowdowns. For example, it’s much simpler for the BBC to say
that they lose an additional 10% of users for every additional second it takes
for their site to load than it is to work out what happens for a 1s speed-up.
It’s much easier to make a site slower, which is why so many people seem to be
so good at it.

With that in mind, I didn’t want to find Gzipped sites and then try and somehow
Brotli them offline. Instead, I took Brotlied websites and turned off Brotli.
I worked back from Brotli to Gzip, then Gzip to nothing, and measured the
impact that each option had.

Although I can’t exactly hop onto LinkedIn’s servers and disable Brotli, I can
instead choose to request the site from a browser that doesn’t support Brotli.
And although I can’t exactly disable Brotli in Chrome, I can hide from the
server the fact that Brotli is supported. The way a browser advertises its
accepted compression algorithm(s) is via the accept-encoding request header,
and using WebPageTest, I can define my own. Easy!

WebPageTest’s advanced feature allows us to set custom request headers:
  • To disable compression entirely: accept-encoding: randomstring.
  • To disable Brotli but use Gzip instead: accept-encoding: gzip.
  • To use Brotli if it’s available (and the browser supports it): leave blank.

I can then verify that things worked as planned by checking for the
corresponding (or lack of) content-encoding header in the response.


As expected, going from nothing to Gzip has massive reward, but going from Gzip
to Brotli was far less impressive. The raw data is available in this Google
Sheet, but the things we mainly care about are:

  • Gzip size reduction vs. nothing: 73% decrease
  • Gzip FCP improvement vs. nothing: 23.305% decrease
  • Brotli size reduction vs. Gzip: 5.767% decrease
  • Brotli FCP improvement vs. Gzip: 3.462% decrease

All values are median; ‘Size’ refers to HTML, CSS, and JS only.

Gzip made files 72% smaller than not compressing them at all, but Brotli only
saved us an additional 5.7% over that. In terms of FCP, Gzip gave us a 23%
improvement when compared to using nothing at all, but Brotli only gained us an
extra 3.5% on top of that.

While the results do seem to back up the theory, there are a few ways in which
I could have improved the tests. The first would be to use a much larger sample
size, and the other two I shall outline more fully below.

First-Party vs. Third-Party

In my tests, I disabled Brotli across the board and not just for the first party
origin. This means that I wasn’t measuring solely the target’s benefits of using
Brotli, but potentially all of their third parties as well. This only really
becomes of interest to us if a target site has a third party on their critical
path, but it is worth bearing in mind.

Compression Levels

When we talk about compression, we often discuss it in terms of best-case
scenarios: level-9 Gzip and level-11 Brotli. However, it’s unlikely that your
web server is configured in the most optimal way. Apache’s default Gzip
compression level is 6, but Nginx is set to just 1.

Disabling Brotli means we fall back to Gzip, and given how I am testing the
sites, I can’t alter or influence any site’s configuration or fallback
configuration. I mention this because two sites in the test actually got larger
when we enabled Brotli. To me this suggests that their Gzip compression level
was set to a higher value than their Brotli level, making Gzip more effective.

Compression levels are a trade-off. Ideally you’d like to set everything to the
highest setting and be done, but that’s not really practical—the time taken on
the server to do that dynamically would likely nullify the benefits of
compression in the first place. To combat this, we have two options:

  1. use a pragmatic compression level that balances speed and effectiveness
    when dynamically compressing assets, or;
  2. upload precompressed static assets with a much higher compression level,
    and use the first option for dynamic responses.

So What?

It would seem that, realistically, the benefits of Brotli over Gzip are slight.

If enabling Brotli is as simple as flicking a checkbox in the admin panel of
your CDN, please go ahead and do it right now: support is wide enough, fallbacks
are simple, and even minimal improvements are better than none at all.

Where possible, upload precompressed static assets to your web server with the
highest possible compression setting, and use something more middle-ground for
anything dynamic. If you’re running on Nginx, please ensure you aren’t still on
their pitiful default compression level of 1.

However, if you’re faced with the prospect of weeks of engineering, testing, and
deployment effort to get Brotli live, don’t panic too much—just make sure you
have Gzip on everything that you can compress (that includes your .ico and
.ttf files, if you have any).

I guess the short version of the story is that if you haven’t or can’t enable
Brotli, you’re not missing out on much.

Did you enjoy this? Hire me!

Some Innocent Fun With HTML Video and Progress

The idea came while watching a mandatory training video on bullying in the workplace. I can just hear High School Geoff LOL-ing about a wimp like me having to watch that thing.

But here we are.

The video UI was actually lovely, but it was the progress bar that really caught my attention – or rather, its value. It was a simple gradient going from green to blue that grew as the video continued playing.

If only I had this advice in high school…

I already know it’s possible to create the same sort of gradient on the <progress> element. Pankaj Parashar demonstrated that in a CSS-Tricks post back in 2016.

I really wanted to mock up something similar but haven’t played around with video all that much. I’m also no expert in JavaScript. But how hard can it actually be? I mean, all I want to do is know how far we are in the video and use that to update the progress value. Right?

My inner bully made so much fun of me that I decided to give it a shot. It’s not the most complicated thing in the world, but I had some fun with it and wanted to share how I set it up in this demo.

The markup

HTML5 all the way, baby!

  <video id="video" src=""></video>
    <button id="play" aria-label="Play" role="button">►</button>
    <progress id="progress" max="100" value="0">Progress</progress>

The key line is this:

<progress id="progress" max="100" value="0">Progress</progress>

The max attribute tells us we’re working with 100 as the highest value while the value attribute is starting us out at zero. That makes sense since it allows us to think of the video’s progress in terms of a percentage, where 0% is the start and 100% is the end, and where our initial starting point is 0%.


The styling

I’m definitely not going to get deep into the process of styling the <progress> element in CSS. The Pankaj post I linked up earlier already does a phenomenal job of that. The CSS we need to paint a gradient on the progress value looks like this:

/* Fallback stuff */
progress[value] {
  appearance: none; /* Needed for Safari */
  border: none; /* Needed for Firefox */
  color: #e52e71; /* Fallback to a solid color */
}

/* WebKit styles */
progress[value]::-webkit-progress-value {
  background-image: linear-gradient(
    to right,
    #ff8a00, #e52e71
  );
  transition: width 1s linear;
}

/* Firefox styles */
progress[value]::-moz-progress-bar {
  background-image: -moz-linear-gradient(
    to right,
    #ff8a00, #e52e71
  );
}
The trick is to pay attention to the various nuances that make it cross-browser compatible. Both WebKit and Mozilla browsers have their own particular ways of handling progress elements. That makes the styling a little verbose but, hey, what can you do?

Getting the progress value from a video

I knew there would be some math involved if I wanted to get the current time of the video and display it as a value expressed as a percentage. And if you thought that being a nerd in high school gained me mathematical superpowers, well, sorry to disappoint.

I had to write down an outline of what I thought needed to happen:

  • Get the current time of the video. We have to know where the video is at if we want to display it as the progress value.
  • Get the video duration. Knowing the video’s length will help express the current time as a percent.
  • Calculate the progress value. Again, we’re working in percents. My dusty algebra textbook tells me the formula is part / whole = % / 100. In the context of the video, we can re-write that as currentTime / duration = progress value.

That gives us all the marching orders we need to get started. In fact, we can start creating variables for the elements we need to select and figure out which properties we need to work with to fill in the equation.

// Variables
const video = document.getElementById( "video" );
const progress = document.getElementById( "progress" );

// Properties
// progress.value = The calculated progress value as a percent of 100
// video.currentTime = The current time of the video in seconds
// video.duration = The length of the video in seconds

Not bad, not bad. Now we need to calculate the progress value by plugging those things into our equation.

function progressLoop() {
  setInterval(function () {
    document.getElementById("progress").value = Math.round(
      (video.currentTime / video.duration) * 100
    );
  });
}
I’ll admit: I forgot that the equation would result in decimal values. That’s where Math.round() comes into play, rounding those to the nearest whole integer.

That actually gets the gradient progress bar to animate as the video plays!

I thought I could call this a win and walk away happy. Buuuut, there were a couple of things bugging me. Plus, I was getting errors in the console. No bueno.

Showing the current time

Not a big deal, but certainly a nice-to-have. We can chuck a timer next to the progress bar and count seconds as we go. We already have the data to do it, so all we need is the markup and to hook it up.

Let’s wrap the time in a <label>, since the <progress> element can have one.

  <video controls id="video" src=""></video>
    <label id="timer" for="progress" role="timer"></label>
    <progress id="progress" max="100" value="0">Progress</progress>

Now we can hook it up. We’ll assign it a variable and use innerHTML to print the current value inside the label.

const progress = document.getElementById( "progress" );
const timer = document.getElementById( "timer" );

function progressLoop() {
  setInterval(function () {
    progress.value = Math.round((video.currentTime / video.duration) * 100);
    timer.innerHTML = Math.round(video.currentTime) + " seconds";
  });
}


Hey, that works!

Extra credit would involve converting the timer to display in HH:MM:SS format.
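One way to tackle that extra credit (a sketch, not part of the original demo) is a small helper that converts a seconds count to HH:MM:SS:

```javascript
// Format a seconds count as a zero-padded HH:MM:SS string.
function formatTime(totalSeconds) {
  const s = Math.floor(totalSeconds);
  const pad = (n) => String(n).padStart(2, "0");
  const hours = Math.floor(s / 3600);
  const minutes = Math.floor((s % 3600) / 60);
  const seconds = s % 60;
  return `${pad(hours)}:${pad(minutes)}:${pad(seconds)}`;
}

formatTime(75);   // "00:01:15"
formatTime(3661); // "01:01:01"
```

The timer line above would then become `timer.innerHTML = formatTime(video.currentTime);`.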

Adding a play button

The fact that there were two UIs going on at the same time sure bugged me. The <video> element has a controls attribute that, when used, shows the video controls, like play, progress, skip, volume, and such. Let’s leave that out.

But that means we need — at the very minimum — to provide a way to start and stop the video. Let’s button that up.

First, add it to the HTML:

  <video id="video" src=""></video>
    <label id="timer" for="progress" role="timer"></label>
    <button id="play" aria-label="Play" role="button">►</button>
    <progress id="progress" max="100" value="0">Progress</progress>

Then, hook it up with a function that toggles the video between play and pause on click.

const button = document.getElementById( "play" );

function playPause() {
  if ( video.paused ) {;
    button.innerHTML = "❙❙";
  } else {
    video.pause();
    button.innerHTML = "►";
  }
}

button.addEventListener( "click", playPause );
video.addEventListener( "play", progressLoop );

Hey, it’s still working!

I know it seems weird to take out the rich set of controls that HTML5 offers right out of the box. I probably wouldn’t do that on a real project, but we’re just playing around here.

Cleaning up my ugly spaghetti code

I really want to thank my buddy Neal Fennimore. He took time to look at this with me and offer advice that not only makes the code more legible, but does a much, much better job defining states…

// States
const PAUSED = 'paused';
const PLAYING = 'playing';

// Initial state
let state = PAUSED;

…doing a proper check for the state before triggering the progress function while listening for the play, pause and click events…

// Animation loop
function progressLoop() {
  if (state === PLAYING) {
    progress.value = Math.round( ( video.currentTime / video.duration ) * 100 );
    timer.innerHTML = Math.round( video.currentTime ) + ' seconds';
    requestAnimationFrame(progressLoop);
  }
}

// State-aware event handlers
function onPlay() { state = PLAYING; progressLoop(); }
function onPause() { state = PAUSED; }
function onClick() { video.paused ? : video.pause(); }

video.addEventListener('play', onPlay);
video.addEventListener('pause', onPause);
button.addEventListener('click', onClick);

…and even making the animation more performant by replacing setInterval with requestAnimationFrame as you can see highlighted in that same snippet.

Here it is in all its glory!

Oh, and yes: I was working on this while “watching” the training video. And, I aced the quiz at the end, thank you very much.

Tutorials on typography? : graphic_design

Hi, so I’ve been wanting to make stuff like this for a long time.

I believe this picture might have been posted on this subreddit a while ago, but I’m not 100% sure.

I have a few questions before I begin, however:

Is this made with Illustrator? Can it be made with Photoshop?

Are the typefaces they use self-created, or are they possibly pre-made?

Are there any tutorials for this type of colorful work? I’ve been looking online and can never seem to find anything that looks like the ones made here.

If there are no tutorials, does anyone have tips on how to improve at making colors flow as well as this artist did? Or does everyone use those color wheel websites that tell you which colors go well together?

Thank you.

Trying to put all mapped objects into a single array

Hey Redditors,

As the title suggests, I am trying to put all the objects that I mapped into a single array. The problem is that if I keep the arrayIt() within the map function (as shown in the figure below), each iteration produces its own array packed with its own object. But if I keep the arrayIt() outside the map(), only the last object gets saved in the array, because it is overwritten repeatedly by map().

coordinateValues = () => {
  let data; => {
    data = { date: e.Date, totalCases: e.Cases };

    // Function for putting mapped objects into an array
    const arrayIt = function (...array) {
      // …
    };

    // …
    xAndYValue: data,
    // …

How can I do it so that all the objects get saved into one array?

Thank you very much for your help! Stay safe & happy coding…

How to give a color to each bar in the hamburger menu icon?

The designer wants the burger icon to look like the one below:

I'm using .fa fa-bars, which means the three bars are a single Font Awesome icon glyph and can't be colored individually like that. Is there a way to give him what he wants?
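One common approach (a sketch, with placeholder colors since the actual design isn't shown): skip the single-glyph icon font and build the burger from three elements, so each bar can be colored independently.

```html
<button class="burger" aria-label="Menu">
  <span class="bar" style="background: #e94f37"></span>
  <span class="bar" style="background: #f6c026"></span>
  <span class="bar" style="background: #44af69"></span>
</button>

<style>
  .burger { background: none; border: none; cursor: pointer; padding: 4px; }
  .burger .bar {
    display: block;
    width: 24px;
    height: 3px;
    margin: 4px 0;
    border-radius: 2px;
  }
</style>
```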

submitted by /u/lynob

I Built a VS Code Extension: Ngrok for VS Code

Over the Easter weekend, a four-day weekend characterized by lockdowns all over the world, I decided to use the extra time I had at home to start a new project and learn a new skill. By the end of the weekend, I was proud to release my first VSCode extension: ngrok for VSCode.

What’s That Now?

ngrok is a command-line tool built by Alan Shreve that you can use to expose your localhost server with a publicly available URL. It’s great for sharing access to an application running on your own machine, testing web applications on mobile devices, or testing webhook integrations. For example, I’m a big fan of using ngrok to test my webhooks when I am working with Twilio applications.

VSCode is my current favorite text editor, built by Microsoft, and based on JavaScript (well, mostly TypeScript).

As I was using VSCode last week, I wondered if there was an extension that made it easier to use ngrok. I had a search and found one under development and one that started a web server and ran ngrok. So, I decided to build the extension I wanted to see in the marketplace.

What Does It Do?

With version 1 of the extension, you can start an ngrok tunnel with either a port number or by choosing one of your named tunnels from your ngrok config file. There is one available setting, where you can set a custom path to a config file.

Once a tunnel is running, you can then open the ngrok dashboard or close the tunnel. All the commands are available from the VSCode command palette.

It’s simple so far, but I wanted to keep the scope small and get it released.

An animation showing using the extension from the VSCode command palette.

The code is all open source, and you can find it on GitHub.

What’s Next?

I would love for you to try the extension out, especially if you are already an ngrok user. If it’s useful, then I am looking for feedback, bug reports, and feature requests so I can continue to improve it.

One idea I have already is to provide a Status Bar Item or Tree View that can give more information on and control over currently running ngrok tunnels. I should probably work out how to write tests for the extension too.

What Do You Think?

I really am after feedback, so please install ngrok for VSCode, and let me know what you think on Twitter or via a review on the VSCode Marketplace.

Alpine.js: The JavaScript Framework That’s Used Like jQuery,…

We have big JavaScript frameworks that tons of people already use and like, including React, Vue, Angular, and Svelte. Do we need another JavaScript library? Let’s take a look at Alpine.js and you can decide for yourself. Alpine.js is for developers who aren’t looking to build a single page application (SPA). It’s lightweight (~7kB gzipped) and designed to write markup-driven client-side JavaScript.

The syntax is borrowed from Vue and Angular directives. That means it will feel familiar if you’ve worked with those before. But, again, Alpine.js is not designed to build SPAs, but rather enhance your templates with a little bit of JavaScript.

For example, here’s an Alpine.js demo of an interactive “alert” component.

The alert message is two-way bound to the input using x-model="msg". The “level” of the alert message is set using a reactive level property. The alert displays when both msg and level have a value.
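A minimal sketch of such a component might look like this (the markup and class names are illustrative, not the exact demo code):

```html
<div x-data="{ msg: '', level: '' }">
  <!-- msg is two-way bound to the input -->
  <input type="text" x-model="msg" placeholder="Alert message" />
  <button @click="level = 'info'">Info</button>
  <button @click="level = 'error'">Error</button>
  <!-- the alert only renders once both msg and level have a value -->
  <div x-show="msg && level" :class="level" role="alert" x-text="msg"></div>
</div>
```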

It’s like a replacement for jQuery and vanilla JavaScript, but with declarative rendering

Alpine.js is a Vue template-flavored replacement for jQuery and vanilla JavaScript rather than a React/Vue/Svelte/WhateverFramework competitor.

Since Alpine.js is less than a year old, it can make assumptions about DOM APIs that jQuery cannot. Let’s briefly draw a comparison between the two.

Querying vs. binding

The bulk of jQuery’s size and features comes in the shape of a cross-browser compatibility layer over imperative DOM APIs — this is usually referred to as jQuery Core and sports features that can query the DOM and manipulate it.

The Alpine.js answer to jQuery Core is a declarative way to bind data to the DOM using the x-bind attribute binding directive. It can be used to bind any attribute to reactive data on the Alpine.js component. Alpine.js, like its declarative view library contemporaries (React, Vue), provides x-ref as an escape hatch to directly access DOM elements from JavaScript component code when binding is not sufficient (e.g. when integrating a third-party library that needs to be passed a DOM node).
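For instance, instead of querying and mutating elements, state is bound to them (a hedged sketch; the markup is illustrative):

```html
<div x-data="{ open: false }">
  <!-- x-bind keeps the attribute in sync with the reactive `open` flag -->
  <button x-bind:aria-expanded="open" @click="open = !open">Toggle</button>
  <!-- x-ref exposes this element as $refs.panel for direct DOM access -->
  <div x-ref="panel" x-show="open">Panel content</div>
</div>
```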

Handling events

jQuery also provides a way to handle, create and trigger events. Alpine.js provides the x-on directive and the $event magic value, which allow JavaScript functions to handle events. To trigger (custom) events, Alpine.js provides the $dispatch magic property, which is a thin wrapper over the browser’s CustomEvent and dispatchEvent APIs.
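A sketch of both sides (the event name and handler are illustrative):

```html
<!-- $dispatch sends a bubbling CustomEvent; x-on/@ listens for it -->
<div x-data @notify="console.log($event.detail.message)">
  <button @click="$dispatch('notify', { message: 'hello' })">
    Notify
  </button>
</div>
```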


Effects

One of jQuery’s key features is its effects, or rather, its ability to write easy animations. Where we might use the slideUp, slideDown, fadeIn and fadeOut methods in jQuery to create effects, Alpine.js provides a set of x-transition directives, which add and remove classes throughout the element’s transition. That’s largely inspired by the Vue Transition API.
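For example, a fade roughly equivalent to jQuery’s fadeIn/fadeOut might be sketched like this (the utility classes assume a Tailwind-style setup):

```html
<div x-data="{ show: false }">
  <button @click="show = !show">Toggle</button>
  <div
    x-show="show"
    x-transition:enter="transition-opacity duration-300"
    x-transition:enter-start="opacity-0"
    x-transition:enter-end="opacity-100"
    x-transition:leave="transition-opacity duration-300"
    x-transition:leave-start="opacity-100"
    x-transition:leave-end="opacity-0"
  >
    Fades in and out
  </div>
</div>
```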

jQuery’s Ajax client, meanwhile, has no prescribed replacement in Alpine.js; the native Fetch API, or a third-party HTTP library (e.g. axios, ky, superagent), fills that role.


Plugins

It’s also worth calling out jQuery plugins. There is no comparison to them (yet) in the Alpine.js ecosystem. Sharing Alpine.js components is relatively simple, usually a case of copy and paste. The JavaScript in Alpine.js components is “just functions” and tends not to access Alpine.js itself, making components relatively straightforward to share by including them on different pages with a script tag. Any magic properties are added when Alpine initializes or are passed into bindings, like $event in x-on bindings.

There are currently no examples of Alpine.js extensions, although there are a few issues and pull requests to add “core” events that hook into Alpine.js from other libraries. There are also discussions happening about the ability to add custom directives. The stance from Alpine.js creator Caleb Porzio seems to be to base API decisions on the Vue APIs, so I would expect any future extension point to be inspired by what Vue provides.


Size

Alpine.js is lighter weight than jQuery, coming in at 21.9kB minified (7.1kB gzipped), compared to jQuery at 87.6kB minified (30.4kB gzipped). Only 23% the size!

Most of that is likely due to the way Alpine.js focuses on providing a declarative API for the DOM (e.g. attribute binding, event listeners and transitions).

Bundlephobia breaks down the two

For the sake of comparison, Vue comes in at 63.5kB minified (22.8kB gzipped). How can Alpine.js come in lighter despite its API being equivalent to Vue’s? Alpine.js does not implement a virtual DOM. Instead, it directly mutates the DOM while exposing the same declarative API as Vue.

Let’s look at an example

Alpine is compact because application code is declarative in nature, and is declared via templates. For example, here’s a Pokemon search page using Alpine.js:

This example shows how a component is set up using x-data with a function that returns the initial component data and methods, and x-init to run one of those functions on load.

Bindings and event listeners in Alpine.js use a syntax that’s strikingly similar to Vue templates:

  • Alpine: x-bind:attribute="expression" and x-on:eventName="expression"; the shorthands are :attribute="expression" and @eventName="expression" respectively
  • Vue: v-bind:attribute="expression" and v-on:eventName="expression"; the shorthands are :attribute="expression" and @eventName="expression" respectively
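So these two buttons behave identically in Alpine.js (the handler name is illustrative):

```html
<!-- longhand -->
<button x-bind:disabled="loading" x-on:click="save()">Save</button>
<!-- shorthand -->
<button :disabled="loading" @click="save()">Save</button>
```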

Rendering lists is achieved with x-for on a template element and conditional rendering with x-if on a template element.

Notice that Alpine.js doesn’t provide a full templating language, so there’s no interpolation syntax (e.g. {{ myValue }} in Vue.js, Handlebars and AngularJS). Instead, binding dynamic content is done with the x-text and x-html directives (which map directly to underlying calls to Node.innerText and Node.innerHTML).
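Putting those together, list and conditional rendering might be sketched like so (the data is illustrative):

```html
<div x-data="{ items: ['one', 'two', 'three'] }">
  <ul>
    <!-- x-for must sit on a <template> element -->
    <template x-for="item in items" :key="item">
      <!-- no {{ item }} interpolation; x-text sets the text content -->
      <li x-text="item"></li>
    </template>
  </ul>
  <!-- x-if also sits on a <template> element -->
  <template x-if="items.length === 0">
    <p>No items yet.</p>
  </template>
</div>
```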

An equivalent example using jQuery is an exercise you’re welcome to take on, but the classic style includes several steps:

  • Imperatively bind to the button click using $('button').click(/* callback */).
  • Within this “click callback” get the input value from the DOM, then use it to call the API.
  • Once the call has completed, the DOM is updated with new nodes generated from the API response.

If you’re interested in a side by side comparison of the same code in jQuery and Alpine.js, Alex Justesen created the same character counter in jQuery and in Alpine.js.

Back in vogue: HTML-centric tools

Alpine.js takes inspiration from TailwindCSS. The Alpine.js introduction on the repository describes it as “Tailwind for JavaScript.”

Why is that important?

One of Tailwind’s selling points is that it “provides low-level utility classes that let you build completely custom designs without ever leaving your HTML.” That’s exactly what Alpine does. It works inside HTML, so there is no need to work inside of JavaScript templates the way we would in Vue or React. Many of the Alpine examples cited in the community don’t even use script tags at all!

Let’s look at one more example to drive the difference home. Here’s an accessible navigation menu in Alpine.js that uses no script tags whatsoever.

This example leverages aria-labelledby and aria-controls outside of Alpine.js (with id references). Alpine.js makes sure the “toggle” element (which is a button), has an aria-expanded attribute that’s true when the navigation is expanded, and false when it’s collapsed. This aria-expanded binding is also applied to the menu itself and we show/hide the list of links in it by binding to hidden.

Being markup-centric means that Alpine.js and TailwindCSS examples are easy to share. All it takes is a copy-paste into HTML that is also running Alpine.js/TailwindCSS. No crazy directories full of templates that compile and render into HTML!

Since HTML is a fundamental building block of the web, it means that Alpine.js is ideal for augmenting server-rendered (Laravel, Rails, Django) or static sites (Hugo, Hexo, Jekyll). Integrating data with this sort of tooling can be as simple as outputting some JSON into the x-data="{}" binding. The affordance of passing some JSON from your backend/static site template straight into the Alpine.js component avoids building “yet another API endpoint” that simply serves a snippet of data required by a JavaScript widget.
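For example, with ERB-style templating (the syntax and variable names are purely illustrative):

```html
<!-- the server renders JSON straight into the component's state -->
<div x-data='{ products: <%= products.to_json %> }'>
  <ul>
    <template x-for="product in products" :key="product.id">
      <li x-text="product.name"></li>
    </template>
  </ul>
</div>
```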

Client-side without the build step

Alpine.js is designed to be used as a direct script include from a public CDN. Its developer experience is tailored for that. That’s why it makes for a great jQuery comparison and replacement: it’s dropped in and eliminates a build step.
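Dropping it in can be as simple as a single script tag; this include follows the pattern from the v2 install instructions (pin a real version rather than the x placeholders):

```html
<script
  src="https://cdn.jsdelivr.net/gh/alpinejs/alpine@v2.x.x/dist/alpine.min.js"
  defer
></script>
```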

While it’s not traditionally used this way, the bundled version of Vue can be linked up directly. Sarah Drasner has an excellent write-up showing examples of jQuery substituted with Vue. However, if you use Vue without a build step, you’re actively opting out of:

  • the Vue CLI
  • single file components
  • smaller/more optimized bundles
  • a strict CSP (Content Security Policy) since Vue inline templates evaluate expressions client-side

So, yes, while Vue boasts a buildless implementation, its developer experience is really dependent on the Vue CLI. The same could be said about Create React App for React, and the Angular CLI. Going build-less strips those frameworks of their best qualities.

There you have it! Alpine.js is a modern, CDN-first library that brings declarative rendering for a small payload — all without the build step and templates that other frameworks require. The result is an HTML-centric approach that not only resembles a modern-day jQuery but is a great substitute for it as well.

If you’re looking for a jQuery replacement that’s not going to force you into an SPA architecture, then give Alpine.js a go! Interested? You can find out more on Alpine.js Weekly, a free weekly roundup of Alpine.js news and articles.



A Different Approach to Coding With React Hooks

React Hooks, introduced in React 16.8, present us with a fundamentally new approach to coding. Some may think of them as a replacement for lifecycles or classes; however, that would be wrong. Like trying to translate a word from another language, sometimes you’re facing a completely new entity that seems identical on the surface but is very different semantically and can’t be treated as equivalent.

React has not only changed its approach from object-oriented to functional; the method of rendering has changed in principle. React components can now be built entirely from functions instead of classes, and this has to be understood on a conceptual level.

In fact, to fully grasp the idea, you’ll have to change your whole thinking process when coding in React. Hooks call for a more declarative approach: you describe what the UI should look like for a given state, in contrast to imperative programming, where you spell out each step of how to get there.

Before, when working with classes, you would have a state object whose change would trigger a re-render. This was a global object within the component (class), with every instance holding a reference to it. It’s important to understand that this object is a singleton: it is the same object in every render.

This can be clearly seen in the example. 

A reconstruction of that class component, along the lines described below:

```jsx
import React from "react";

class Counter extends React.Component {
  state = { value: 0 };

  componentDidMount() {
    // `this.state` is one shared object, so when this timer fires it
    // reads the latest value, not the value at mount time.
    setTimeout(() => console.log(this.state.value), 5000);
  }

  render() {
    return (
      <button onClick={() => this.setState({ value: this.state.value + 1 })}>
        {this.state.value}
      </button>
    );
  }
}

export default Counter;
```

First, we use a class to define a component with a starting value of zero. We create a button that triggers a re-render on each click by calling setState with value + 1.

In practice, this means that if you open the page and quickly click the button a few times, you’ll see that in five seconds a number appears in the console equal to the number of times you clicked the button. This shows us that all the renders refer to a single state object.
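The same shared-object semantics can be shown in plain JavaScript, without React (a framework-free sketch; the names are illustrative):

```javascript
// A class instance's state is one shared, mutated object; closures that
// read it always see the latest value, not a snapshot.
class Store {
  constructor() {
    this.state = { value: 0 };
  }
  setState(next) {
    Object.assign(this.state, next);
  }
}

const store = new Store();
// Closes over the object itself, not over a copy of `value`.
const readLater = () => store.state.value;

store.setState({ value: 3 });
console.log(readLater()); // 3
```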

If you compare this to the functional approach with Hooks, it’s very similar to plain functions in JavaScript: Hooks produce a new environment on each render, presenting you with a completely new set of data.

React takes into account all changes in the state Hook and calls the function component one more time, so all the variables and functions (including those returned by Hooks) are captured in a closure, and they are different for each render. This is also clearly seen in the example.

A reconstruction of the Hooks version described below:

```jsx
import React, { useState, useEffect } from "react";

function Counter() {
  const [state, setState] = useState(0);

  useEffect(() => {
    // This closure captures the `state` of the first render (0), so the
    // log shows 0 no matter how many times the button was clicked.
    setTimeout(() => console.log(state), 5000);
  }, []);

  return <button onClick={() => setState(state + 1)}>{state}</button>;
}

export default Counter;
```

We are using two Hooks, useState and useEffect, to implement similar functionality. With the default state set to zero, each click of the button triggers a change of state to state + 1.

However, in this example, if you click the button a few times before the timer fires, in five seconds you’ll see a zero. This means that the value of the state is captured in a closure: on each render you get a different state, as well as a different useEffect.
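The closure behavior at work here is plain JavaScript; a framework-free sketch (the names are illustrative):

```javascript
// Each call to render() creates a fresh scope, like a Hooks render:
// the effect closes over that render's state, not a shared object.
const logs = [];

function render(state) {
  return function effect() {
    logs.push(state); // sees only the value from its own render
  };
}

const firstEffect = render(0);
const secondEffect = render(1);

firstEffect();  // pushes 0, even though a later "render" saw 1
secondEffect(); // pushes 1

console.log(logs); // [ 0, 1 ]
```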

This is important to understand and use in your code. If you’re still not sure whether you need to use Hooks when coding in React, go back to the drawing board, learn what React Hooks really constitute and what their benefits are, and at least give them a try.
