
Storeon: An Event-Based State Manager for Corvid


Motivation

In the article, “State management in Corvid,” Shahar Talmi brings up a question about controlling app states in Corvid. If you’re not familiar with Corvid, it’s a development platform running on Wix that allows you to quickly and easily develop web applications.

Accurately controlling the state of any app is a hard problem. If you have many component dependencies or need to handle constant user interactions, you are going to suffer when you eventually want to add a new feature or scale your application.

In this article, I share my solution: a tiny library called Storeon (only 175 bytes) with a simple interface. I wrote a wrapper to integrate it with Corvid, and the result is the state manager corvid-storeon, which is less than 90 lines of code.


How it Works

We will build the classic learning example: a counter app. I will use two counters to make the demonstration clearer.

First, we need to install the library from the Package Manager

Corvid Package Manager

and create one more file for store initialization in the public folder.

 public

└── store.js  

We will write our business logic in public/store.js.

Storeon’s state is always an object; it can’t be anything else. It’s a small limitation and not too important for us, but we have to remember it.

public/store.js
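
A minimal sketch of public/store.js, assuming the createStore(modules) API described below returns the four methods we need; the module and event names here are illustrative:

// public/store.js
import { createStore } from 'corvid-storeon';

// Each module listens for events and returns the changed part of the state.
const counterOne = (store) => {
  store.on('@init', () => ({ countOne: 0 }));
  store.on('counterOne/increment', ({ countOne }) => ({ countOne: countOne + 1 }));
  store.on('counterOne/decrement', ({ countOne }) => ({ countOne: countOne - 1 }));
};

const counterTwo = (store) => {
  store.on('@init', () => ({ countTwo: 0 }));
  store.on('counterTwo/increment', ({ countTwo }) => ({ countTwo: countTwo + 1 }));
  store.on('counterTwo/decrement', ({ countTwo }) => ({ countTwo: countTwo - 1 }));
};

// The four methods used in the page code are exported from here.
export const { getState, dispatch, connect, connectPage } = createStore([counterOne, counterTwo]);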

So, we created a store in the public folder and exported four methods from it. In the second part, we will create our UI and write the logic that changes the state.

Let’s add two text elements to display our counter values, and four buttons to dispatch the increment/decrement events.

Creating two counter components

Of course, we have to import the store methods from the public file to the page’s code.

import { dispatch, connect, connectPage } from 'public/store'; 

With connect("key", callback), we can subscribe to any store property; the callback function runs when the page loads and each time the listed property changes.

connectPage(callback) is a wrapper around $w.onReady(callback). With dispatch(event, [data]), we emit events.

Page Code
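
A sketch of the page code; the $w element IDs are assumptions that depend on how the elements were named in the editor, and the connect callback is assumed to receive the current state:

// Page code for the counters page.
import { dispatch, connect, connectPage } from 'public/store';

// Runs on page load and whenever "countOne" changes.
connect('countOne', ({ countOne }) => {
  $w('#textCounterOne').text = String(countOne);
});

// Runs on page load and whenever "countTwo" changes.
connect('countTwo', ({ countTwo }) => {
  $w('#textCounterTwo').text = String(countTwo);
});

// connectPage() wraps $w.onReady(): wire up the buttons once the page is ready.
connectPage(() => {
  $w('#buttonPlusOne').onClick(() => dispatch('counterOne/increment'));
  $w('#buttonMinusOne').onClick(() => dispatch('counterOne/decrement'));
  $w('#buttonPlusTwo').onClick(() => dispatch('counterTwo/increment'));
  $w('#buttonMinusTwo').onClick(() => dispatch('counterTwo/decrement'));
});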

Demo

Modules

The createStore(modules) function accepts a list of modules. We can create separate functions to split up the business logic of our app. Let’s see a few examples:

Synchronizing the app state with the wix-storage memory API:
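
A sketch of such a module; the storage key is arbitrary, and Storeon's built-in @changed event is used to react to every state change:

// Keep the app state in sync with the wix-storage memory API.
import { memory } from 'wix-storage';

export const persistState = (store) => {
  store.on('@changed', (state) => {
    memory.setItem('app-state', JSON.stringify(state));
  });
};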

Tracking events to external analytics tools with wixWindow.trackEvent():
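
A sketch of a tracking module; it listens to Storeon's built-in @dispatch event, and the payload passed to wixWindow.trackEvent() is illustrative:

// Forward every dispatched event to external analytics tools.
import wixWindow from 'wix-window';

export const analytics = (store) => {
  // "@dispatch" fires for every event; the first array element is its name.
  store.on('@dispatch', (state, [event]) => {
    if (event !== '@init' && event !== '@changed') {
      wixWindow.trackEvent('CustomEvent', { event });
    }
  });
};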

Combining modules:
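
Assuming the module names from the sketches above, combining them is just a matter of passing the whole list to createStore:

// All modules are combined by passing them to createStore as one list.
export const { getState, dispatch, connect, connectPage } =
  createStore([counterOne, counterTwo, persistState, analytics]);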

Conclusion

As you can see, we were able to quickly implement our state management solution with a minimal amount of code. Of course, thanks to data binding in Corvid, you normally don’t have to worry about state management. However, in more complex applications, state management becomes much more challenging to handle.

State management can be a tricky problem, but Storeon offers a simple, yet robust solution. In addition, Corvid allows us to quickly implement this in our application, all while focusing on code and not having to spend time dealing with other issues. 

Resources 

Demo 

Further Reading

This article originally appeared on Medium.





Need help with sitemap after IIS rule redirect


Hey all,

I recently had the dev team add an IIS rewrite rule to help with some duplicate content – all www URLs are now non-www, the trailing / at the end of URLs is now gone, and we forced https.

Since the update, I expected my sitemap to reflect the new structure, but it hasn’t. Now SEMRush and ScreamingFrog are telling me I have all these sitemap errors because all the URLs are still http and have the slash at the end. So SEMRush is looking at these pages, seeing that they’re redirected, and throwing errors.

In the sitemap master template in Umbraco, I see a string that handles the URL format, so my question is: if I change the highlighted parts of the image below to HTTPS and remove the trailing /, will that fix my problem?

Post image





Smaller HTML Payloads with Service Workers — Philip Walton


Many developers know that you can use service workers to cache web pages (and their sub-resources) in order to serve those pages to users when they’re offline.

And while this is true, it’s far from the only thing that service workers can do to improve the performance and reliability of a website. A lesser known capability of service workers is that you can programmatically generate your responses—you aren’t limited to just fetching from the network or reading from the cache.

In a traditional client-server setup, the server always needs to send a full HTML page to the client for every request (otherwise the response would be invalid). But when you think about it, that’s pretty wasteful. Most sites on the internet have a lot of repetition in their HTML payloads because their pages share a lot of common elements (e.g. the <head>, navigation bars, banners, sidebars, footers etc.). But in an ideal world, you wouldn’t have to send so much of the same HTML, over and over again, with every single page request.

With service workers, there’s a solution to this problem. A service worker can request just the bare minimum of data it needs from the server (e.g. an HTML content partial, a Markdown file, JSON data, etc.), and then it can programmatically transform that data into a full HTML document.

On this site, after a user visits once and the service worker is installed, that user will never request a full HTML page again. Instead the service worker will intercept requests for pages and just request the contents of those pages—everything inside the <main> element—and then the service worker will combine that content with the rest of the HTML, which is already in the cache.

By only requesting the contents of a page, the network payloads become substantially smaller, and the pages can load quite a bit faster. For example, on this site over the past 30 days, page loads from a service worker had 47.6% smaller network payloads, and a median First Contentful Paint (FCP) that was 52.3% faster than page loads without a service worker (416ms vs. 851ms). In the graph below, you can clearly see the entire distribution shifted to the left:

First Contentful Paint (FCP) distribution by service worker status

How it works

Anyone who’s ever built a Single Page Application (SPA) is probably familiar with the basics of how this technique works. SPAs will typically only fetch the content portion of a new page and then swap that out with the content of the existing page—preventing the browser from having to make a full navigation.

Service workers can take this technique to the next level, though, since (once installed) they work for all page loads, not just in-page links. They can also leverage streaming APIs to deliver content even faster and let the browser start rendering even earlier—something SPAs can’t currently do (at least not without hacks).

When a user with a service worker installed visits any of my pages, the final HTML document the browser renders is actually a concatenation of three different page partials:

  • /shell-start.html
  • /<page-slug>/index.content.html
  • /shell-end.html

And only one of those partials (the content) is sent over the network.

The following sections outline exactly how I’ve implemented this strategy on this site.

1) Create both a full and a content-only version of each page

In order to serve either a full HTML version of a page (for first-time visitors) or just a content partial (for repeat visitors with a service worker installed), you’ll need to either:

  • For dynamic sites: configure your server to conditionally render different templates based on the request.
  • For static sites: build two versions of each page.

Since this site is a static site, I do the latter. It might sound like a lot of extra work, but if you’re using a template system to build your pages, you’ve probably already extracted the common parts of your layout into partials. So the only thing left to do is create a content-only template and update your build process to render each page twice.

On this site I have a content partial template and then also a full page template that includes the content partial template in its <main> element.

Here’s an example of both rendered versions of my “About” page (note the view-source: URL prefix):

  • view-source:https://philipwalton.com/about/index.html
  • view-source:https://philipwalton.com/about/index.content.html

2) Create separate partials for the page shell

In order for the service worker to insert the page partial sent from your server into a full HTML page response that can be rendered in your browser window, it has to know what the surrounding HTML is for the full page.

The easiest way to make that work is to build and deploy this HTML as two separate files:

  • Everything that comes before the opening <main> tag (including everything in the <head>).
  • Everything after the closing </main> tag.

On my site, I call these files shell-start.html and shell-end.html, and you can see their contents for yourself here:

  • view-source:https://philipwalton.com/shell-start.html
  • view-source:https://philipwalton.com/shell-end.html

I never request these files from the main page, but I do precache them in the service worker at install time, which I’ll explain next.

3) Store the shell partials in the cache

When a user first visits my site and the service worker installs, as part of the install event I fetch the contents of shell-start.html and shell-end.html, and put them in the cache storage.

I use Workbox (specifically the workbox-precaching package) to do this, which makes it easy to handle asset versioning and cache invalidation whenever I update either of these partials.

import {precache} from 'workbox-precaching';

precache([
  {url: '/shell-start.html', revision: SHELL_START_REV},
  {url: '/shell-end.html', revision: SHELL_END_REV},
]);

In the above code, the revision property of each precached URL is generated at build time using the rev-hash package and inserted in the service worker script via Rollup (rollup-plugin-replace).
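
For illustration, generating those revision values at build time might look roughly like this; the file paths and the config itself are assumptions, not the author's actual setup:

// rollup.config.js (sketch): hash the shell partials with rev-hash and
// inject the values into the service worker via rollup-plugin-replace.
import fs from 'fs';
import revHash from 'rev-hash';
import replace from 'rollup-plugin-replace';

export default {
  input: 'src/sw.js',
  output: {file: 'dist/sw.js', format: 'iife'},
  plugins: [
    replace({
      SHELL_START_REV: JSON.stringify(revHash(fs.readFileSync('dist/shell-start.html'))),
      SHELL_END_REV: JSON.stringify(revHash(fs.readFileSync('dist/shell-end.html'))),
    }),
  ],
};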

Alternatively, if you don’t want to generate the revisions yourself, you can use the workbox-webpack-plugin, workbox-build, or workbox-cli packages to generate them for you. When doing that, your code would just look like this (and in your configuration you’d tell Workbox what files you want to revision, and it’ll generate the precache manifest for you, replacing the self.__WB_MANIFEST variable in your output file):

import {precache} from 'workbox-precaching';

precache(self.__WB_MANIFEST);

4) Configure your service worker to combine the content and shell partials

Once you’ve put the shell partials in the cache, the next step is to configure navigation requests to construct their responses by combining the shell partials from the cache with the content partial from the network.

A naive way to do this would be to get the text of each response and concatenate them together to form a new response:

import {getCacheKeyForURL} from 'workbox-precaching';

function getText(responsePromise) {
  return responsePromise.then((response) => response.text());
}

addEventListener('fetch', (event) => {
  if (event.request.mode === 'navigate') {
    event.respondWith(async function() {
      const textPartials = await Promise.all([
        getText(caches.match(getCacheKeyForURL('/shell-start.html'))),
        getText(fetch(event.request.url + 'index.content.html')),
        getText(caches.match(getCacheKeyForURL('/shell-end.html'))),
      ]);

      return new Response(textPartials.join(''), {
        headers: {'content-type': 'text/html'},
      });
    }());
  }
});

I said above that this is the naive way to do it, not because it won’t work, but because it requires you to wait for all three responses to fully complete before you can even begin to deliver any of the response to the page.

All modern browsers and servers support sending and receiving HTML as a stream of content, and service workers are no different. So instead of waiting until you have the full text of each response and then creating a new response from that full string, you can create a ReadableStream and start responding as soon as you have the very first bit of content. And since the shell-start.html file will be coming from the cache, you can generally start responding right away—you don’t need to wait for the network request to finish!

If you’ve never heard of Readable Streams before, don’t worry. When using Workbox (which I recommend) you don’t have to deal with them directly. The workbox-streams package has a utility method for creating a streaming response by combining other runtime caching strategies.

import {cacheNames} from 'workbox-core';
import {getCacheKeyForURL} from 'workbox-precaching';
import {registerRoute} from 'workbox-routing';
import {CacheFirst, StaleWhileRevalidate} from 'workbox-strategies';
import {strategy as composeStrategies} from 'workbox-streams';

const shellStrategy = new CacheFirst({cacheName: cacheNames.precache});
const contentStrategy = new StaleWhileRevalidate({cacheName: 'content'});

const navigationHandler = composeStrategies([
  () => shellStrategy.handle({
    request: new Request(getCacheKeyForURL('/shell-start.html')),
  }),
  ({url}) => contentStrategy.handle({
    request: new Request(url.pathname + 'index.content.html'),
  }),
  () => shellStrategy.handle({
    request: new Request(getCacheKeyForURL('/shell-end.html')),
  }),
]);

registerRoute(({request}) => request.mode === 'navigate', navigationHandler);

In the above code I’m using a cache-first strategy for the shell partials, and then a stale-while-revalidate strategy for the content partials. This means users who revisit a page they already have cached might see stale content, but it also means that content will load instantly.

If you prefer to always get fresh content from the network, you can use a network-first strategy instead.
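
For example, swapping the content strategy above for Workbox's NetworkFirst would look like this:

import {NetworkFirst} from 'workbox-strategies';

// Always try the network for content partials first, falling back to the cache.
const contentStrategy = new NetworkFirst({cacheName: 'content'});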

5) Set the correct title

Observant readers might have noticed that, if you serve the same cached shell content for all pages, you’ll end up having the same <title> tag for every page, as well as any <link> or <meta> tags that had previously been page-specific.

The best way to deal with this is for your page partials to include a script tag at the end that sets the title (and any other page-specific data) at runtime. For example, my page partial template uses something like this:

<script>document.title = '{{ page.title }}'</script>

Note that this is not a problem for search crawlers or other services that render page preview cards. These tools do not run your service worker, which means they’ll always get the full HTML page when making a request.

It’s also not a problem for users who have JavaScript disabled because, again, those users would not be running your service worker either.

Performance gains (in detail)

The histogram I showed at the beginning of the article should give you a sense for how using this technique vastly improves FCP for all users. Here’s a closer look at the specific FCP values at some key percentiles:

First Contentful Paint (in milliseconds)

Percentile    Service Worker    No Service Worker
50th          416               851
75th          701               1264
90th          1181              1965
95th          1797              2632

As you can see, FCP is faster when using a service worker across all key percentiles.

However, since visitors with a service worker installed are always returning visitors, and visitors without service worker installed are likely first time visitors, you might be skeptical as to whether the performance improvements I’m seeing are actually from this technique, or whether they’re from things like resource caching in general.

While resource caching may improve FCP for some sites, it actually doesn’t for mine. I inline both my CSS and SVG content in the <head> of my pages, which means FCP is never blocked on anything other than the page response, and that means the FCP gains seen here are entirely due to how I’m generating the response in the service worker.

The primary reason service worker loads are faster on this site is because users with a service worker installed already have the shell-start.html partial in their cache. And since the service worker is responding with a stream, the browser can start rendering the shell almost immediately—and it can fetch the page’s content from the server in parallel.

But that brings up another interesting question: does this technique improve the speed of the entire response, or just the first part of it?

Again, to answer that question let me show you some timing data for the entire response.

And note that since I use a stale-while-revalidate caching strategy for my content partials, sometimes a user will already have a page’s content partial in the cache (e.g. if they’re returning to an article they’ve already read) and sometimes they won’t (e.g. they previously visited my site, and now they’ve come back to read a new article).

Here’s the response timing data segmented by both service worker status as well as content partial cache status:

Response Complete Time (in milliseconds)

Percentile    Service Worker (content cached)    Service Worker (content not cached)    No Service Worker
50th          92                                 365                                    480
75th          218                                634                                    866
90th          520                                1017                                   1497
95th          887                                1284                                   2213

Comparing the performance results between the “No Service Worker” case and the “Service Worker (content not cached)” case is particularly interesting because in both cases the browser has to fetch something from the server over the network. But as you can see from these results, fetching just the content part of the HTML (rather than the entire page in the “no service worker” case) is around 20%-30% faster for most users—and that even includes the overhead of starting up the service worker thread if it’s not running!

And if you look at the performance results for visitors who already had the content partial in their cache, you can see the responses are near instant for the majority of users!

Key takeaways

This article has shown how you can use service workers to significantly reduce the amount of data your users need to request from your server, and as a result you can dramatically improve both the render and load times for your pages.

To end, I want to emphasize a couple of key pieces of performance advice from this article that I hope will stick with you:

  • When using a service worker, you have a lot more flexibility on how you can get data from your server. Use this flexibility to reduce data usage and improve performance!
  • Never cache full HTML pages. Break up your pages into common chunks that can be cached separately. Caching granular chunks means things are less likely to get invalidated when you make changes.
  • Avoid blocking first paint behind any resource requests (e.g. a stylesheet). For the initial visit you can inline your stylesheet in the <head>, and for returning visits (once the service worker has installed) all resources required for first paint should be in the cache.

Additional resources





Project to take a center aligned app and lay it out like JIR…


Hi Gang,

I am currently undertaking a big sprint whose goal is to take a Bootstrap-based single page app (SPA), written mostly in .NET, and re-lay it out. The key is that it’s all based on Bootstrap, so I can use templates and styles within the Bootstrap framework.

The goal is to redo the layout, which is currently a pretty standard centered layout. The nav is currently in a typical navbar with dropdowns; it needs to be broken out and displayed instead in the left-hand portion of the screen. The business wants the full width of the screen to be occupied by the content, with the left-hand portion of the screen containing the navigation.

Is there a best-practice sort of document that walks through this process or describes the best way to handle it? I’m just looking for a little bit of direction as I approach this. I’m a front-end guy and haven’t needed to totally re-organize an app quite this way so I am looking for a few pointers if anyone has a moment. Thank you in advance for any assistance or advice. Thank you for reading this.

Post image





Prevent Users From Losing Unsaved Data


There are many instances where a user fills in some input in a form, edits that input, and then might attempt to leave the page they’re on. Often, we’ll want to protect the form in such a way that if someone navigates away or closes the browser tab, they are prompted to confirm that they really want to leave the form with unsaved data.

Whenever these kinds of instances occur, you will see an alert appear on the top of your browser like this:

Leaving page alert

Here, we have two different ways of implementing these functionalities:

  • When a user closes or refreshes the tab.
  • When the navigation changes (the user clicks the back or forward navigation buttons, or moves to another route).

So, let’s go ahead with the first implementation:

When a User Closes or Refreshes the Tab

We need to register a window:beforeunload handler and show a message if the user has unsaved data. The beforeunload event is fired when the window, the document, and its resources are about to be unloaded. The document is still visible, and the event is still cancelable at this point.

This event enables a web page to trigger a confirmation dialog asking the user if they really want to leave the page. If the user confirms, the browser navigates to the new page; otherwise, it cancels the navigation.

For example:
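
A sketch of how this might look in an Angular component; the component name and the hasUnsavedData flag are assumptions:

import { Component, HostListener } from '@angular/core';

@Component({
  selector: 'app-edit-form',
  templateUrl: './edit-form.component.html',
})
export class EditFormComponent {
  // Hypothetical flag that is set once the user edits any input.
  hasUnsavedData = false;

  @HostListener('window:beforeunload', ['$event'])
  onBeforeUnload(event: BeforeUnloadEvent): void {
    if (this.hasUnsavedData) {
      // Asks the browser to show its "leave this page?" confirmation dialog.
      event.preventDefault();
      event.returnValue = '';
    }
  }
}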

When the User Navigates to Another Route (Navigation Changed Event)

For the implementation of the navigation-changed event, you first need to create a guard that implements the CanDeactivate interface. It is an interface that a class can implement to be a guard deciding if a route can be deactivated. If all guards return true, navigation will continue. If any guard returns false, navigation will be canceled. If any guard returns a UrlTree, the current navigation will be canceled and a new navigation will be kicked off to the UrlTree returned from the guard.

First, we import CanDeactivate from '@angular/router'.

Then, we create a HasUnsavedData interface and import it in both the guard file and the form component.

Add the guard to the path in the route config file.

Add the CanDeactivate guard to the NgModule providers. Now, we write the main method in the guard file to handle this event and show a popup with the unsaved-data message.

We have to implement the canDeactivate method, which receives the component instance and returns true or false. In this example, true is returned if the confirmation message popped up and the user confirmed; otherwise, the route won’t be changed.
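
A sketch of the guard; the class, file, and route names are assumptions, and the confirm() call stands in for whatever dialog you actually show:

import { Injectable } from '@angular/core';
import { CanDeactivate } from '@angular/router';

// Interface a form component implements so the guard can ask about its state.
export interface HasUnsavedData {
  hasUnsavedData(): boolean;
}

@Injectable({ providedIn: 'root' })
export class UnsavedDataGuard implements CanDeactivate<HasUnsavedData> {
  canDeactivate(component: HasUnsavedData): boolean {
    // Ask for confirmation only when the component reports unsaved data.
    return component.hasUnsavedData()
      ? confirm('You have unsaved changes. Do you really want to leave this page?')
      : true;
  }
}

// In the route config, the guard is attached to the route, for example:
// { path: 'form', component: FormComponent, canDeactivate: [UnsavedDataGuard] }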

Conclusion

To sum up, as a best practice we should create a generic abstract component that all form components can extend. This component registers the browser event, and the Angular guard invokes its canDeactivate API. The user is alerted about unsaved data only when the form is actually being left while dirty and not submitted.





CSS resize none on textarea is bad for UX


For whatever reason, people seem to be passionate about removing the textarea resize handle using the CSS resize: none declaration. Also, GitHub says there are more than 3 million code results in the wild for textarea with CSS resize:none applied.

A resizable textarea element

I’m on Stack Overflow and feel kind of embarrassed about having built reputation by recommending in the past that other people use CSS resize: none on textareas. I’m not a power user, but back in 2011 I posted an answer on Stack Overflow about removing the bottom-right corner dots in a textarea. And the thing is, I still keep getting Stack Overflow reputation for that answer.

Stack Overflow reputation on CSS resize none

Never start an answer with “just”, and never recommend that other people use CSS resize: none in their stylesheets. You can do better than me!

CSS resize:none on textarea is bad UX

I think using the CSS resize:none declaration on a textarea is a bad decision when it comes to the user experience (UX) overall.

Very often, the textarea is limited to a number of rows and columns, or it has a fixed width and height defined via CSS. Based solely on my own experience, this is very frustrating when answering on forums, filling in contact forms on websites, typing in live chat popups, or even private messaging on Twitter.

Sometimes you need to type a long reply consisting of many paragraphs, and wrapping that text within a tiny textarea box makes it hard to understand and to follow as you type. Many times I had to write the text in Notepad++, for example, and then paste the whole reply into that small textarea. I admit I have also opened DevTools to override the resize: none declaration, but that’s not really a productive way to do things.

The CSS resize property

According to MDN, the resize CSS property sets whether an element is resizable, and if so, in which directions. It’s also important to keep in mind that the resize property does not apply to inline elements or to block elements where the overflow property is set to visible.

The CSS resize property is often applied to textareas in order to disable their resizability, and this is what this article is about. I felt an inner contradiction considering the amount of reputation I keep getting on my Stack Overflow answer above while finding this bad UX on my own. Besides that, it looks like the number of GitHub code results on this matter is growing, from 2 million in 2017 as found by @humphd to more than 3 million two years later.

Auto height textareas

A common scenario is to have an auto-height textarea element which basically expands as you type new rows. On this matter, Chris Ferdinandi wrote a good article on how to expand a textarea as the user types.
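
The core idea (this is not Chris Ferdinandi's exact code) fits in a few lines: on every input event, reset the height and then grow the box to its scrollHeight:

const textarea = document.querySelector('textarea');

if (textarea) {
  textarea.addEventListener('input', () => {
    // Reset first so the box can also shrink when text is deleted.
    textarea.style.height = 'auto';
    textarea.style.height = `${textarea.scrollHeight}px`;
  });
}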

But besides the above, I’ve seen lots of JS hacks that involve the CSS resize: none declaration. There are alternatives that simulate the textarea behavior, and a popular one is the classic div with the boolean contentEditable attribute set to true.

  <div contentEditable="true"></div>

Here’s a more detailed and hopefully accessible example using ARIA roles on Twitter’s mobile version:

DevTools ARIA roles on mobile Twitter

Fancy live chats, a.k.a. resize: none everywhere

Because it’s a fancy new live chat widget and the competition out there is really high, everyone wants the most visually pleasing, catchy, and cool box to send a message from.

While most live chat apps use the classic HTML textarea element, the implementations mostly rely on listeners that adjust the CSS height based on the text contained within the box, with the resize: none declaration unfortunately remaining a constant presence in the CSS.

Help Scout uses CSS resize none for the chat widget textarea

So, why is resize: none so popular in this case?

To answer my own question: if I had to write code for a popular live chat app, maybe I wouldn’t want a textarea resize handle to ruin my beautiful component design freshly imported from Figma. Would I?

I guess I’d stick with resize: vertical at least, instead of ruining everything with resize: none. Šime Vidas also tweeted that resize: vertical is robust enough, and it’s cross-browser.

Conclusion

You must really hate your users if textarea {resize: none} is in your stylesheets. CSS resize: none is bad for UX, and you already know it.







Free Website Builder + Free CRM + Free Live Chat = Bitrix24


(This is a sponsored post.)

You may know Bitrix24 as the world’s most popular free CRM and sales management system, used by over 6 million businesses. But the free website builder available inside Bitrix24 is worthy of your attention, too.

Why do I need another free website/landing page builder?

There are many ways to create free websites: Wix, Squarespace, WordPress, etc. And if you need a blog, Medium, Tumblr, and others are at your disposal. Bitrix24 is geared toward businesses that need websites to generate leads, sell online, issue invoices, or accept payments. And there’s a world of difference between regular website builders and the ones that are designed with specific business needs in mind.

What does a good business website builder do? First, it creates websites that engage visitors so that they start interacting. This is done with the help of tools like website live chat, a contact form, or a call back request widget. Second, it comes with a landing page designer, because business websites are all about conversion rates, and increasing conversion rates requires endless tweaking and repeated testing. Third, integration between a website and a CRM system is crucial. It’s difficult to attract traffic to websites, and advertising is expensive. So, it makes sense that every prospect from the website is logged into the CRM automatically and that you sell your goods and services to clients not only once but on a regular basis. This is why Bitrix24 comes with email and SMS marketing and an advertising ROI calculator.

Another critical requirement for many business websites is the ability to accept payments online and function as an ecommerce store, with order processing and inventory management. Bitrix24 does that too. Importantly, unlike other ecommerce platforms, Bitrix24 doesn’t charge any transaction fees or impose sales volume limits.

What else does Bitrix24 offer free of charge?

The only practical limit of the free plan is 12 users inside the account. You can use your own domain free of charge, the bandwidth is free and unlimited and there’s only a technical limit on the number of free pages allowed (around 100) in order to prevent misusing Bitrix24 for SEO-spam pages. In addition to offering free cloud service, Bitrix24 has on-premise editions with open source code access that can be purchased. This means that you can migrate your cloud Bitrix24 account to your own server at any moment, if necessary.

To register your free Bitrix24 account, simply click here. And if you have a public Facebook or Twitter profile and share this post, you’ll be automatically entered into a contest, in which the winner gets a 24-month subscription for the Bitrix24 Professional plan ($3,336 value).


The post Free Website Builder + Free CRM + Free Live Chat = Bitrix24 appeared first on CSS-Tricks.





Long-form content: Any reasoning for/against presenting it two-column?



Interested to hear people's thoughts on whether two-columning heavy text pages is a good move or not. Possible cons that spring to mind initially are that the user has to scroll down, then back up to continue reading the next column, plus the overwhelming amount of text on the screen at one time.

(Just in case you're wondering why the picture isn't loaded: it's deliberately blurred.)

https://preview.redd.it/89ludz000wd41.png?width=1834&format=png&auto=webp&s=00a28caf3394301916d2089f89b18a9af0434c66

submitted by /u/FollowTheCart





Any way I can flatten the layers(w/ different blend modes) f…


Post image

So I was able to make a flame using different brushes: a transparent-background flame image with multiple layers in different blend modes. But I need to flatten the image and keep the transparent background so that I can have it digitally printed on a shirt.

Is that possible? I don’t think this can be printed with multiple layers on a shirt. I don’t know much about it, but I want to flatten/merge the layers into one. There’s no way I can just save the image as a JPEG, as I’d lose the transparent background.

I tried to convert the layers/group to a Smart Object, but that doesn’t work.

Appreciate any advice. Thank you!





Jr. Dev question – Custom Drop Down


Hi everyone, I hope this is the right place to post this question; if not, I apologize! I am a junior front-end developer, and one of my clients wants their drop-down to look like the screenshot attached. We are currently using WordPress and the Search and Filter Pro plugin to filter the different types of logos on their site (which are all posts).

My question to you guys is this: how hard will it be to implement something like this? To convert a standard drop-down select to what they are asking for in the screenshot?

Any and all help is greatly appreciated. Thank you!

Post image


