
How To Hire a React Native Developer


The hiring process for React Native developers involves several stages, each requiring considerable time and effort. To make the process less stressful, we suggest you follow these five steps:

  1. Prepare the specifications.
  2. Outline the job requirements.
  3. Create a shortlist of candidates.
  4. Arrange the job interviews.
  5. Qualify the candidates.

In this post, we will cover the first three parts of the hiring process:

  1. How to create the specification for a React Native project.
  2. How to complete the job overview.
  3. How to shortlist the candidates.

By the end of this article, you will be prepared to reach out to potential candidates. In the second part of the article, we will cover the job interview process and qualifying the candidates.

1. How To Create the Specification for a React Native Project

Finding the best candidate requires good preparation. Your first goal will be to outline the project requirements. Be as specific as you can. Detailed documentation draws the attention of professional developers and encourages them to apply.

To complete the project specification, you will need to:

  • Explain the user flows.
  • Create graphic layouts.
  • Outline technical requirements.

Explain the User Flows

User flows help to convey the core idea of the app. Based on that, the potential developer will evaluate the scope and define the biggest challenges in upcoming work.

After finishing this part, you will be able to explain your product's specifics to potential contractors with confidence. In addition, you will verify whether your product is ready for the market. These questions will help you review your current progress:

  • Is your application flow simple and easy to use?
  • Does your app help users achieve their primary goals?
  • What are the highest-priority features in your app?
  • How do you encourage users to purchase extra services?
  • What advantages do you offer to your users?

Writing the user flows is a crucial part of the product build. Here is the list of suggested questions that will help you develop the user flow overview:

  • What user types (customers, suppliers, supervisors) are present in my application?
  • What abilities does each user category have?
  • What information will users be asked to provide after registration?
  • Do I need users to confirm their registration via email or SMS?
  • How can users view other users' profiles?
  • How can users communicate with each other?
  • What kind of third-party applications would I like to integrate with my app?
  • How will users complete the purchase process in my app?
  • How do I want to track information about new and returning users?
  • Do I want to view billing information (invoices, payments)?
  • What types of notifications do I want to display to users?
Example of the user flows overview

Create Graphic Layouts

Now that you have end-to-end descriptions of the user flows, your next step is to translate them into a graphic UI design.

Benefits of the graphic UI design:

  • It helps communicate the graphic animations and business logic.
  • It makes it possible to identify challenges and risks early.
  • It enables a realistic timeline and cost estimate.

In general, mobile applications vary in their number of screens: a small social media app may have 20+ screens, while a business-class mobile application can have 200+.

So, depending on your case and engagement, you can choose your own path: master a UI design tool and create the screens yourself, or outsource this work to a UI/UX designer.

Sample screens of ridesharing app

Outline Technical Requirements

React Native developers create the mobile UI, and their responsibility is to wire it up with the back end. To handle this integration, developers first need the documentation for the API. It should include the following parts:

  • Resource descriptions.
  • Endpoints and methods.
  • Parameters.
  • Request samples.
  • Response examples.
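To make this concrete, here is a minimal, hypothetical entry for a single endpoint; all names, paths, and fields below are invented for illustration:

```
Resource: Contact - a person associated with a customer account
Endpoint: GET /api/v1/contacts/{id}
Parameters:
  id (path, string, required) - the unique contact identifier
Request sample:
  GET /api/v1/contacts/42
Response example:
  { "id": "42", "name": "Rose Gonzalez", "title": "SVP, Procurement" }
```

A real specification would repeat this structure for every resource and method the app consumes.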

If you are a non-technical specialist, you can ask for help from a consultant who specializes in developing API specifications.

There are a few ways to find an eligible consultant. Consultants usually hold one of these positions:

  • Business analysts at a software development agency (most agencies produce end-to-end documentation for new projects).
  • Software engineers with a back-end background (either back-end or full-stack engineers).
  • Technical writers who specialize in back-end documentation.

The back-end specification will help your team stay organized and deliver the project without unexpected distractions.

2. How To Complete the Job Overview

We have reached the midpoint of our journey. From here, every new step will be easier to take.

Your current task is to create the job overview. Generally, it includes the technical requirements along with preferences regarding the skills of the candidate.

You can use the following job overview template for React Native developers.

Job Requirements Template

Project overview:

  • Describe your company and the core idea of the project.
  • Provide a brief summary of the user roles.

Main technical requirements:

  • Mention the technological stack of your project.
  • Preferences on UI design (graphic rendering, specific features).
  • Third-party integrations, like map and geo-location services, mail and SMS integrations, payment gateways.
  • Location.
  • Language.


  • Project duration.
  • Work engagement (full-time or part-time).
  • Time zone preference.
  • Working environment (management system and approaches).
  • Budget expectations.

Skills requirements:

  • Development experience.
  • Special technical skills.
  • Projects on GitHub.
  • Portfolio works.
  • Education.

References to the materials:

  • User flows overview.
  • Graphic UI design.
  • Back-end specification.

Sample of the Job Overview

Let's put this into practice. At present, many companies are building ride-sharing services. Imagine for a moment that you represent the management of a transportation company.

Our current task is to hire a React Native developer who will build the front end for the upcoming mobile service app. The first step is to write the announcement for a job post.

Project Overview:

The logistics company Rideshare Services Int. is looking to extend its customer service and plans to create a cross-platform mobile application. The app aims to help car drivers earn additional income and minimize their transportation expenses.

User roles:

There are three main user categories within the app: driver, passenger, admin.

  • Drivers indicate their daily route on the map and set a schedule. Based on that, the mobile app should find passengers looking to request a ride at the specified time.
  • Passengers can offer an individual price for the desired service. Drivers can take the opportunity if they agree with the proposed cost.
  • Admin users should be able to view general statistics and moderate users.

Technological Stack:

The mobile UI needs to be built with React Native. The developer is free to choose the state management library: Redux or MobX will work fine.

The specification is ready and can be shared with an eligible candidate. The back end is built with Node.js (NestJS). The database is PostgreSQL. Preferred cloud service: Amazon Web Services.

Preferences on UI Design:

The screen designs are already completed with Figma. There are 90 screens ready for implementation. The requirements for the graphic animation are minimal since the project is at the early MVP stage.

Skills Requirements:

  • Previous experience using React Hook Form in a project is much preferred.
  • Experience working with Swift and Android Studio (to be able to work with native modules on iOS and Android).
  • Knowledge of TypeScript is a must-have.
  • Experience working with the Google Maps API.
  • Experience with Twilio and Stripe will be a big advantage.

Location and Language:

Our project will initially be based only in the UK, so we will require only one language. In the future, we hope to extend it with Norwegian and Dutch versions.

We work in the UK time zone but are flexible about your time preferences. We hope to have a 3–4 hour overlap between our time zones, preferably in the morning hours.

Duration of the project:

Based on our preliminary estimate, the project will take 3.5–4 months to finish. Since the deadline is tight, you will need to work full-time (40 hours per week).

Working Environment:

  • We use GitHub for the code repository.
  • All communications are run through Slack and Zoom.
  • We manage the tasks with Jira.

Budget Expectations:

Our budget is in the range of USD 30–35K (based on our assumptions about the timeline).

Required Skills:

  • Master's or bachelor's degree in computer science.
  • 3+ years of experience with React Native, on both iOS and Android.
  • A good understanding of Agile and Scrum principles.

Project Materials:

Besides the graphic UI layouts and the back-end specification, we are willing to share the user flows overview. We will send you the documentation after reviewing your application form.


  • Please send us your resume if you are interested in this position. In your reply, describe your skills in relation to the project requirements.
  • Share links to the mobile apps you have built before and specify your role within the team on each project.


Now we are ready to publish the job overview. You will receive a significant number of replies after placing the job announcement on specialized job platforms.

Besides those, some global social websites provide job search opportunities to their communities, so you can also use their services to hire React Native developers.

Each platform will make a difference in your outreach process. Upwork, for instance, can bring you over 100 proposals on the first day. To test this expectation, we simulated the hiring process on that platform.

We published the same job overview and requirements as in our sample. You can see the outcome of this experiment below: the activity was so high that we received over 40 proposals in the first two hours.

Upwork Proposals
So, it makes sense to stagger your publications and post your requirements on these web resources one by one.

3. How to Shortlist the Candidates

As you can see, you will receive dozens of applications right after publishing your detailed job overview.

Now you need to pre-qualify the candidates based on the following criteria:

  • Did the applicant mention work related to your industry?
  • Did the applicant describe their skills and experience against your stated requirements?
  • Could the applicant work full-time on your project?
  • Did the candidate describe their role in previous projects?
  • Did the applicant provide a link to their GitHub profile?
  • Do they have recommendations from past clients or employers?

You can use an applicant screening rubric. The following template will help you define your criteria and make the right decision.

Applicant Screening Template

Create a list of the first 20 candidates matching your expectations. Respond to them via email or through the job search platform, thanking them for their time and interest. Propose a meeting by sending a link via Calendly or any other booking service you normally use. Along with that, share your specifications with the candidates so they can prepare for the job interview.

In our next post, we will provide further details on the process of hiring a React Native engineer. An essential part will be devoted to the questions you can ask candidates during the job interview.

Thanks for reading, and good luck with your preparation!



Web Features That May Not Work As You’d Expect

As the web gets more and more capable, developers are able to make richer online experiences. There are times, however, where some new web capabilities may not work as you would expect in the interest of usability, security and privacy.

I have run into situations like this, like lazy loading in HTML. It's easy to drop that attribute onto an image element only to realize it actually needs more than that to do its thing. We'll get into that specific one in a moment as we look at a few other features that might not work exactly as you'd expect.

You can't detect if a user visited a link

This limitation has been around for a while, but it does show how browser features can be exploited. One possible exploit: an anchor gets some :visited link style in CSS and is positioned off-screen. With the off-screen anchor, one could use JavaScript to change the anchor's href value and check whether a particular href causes the link to appear visited, reconstructing a user's history in the process.

Known as the CSS History Leak, this was so pervasive at one time that the Federal Trade Commission, the United States’ consumer protection agency, had imposed harsh fines for exploiting it.

These days, attempting to use getComputedStyle on a :visited link returns the style of the :unvisited link instead. That’s just one of those things you have to know because that’s different from how it intuitively ought to work.

But we can get around this in two ways:

  1. make the visited link’s style trigger a side effect (e.g. a layout shift), or
  2. leverage the sibling (~ or +) or child (>) CSS selectors to render another style.

Regarding side effects, while there are some clever yet fragile ways to do this, the options we have for styling :visited links are limited and some styles (like background-color) will only work if they’re applied to unvisited links. As for using a sibling or child, executing getComputedStyle on these returns the style as if the link wasn’t visited to begin with.

Browsers don’t cache assets across sites anymore

One advantage of a CDN was that it allowed a particular resource (like Google Fonts) to be cached in the browser for use across different websites. While this did provide a big performance win, it has grave privacy implications.

Given that an asset that's already cached loads faster than one that's not, a site could perform a timing attack to not only see your site history but also expose both who you are and your online activity. Jeff Kaufman gives an example:

Unfortunately, a shared cache enables a privacy leak. Summary of the simplest version:

  • I want to know if you’re a moderator on www.forum.example.
  • I know that only pages under www.forum.example/moderators/ load www.forum.example/moderators/header.css.
  • When you visit my page I load www.forum.example/moderators/header.css and see if it came from cache.

In light of this, browsers no longer offer a shared cross-site cache.

performance.now() may be inaccurate

A scary group of vulnerabilities came out a couple of years ago, one of which was called Spectre. For an in-depth explanation, see Google's leaky.page (works best in Chromium) as a proof of concept. For the purposes of this article, just know that the exploit relies on the highly accurate timing that performance.now() provides to try to map sensitive CPU data.

Text about the demo on the left side of the page and two black terminal-looking code blocks on the right side with a black background and green text.
The demo at leaky.page

To mitigate Spectre, browsers have reduced the timer's accuracy and may add noise as well. The granularity ranges from 20μs to 1ms and can change based on various conditions like HTTP headers and browser settings.
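A quick way to see what resolution you are actually getting is to sample the timer and record the smallest positive tick. This is a rough sketch: in a browser you would observe the coarsened granularity, while in Node.js (where no such mitigation applies) the timer stays fine-grained:

```javascript
// Sample performance.now() repeatedly and record the smallest positive
// delta between consecutive readings. In browsers this reveals the
// deliberately coarsened (and possibly jittered) resolution.
function observedGranularity(samples = 1000) {
  let smallest = Infinity;
  let last = performance.now();
  for (let i = 0; i < samples; i++) {
    const now = performance.now();
    const delta = now - last;
    if (delta > 0 && delta < smallest) smallest = delta;
    last = now;
  }
  return smallest; // smallest observed tick, in milliseconds
}

console.log(`observed granularity: ${observedGranularity()} ms`);
```

The exact figure depends on the browser, its settings, and whether the page is cross-origin isolated.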

Lazy loading with the loading attribute doesn’t work without JavaScript

Lazy loading is a technique where assets are loaded only when they scroll into the viewport. Until recently, we could only implement this in JavaScript using IntersectionObserver or onscroll. Now (except in Safari) we can apply the loading attribute to images and iframes (in Chromium) and the browser handles lazy loading.

Note that lazy loading can’t be polyfilled since an image is probably loading by the time you check for the loading attribute’s support.

Being able to do this in HTML makes it sound like the attribute doesn’t require JavaScript at all, but it does. From the WHATWG spec:

  1. If scripting is disabled for an element, return false.

    This is an anti-tracking measure, because if a user agent supported lazy loading when scripting is disabled, it would still be possible for a site to track a user’s approximate scroll position throughout a session, by strategically placing images in a page’s markup such that a server can track how many images are requested and when.

I've seen articles mention that this attribute is how you support lazy loading "without JavaScript," which isn't true, though it is true that you don't have to write any.
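For reference, the markup itself is minimal (the file names and dimensions here are placeholders); explicit dimensions help avoid layout shift while the deferred image loads:

```html
<!-- Fetch is deferred until the image nears the viewport,
     and only when scripting is enabled, per the spec. -->
<img src="photo.jpg" loading="lazy" width="640" height="360"
     alt="A description of the photo">

<!-- iframes accept the attribute too (in Chromium) -->
<iframe src="embed.html" loading="lazy" width="640" height="360"></iframe>
```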

Browsers can limit features based on user preferences

Some users might opt to heavily restrict browser functionality in the interest of further security and privacy. Firefox and Tor are two browsers that do this through the resist fingerprinting setting, which does things like reducing the precision of certain variables (dimensions and time), omitting certain variables entirely, limiting or disabling some Web APIs, and never matching media queries. WebKit has a document outlining how browsers can approach fingerprint resistance.

Note that this goes beyond the standard anti-tracking features that browsers implement. It’s unlikely that a user will enable this as they would need a very specific threat model to do so. Part of this can be countered with progressive enhancement, graceful degradation, and understanding your users. This limitation is a big issue when you actually need fingerprinting, like fraud detection. So, if it’s absolutely necessary, look for an alternative means.

Screen readers might not relay the semantics of certain elements

Semantic HTML is great for many reasons, most notably that it conveys meaning in markup that software, like screen readers, interpret and announce to users who rely on them to navigate the web. It’s essential for crafting accessible websites. But, at times, those semantics aren’t conveyed—at least how you might expect. Something might be accessible, but still have usability issues.

An example is the way removing a list’s markers removes its semantic meaning in WebKit with VoiceOver enabled. It’s a very common pattern, most notably for site navigation. Apple Accessibility Standards Manager James Craig explains why it’s a usability issue, though, citing the W3C’s Design Principle of Priority of Constituents:

In case of conflict, consider users over authors over implementors over specifiers over theoretical purity. In other words costs or difficulties to the user should be given more weight than costs to authors;

Another case where semantics might not be relayed is with emphasis. Take inline elements like strong, em, mark, ins, del, and data: all elements that have semantic meanings but are unlikely to be read out because they can get noisy. This can be changed in the user's screen reader settings, but if you really want something read out, you can declare it as visually hidden text in the content property of a :before or :after pseudo-element.
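As a sketch of that technique (the announced wording is invented for illustration), the generated text is hidden visually but still exposed to assistive technology as part of the element's content:

```css
/* Announce deletions explicitly by injecting visually hidden text. */
del::before,
del::after {
  clip-path: inset(100%);
  height: 1px;
  width: 1px;
  overflow: hidden;
  position: absolute;
  white-space: nowrap;
}

del::before { content: " [deletion start] "; }
del::after  { content: " [deletion end] "; }
```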

To illustrate this I made a brief example to see how NVDA with Firefox 89 and VoiceOver with Safari 14.6 read out semantic elements.

Unlike VoiceOver, NVDA reads out some of the semantic elements (del, ins and mark) and tries to convey emphasis by gradually increasing the volume of emphasized text. Both have no trouble reading out the :before/:after pseudo-elements, however. Also, VoiceOver read out the tag's brackets (greater than, less than), though both screen readers can adjust how much punctuation is read.

To see whether or not you need to emphasize the emphasis, make sure you test with your users and see what they need. I didn’t focus on the visual aspect but the default styling of emphasis elements may be inconsistent across browsers, so make sure you provide suitable styling to go along with it.

Web storage might not be persistent

The WHATWG Web storage specification includes a section on privacy that outlines possible ways to prevent storage from being a tracking vector. One such way is to make the data expire. This is why Safari controversially limits script writable storage for seven days. Note that this doesn’t apply to “installed” websites added to the home screen.


Interesting, isn't it? Some web features that we might expect to work a certain way just don't. That isn't to say the features are wrong and need to be fixed; it's more of a heads-up as we write code.

It's worth examining your own assumptions during development. Critically examine what your users need and factor it in as you make your site. You're certainly welcome to work around these as you encounter them, but in cases where you're unable to, make sure to provide reasonable progressive enhancement and graceful degradation. It's OK if users don't experience a website the exact same way in every browser, as long as they're able to do what they need to.

That’s my list of things that don’t work the way I expect them to. What’s on your list? I’m sure you’ve got some and I’d love to see them in the comments!


Leveraging Salesforce Without Using Salesforce


I was first introduced to Salesforce during a Gartner Enterprise Architecture summit back in 2008. Full transparency here: the primary reason I attended the presentation was the promise of a cool-looking force.com t-shirt that awaited each of us at the end. 

The apparel item did not disappoint, as I have a few historical vacation photos which include me wearing that very item. Here is one of my favorites back in 2010 with my son, Eric:

What also did not disappoint me was the technology built on what was then the force.com platform. Those days were a bit confusing because there was force.com and salesforce.com. I quickly understood they were often differentiated as noted below:

“salesforce.com is generally used to refer to the CRM functionality (the sales, service and marketing applications) and force.com is generally used to refer to the underlying platform (the database, code, and UI on which all the apps are built)”

TechTarget also provides a similar definition. As the years have passed, Salesforce has retired the force.com brand, referring to its platform offerings simply as "Salesforce Platform." Still, that t-shirt was cool-looking, right?

A couple of years later, I was still working for the company that sent me to the 2008 Gartner summit. It was then that they decided to embrace the use of Salesforce to track items related to the leasing segment of their business. My basic understanding of Salesforce helped establish custom email routing rules that were passing through internal SendMail gateways. However, that was really my only involvement with the Salesforce ecosystem.

2015 and an Early Publication

Fast-forward seven years to 2015. I was now working as a feature developer, in a full-time corporate role, for a very large automotive conglomerate. After a sprint's worth of feature design that essentially introduced a lightweight CRM solution, our team received direction from the corporate office to adopt Salesforce instead.

For the next six months, our agile team—clearly in the “performing” phase—was successful at moving from an existing CRM solution to utilizing Salesforce. This work inspired one of my first publications on DZone.com:

Into The Development Time Machine

In fact, since that time I have published several articles about Salesforce, some of which are noted below:

Using Salesforce as a Service

While Salesforce provides an excellent experience, introducing another user interface is not always ideal. In fact, back in 2015, our team felt like we were taking a step backward when we presented the (now called "Salesforce Classic") user interface to consumers who were used to a reactive web design.

Salesforce has evolved its user interface since then, first releasing its proprietary Aura framework, then introducing Lightning Web Components, an implementation of the web components standard that can run on its platform or be used in your own web application. However, there is still the challenge of asking consumers to adopt yet another application into their daily portfolio of technology solutions.

An alternative approach is simply to utilize Salesforce as a service. After all, Salesforce has provided a robust RESTful API for over 10 years now, which allows clients to GET, POST, PUT, and DELETE object data as needed.
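As a rough sketch of what such a call looks like (the instance URL, API version, and token below are placeholder values, and the request is only built here, not sent):

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpRequest;
import java.nio.charset.StandardCharsets;

public class SalesforceRequestSketch {

    // Build a GET request against the Salesforce query endpoint with a SOQL query.
    static HttpRequest queryRequest(String instanceUrl, String accessToken, String soql) {
        String encoded = URLEncoder.encode(soql, StandardCharsets.UTF_8);
        return HttpRequest.newBuilder()
                .uri(URI.create(instanceUrl + "/services/data/v52.0/query?q=" + encoded))
                .header("Authorization", "Bearer " + accessToken)
                .GET()
                .build();
    }

    public static void main(String[] args) {
        HttpRequest request = queryRequest(
                "https://example.my.salesforce.com",  // placeholder instance URL
                "ACCESS_TOKEN",                       // placeholder OAuth token
                "SELECT Name FROM Contact");
        System.out.println(request.uri());
    }
}
```

Sending the request with java.net.http.HttpClient would return a JSON payload of matching records.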

The focus of this publication is to provide options on how to leverage the Salesforce API while side-stepping the use of the Salesforce client. 

Our Scenario

To put things into context for using the Salesforce RESTful API, consider an example where an existing application is already in place. The application provides a majority of the daily functionality required by its users. A major gap, though, is the contact information regarding current and potential clientele.

The feature team recently discovered that all the necessary information exists in Salesforce and there are processes already in place to maintain those contacts. Early indications are that only minor updates will ever be required to a given contact from the existing application.

This article will focus on completing a research spike to accomplish the following items:

  1. Create a Salesforce instance for prototyping a solution.

  2. Establish a mechanism to retrieve and update contacts in Salesforce.

  3. Determine how authentication will work.

  4. Validate the functionality using Postman or simple cURL commands.

What You Need To Know

Before we get started, there are a few things that I feel like one should know before heading down this path. You know, that “full transparency” thing that I noted regarding the force.com t-shirt in my introduction.

API Limits Exist

The biggest challenge my feature team faced in 2015 was the number of RESTful API calls that Salesforce allows for every client. Below is a screenshot from the API Request Limits and Allocations page:

In our case, the two items noted above were a big concern for our team. In hindsight, given a better understanding of Salesforce and the ability to cache data, I am confident the resulting Salesforce instance would not have exceeded those limitations. However, I wanted this article to highlight that element for those deciding whether to utilize this approach.

Authentication Options

Two authentication approaches were considered for this article: user-based and service-based.

The determination of which to use is directly related to the desire (or need) to make requests as a given user that exists in Salesforce. The alternative is a service-based approach, where all of the requests originate from a single user that exists in Salesforce.

For this example, the service-based approach will be utilized. As a result, all requests will be completed under the identity of a service-based account in Salesforce.

Integration Options

When connecting to Salesforce, there are several options. Over the last six years, I have been able to utilize the following integration options: MuleSoft, Heroku Connect, connecting directly from the client, and a custom service layer (such as Spring Boot).

Using MuleSoft or Heroku Connect would provide connectors and deep insight into the Salesforce data domain. While both are excellent solutions, they require an additional investment since they are subscription-based.

The direct connect option is possible, but a couple of challenges exist. First, service-based authentication is not likely to be an option, because of the challenges of storing the login credentials securely in the client. Second, the client becomes heavier as Salesforce data is reformatted for digestible use.

As you might expect (and given my publication history), for this example, I am going to utilize the Spring Boot option and leverage the Salesforce RESTful API. I have a high degree of comfort with this approach.

Creating a Salesforce Instance

The first step is to create a free developer instance of Salesforce. I was able to get started using the following URL:


This led to a simple form that I had to fill out online:

Once the form was submitted, I received the following email at the address noted above:

The contents of this email were quite helpful, as they provided the URL to my developer instance of Salesforce, plus my username.

After verifying my account, I was required to set a password. 

Since there will already be contacts in the developer instance, the base setup for Salesforce is complete.

Adding a Connected App

To connect to Salesforce from the Spring Boot service, a new connected app needs to be created.

The following steps were completed using my developer instance of Salesforce:

  1. Navigate to the Setup link

  2. Navigate to Apps → Apps Manager section on the left-hand menu

  3. Select the New Connected App button

  4. Populate the following properties:

    1. Connected App Name to something like “Spring Boot Integration”

    2. API Name (computed value should be fine)

    3. Contact Email (your email address)

    4. API → Enable OAuth Settings = true

    5. Set callback URL to “https://login.salesforce.com/”

    6. Use OAuth scopes “Access and manage your data (api)” and “Perform requests on your behalf at any time (refresh_token, offline_access)” (for now)

    7. Use “Relax IP restrictions” (for now)

    8. Use “Refresh token is valid until revoked” (for now)

  5. Save the new connected app

Below is an example of the connected app that I created:

For clarification, below is an example of the OAuth policies I utilized:

Make sure to note the following items for reference later:

  • Consumer Key value

  • Consumer Secret value
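With those two values in hand, one common way to obtain an access token for a service-based integration is the OAuth 2.0 username-password flow. This is a sketch with placeholder values (the Salesforce security token is appended to the password; this flow is fine for prototyping but generally discouraged for production):

```shell
# Request an access token; all -d values below are placeholders.
curl https://login.salesforce.com/services/oauth2/token \
  -d "grant_type=password" \
  -d "client_id=YOUR_CONSUMER_KEY" \
  -d "client_secret=YOUR_CONSUMER_SECRET" \
  -d "username=you@example.com" \
  -d "password=YOUR_PASSWORD_PLUS_SECURITY_TOKEN"
```

The JSON response includes an access_token and the instance_url to use for subsequent API calls.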

Configure Network Access

An optional (but recommended) step for the prototyping stage is to create a trusted IP range. This can be used both by your instance of the Salesforce client and for the Spring Boot service as well.

Creating a new trusted IP range simply requires knowing your current IP address and following the steps listed below.

  1. Obtain your IPv4 address (example:

  2. Navigate to Security → Network Access in Salesforce Setup

  3. Create a new Trusted IP Range which includes your current IP address (I used start address of and

At this point, Salesforce should be set up and ready for use by the Spring Boot service.

Creating the Spring Boot Service

Using the Spring Initializr from IntelliJ IDEA, a new Spring Boot service called salesforce-integration-service was created with the following dependencies:









The Spring Boot service will utilize the following two features:

  • RESTful functionality

  • Simple abstract caching

Using the values noted above, a Run configuration was created as shown below:

Starting the Spring Boot service will display the following information in the console:

With the Salesforce service started, it is time to add the necessary integration classes and methods.

Integrating with Salesforce

Since the use case for this article is centered on contacts, the following data transfer objects (DTOs) were created in Spring Boot:

@JsonIgnoreProperties(ignoreUnknown = true)
public class Contact {
    public static final String CONTACT_QUERY = "SELECT Name, Title, Department FROM Contact";

    @JsonProperty(value = "Name")
    private String name;

    @JsonProperty(value = "Title")
    private String title;

    @JsonProperty(value = "Department")
    private String department;

    private SalesforceAttributes attributes;

    public String getId() {
        if (attributes != null && attributes.getUrl() != null) {
            return StringUtils.substringAfterLast(attributes.getUrl(), "/");
        }

        return null;
    }
}

@JsonIgnoreProperties(ignoreUnknown = true)
public class SalesforceAttributes {
    private String type;
    private String url;
}
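The getId() method above simply returns everything after the last “/” in the record URL. A standalone sketch of that extraction using plain String methods (IdFromUrl is a hypothetical helper name standing in for Commons Lang's StringUtils.substringAfterLast):

```java
class IdFromUrl {
    // Return the segment after the last slash of a Salesforce record URL,
    // or null when there is no slash (mirrors the getId() null-safety above).
    static String idFromUrl(String url) {
        if (url == null) {
            return null;
        }
        int idx = url.lastIndexOf('/');
        return idx >= 0 ? url.substring(idx + 1) : null;
    }

    public static void main(String[] args) {
        // Prints the Salesforce record ID: 0035e000008eXq0AAE
        System.out.println(idFromUrl("/services/data/v52.0/sobjects/Contact/0035e000008eXq0AAE"));
    }
}
```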

As a result, when a Contact object is returned, it will include a payload similar to what is displayed below:

{
    "attributes": {
        "type": "Contact",
        "url": "/services/data/v52.0/sobjects/Contact/0035e000008eXq0AAE"
    },
    "id": "0035e000008eXq0AAE",
    "Name": "Rose Gonzalez",
    "Title": "SVP, Procurement",
    "Department": "Procurement"
}

Building upon these objects, the Spring Boot RESTful service will be designed as shown below:

Introducing Caching

In order to reduce the number of API calls required to retrieve data, I configured the abstract caching included in Spring Boot this way:

@Cacheable("contacts")
public List<Contact> getContacts() throws Exception {
    // ... retrieve contacts from Salesforce ...
}

@CacheEvict(value = "contacts", allEntries = true)
public Contact updateContact(String id, PatchUpdates patchUpdates) throws Exception {
    // ... update the contact in Salesforce ...
}

The methods to retrieve contact information use the @Cacheable annotation to set/retrieve from the cache when possible. For simplicity in this example, when a contact is updated, the entire cache is evicted using the @CacheEvict annotation.
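Under the hood, Spring's simple cache provider behaves like a map keyed by the method arguments. The following is a minimal hand-rolled illustration of the same get-or-load / evict-all pattern (a conceptual sketch, not Spring's actual implementation):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Illustrates what @Cacheable / @CacheEvict(allEntries = true) do conceptually:
// serve repeated lookups from a map and clear everything when data changes.
class SimpleCache<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();

    // Like @Cacheable: return the cached value, or compute and store it on a miss.
    V getOrLoad(K key, Supplier<V> loader) {
        return cache.computeIfAbsent(key, k -> loader.get());
    }

    // Like @CacheEvict(allEntries = true): drop every entry after an update.
    void evictAll() {
        cache.clear();
    }
}
```

With this shape, a second call for the same key never reaches the loader (the expensive Salesforce call), which is exactly the behavior validated by the log timings later in the article.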

Adding a Logging Interceptor

To provide insight into the performance of the Spring Boot RESTful service, I created a simple logging interceptor to write messages to the console as API calls are processed. 

The first step is to establish the LoggingInterceptor class:

public class LoggingInterceptor implements HandlerInterceptor {
    private final String loggedStartTimeKey = "_loggedStartingTime";

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) {
        long startTime = System.currentTimeMillis();
        request.setAttribute(loggedStartTimeKey, startTime);
        log.info("Request Started: method={} path={}", request.getMethod(), request.getRequestURI());
        return true;
    }

    @Override
    public void afterCompletion(HttpServletRequest request, HttpServletResponse response, Object handler, @Nullable Exception ex) {
        long loggedStartTime = (long) request.getAttribute(loggedStartTimeKey);
        long endTime = System.currentTimeMillis();
        long timeTakenMs = endTime - loggedStartTime;
        log.info("Request Completed: method={} path={} timeTaken={} (milliseconds)", request.getMethod(), request.getRequestURI(), timeTakenMs);
    }
}

Next, I updated the WebConfig class to use the interceptor:

@Configuration
public class WebConfig implements WebMvcConfigurer {
    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        registry.addInterceptor(getLoggingInterceptor()).addPathPatterns("/**");
    }

    @Bean
    public LoggingInterceptor getLoggingInterceptor() {
        return new LoggingInterceptor();
    }

    @Bean
    public ObjectMapper objectMapper() {
        return new ObjectMapper().enable(SerializationFeature.INDENT_OUTPUT);
    }

    @Bean
    public CloseableHttpClient closeableHttpClient() {
        return HttpClients.createDefault();
    }
}

During the validation section, the log events shown below will be included in the results.

Validating Functionality

With the Spring Boot service ready for use and running, the next step is to validate our expected functionality.

Getting Salesforce Contacts

A list of contacts can be retrieved from Salesforce via the Spring Boot RESTful service using the following cURL command:

curl --location --request GET 'http://localhost:9999/contacts'

Once submitted, we receive an HTTP 200 (OK) response, with a full list of contacts using the Contact DTO created in Spring Boot: 

[
    {
        "attributes": {
            "type": "Contact",
            "url": "/services/data/v52.0/sobjects/Contact/0035e000008eXq0AAE"
        },
        "id": "0035e000008eXq0AAE",
        "Name": "Rose Gonzalez",
        "Title": "SVP, Procurement",
        "Department": "Procurement"
    },
    {
        "attributes": {
            "type": "Contact",
            "url": "/services/data/v52.0/sobjects/Contact/0035e000008eXqJAAU"
        },
        "id": "0035e000008eXqJAAU",
        "Name": "Jake Llorrac",
        "Title": null,
        "Department": null
    }
]

Please note: To keep the result set concise, only two contact items are shown above.

Viewing the Spring Boot RESTful service logs presents the following information:

2021-06-29 09:03:46.945  INFO 27343 --- [nio-9999-exec-1] c.g.j.s.interceptors.LoggingInterceptor  : Request Started: method=GET path=/contacts
2021-06-29 09:03:47.667 DEBUG 27343 --- [nio-9999-exec-1] c.g.j.s.utils.BearerTokenUtilities       : salesforceLoginResult=SalesforceLoginResult(data_goes_here)
2021-06-29 09:03:48.041 DEBUG 27343 --- [nio-9999-exec-1] c.g.j.s.services.ContactService          : contacts=[contact_data_goes_here]
2021-06-29 09:03:48.079  INFO 27343 --- [nio-9999-exec-1] c.g.j.s.interceptors.LoggingInterceptor  : Request Completed: method=GET path=/contacts timeTaken=1134 (milliseconds)

The initial request took a little over one second to process. To validate that the abstract caching works correctly, I executed the same cURL command again. 

This second time, the results were much faster, and there were no calls required to the BearerTokenUtilities or the ContactService:

2021-06-29 09:10:04.928  INFO 27343 --- [nio-9999-exec-5] c.g.j.s.interceptors.LoggingInterceptor  : Request Started: method=GET path=/contacts
2021-06-29 09:10:04.930  INFO 27343 --- [nio-9999-exec-5] c.g.j.s.interceptors.LoggingInterceptor  : Request Completed: method=GET path=/contacts timeTaken=2 (milliseconds)

Getting a Single Salesforce Contact

To retrieve information regarding a single contact from Salesforce, the ID of the contact is required and a similar version of the original cURL command is executed:

curl --location --request GET 'http://localhost:9999/contacts/0035e000008eXq0AAE'

Upon submission, we receive an HTTP 200 (OK) response, with the single Contact DTO created in Spring Boot provided:

{
    "attributes": {
        "type": "Contact",
        "url": "/services/data/v52.0/sobjects/Contact/0035e000008eXq0AAE"
    },
    "id": "0035e000008eXq0AAE",
    "Name": "Rose Gonzalez",
    "Title": "SVP, Procurement",
    "Department": "Procurement"
}

This URI is helpful when only a single contact is required.

Updating Contact Information

In our use case, we only need to make minor changes to the contact data in Salesforce, such as changing a contact’s title. To update the contact’s title attribute, we would use the following cURL command:

curl --location --request PATCH 'http://localhost:9999/contacts/0035e000008eXq0AAE' \
--header 'Content-Type: application/json' \
--data-raw '{
    "Title": "SVP, Procurement 2"
}'

In this example, we are merely changing the title from “SVP, Procurement” to “SVP, Procurement 2.”  

Upon submitting this request, we receive an HTTP 202 (Accepted) response, along with the updated Contact DTO:

{
    "attributes": {
        "type": "Contact",
        "url": "/services/data/v52.0/sobjects/Contact/0035e000008eXq0AAE"
    },
    "id": "0035e000008eXq0AAE",
    "Name": "Rose Gonzalez",
    "Title": "SVP, Procurement 2",
    "Department": "Procurement"
}

Since the cache is fully evicted, if we run the original cURL to retrieve all the contacts in Salesforce, the following logs will appear:

2021-06-29 09:18:39.853  INFO 27343 --- [nio-9999-exec-5] c.g.j.s.interceptors.LoggingInterceptor  : Request Started: method=GET path=/contacts
2021-06-29 09:18:40.314 DEBUG 27343 --- [nio-9999-exec-5] c.g.j.s.utils.BearerTokenUtilities       : salesforceLoginResult=SalesforceLoginResult(data_goes_here)
2021-06-29 09:18:40.416 DEBUG 27343 --- [nio-9999-exec-5] c.g.j.s.services.ContactService          : contacts=[contact_data_goes_here]
2021-06-29 09:18:40.418  INFO 27343 --- [nio-9999-exec-5] c.g.j.s.interceptors.LoggingInterceptor  : Request Completed: method=GET path=/contacts timeTaken=565 (milliseconds)

Because of the calls to the BearerTokenUtilities and the ContactService, we’ve validated that the cache was evicted.

Conclusion (Looking Ahead)

Starting in 2021, I have been trying to live the following mission statement, which I feel can apply to any IT professional:

“Focus your time on delivering features/functionality which extends the value of your intellectual property. Leverage frameworks, products, and services for everything else.”

– J. Vester

Like my feature team discovered in 2015, Salesforce provides an amazing CRM solution—which met the needs of our project at the time and still meets that corporation’s needs today. However, there are times when using the entire Salesforce ecosystem is not preferable.

In this article, we created a Spring Boot service example to build upon the well-established Salesforce RESTful API, validating functionality using simple cURL commands. Where possible, the results were cached in order to minimize the use of API calls to the underlying Salesforce API.

If you are interested in the source code used for the Spring Boot service, simply navigate to the following repository on GitLab:


Future articles will provide examples of how to leverage this Spring Boot service for the following JavaScript-based clients:

These articles will provide high-level examples of how to integrate Salesforce into your current application—without users ever having to log in to Salesforce.

Have a really great day!

Source link


Getting Started With WebdriverIO Typescript Jasmine

What is WebdriverIO?

WebdriverIO is a progressive automation framework built to automate modern web and mobile applications. It simplifies interaction with your app and provides a set of plugins that help you create a scalable, robust, and flake-resistant test suite. WebdriverIO is built on top of Selenium's Node.js bindings.

WebdriverIO is an open-source framework managed by the OpenJS Foundation and follows the W3C WebDriver architectural standards.

Compared with plain Selenium, WebdriverIO gives you a ready-made framework with easy setup and configuration.

What Are the Key Features of Webdriverio?

Let’s discuss the advantages of using WebdriverIO in Test Automation.

Extendable: Adding helper functions, or more complicated sets and combinations of existing commands, is simple and really useful.

Cross-Browser Testing: WebdriverIO supports all major browsers, including Safari, Chrome, Edge, and Firefox.

Cross-Platform Support: WebdriverIO is a Node.js-based automation framework, so it supports all major operating systems, including Mac and Windows. You can run your tests on different platforms and ensure your application works as expected.

Native Mobile Application Support: WebdriverIO supports not only web applications but also mobile applications, so you can test mobile apps with the appropriate configuration.

Easy Setup: Setting up WebdriverIO is super easy; it’s just some package installation and then configuring the config file. We are going to show that in this tutorial.

Test Framework Support: WebdriverIO supports major test frameworks like Jasmine, Mocha, and Cucumber. This helps you write your automation framework seamlessly.

Community Support: Community support for WebdriverIO is great; there are tons of articles and knowledge bases available on the internet, so you can learn and enhance your own framework.

This Detailed tutorial explains the following:

  • How to set up WebdriverIO Page object model project from Scratch using Typescript and Jasmine.
  • How to Install WebdriverIO aka WDIO CLI in your project.
  • How to Configure Typescript in WebdriverIO.
  • How to Configure wdio.conf.ts in WebdriverIO Project.
  • How to Create first Page Object File in WebdriverIO and Typescript.
  • How to Create the First Test in WebdriverIO.
  • How to Execute WebdriverIO (WDIO).
  • How to View the Results in WebdriverIO CLI.

How to Set Up Webdriverio Typescript Framework From Scratch

In this detailed tutorial, we are going to explain how to set up the WebdriverIO (WDIO) Test Automation Project using Typescript and Jasmine with Page Object Model in an easy way.


Step 1: Creating a New Folder (Ex: WebdriverIOTypescript)

Navigate to any of your drives. Create a fresh new folder (Ex: WebdriverIOTypescript).

Step 2: Creating a Package.json for Your WebdriverIO Project

Create a package.json file.

In your newly created Project Folder, open the command prompt and type:

npm init

The above command asks you a set of pre-defined questions. Just hit [Enter] through them unless you wish to specify values. Once everything is done, this will create a package.json file in your project folder.

Creation screenshot.

Step 3: Opening the Project Folder in Visual Studio Code IDE

In Visual Studio Code, click File > Open > Choose Newly Created Project Folder > Click on Select Folder.

Opening the project folder screenshot.

Step 4: Installing the WebdriverIO Command Line Interface, Also Known as WDIO CLI

We have opened our project folder in Visual Studio Code IDE, so let’s start with the installation of WebdriverIO. In order to install WebdriverIO, we need to use the npm package @wdio/cli.

In your Visual Studio Code, Open Terminal.

Navigate to the Terminal menu > Click on New Terminal.

Enter the below command to install WebdriverIO on your machine:

npm install --save-dev @wdio/cli

Wait for the Installation to finish!

Finishing the installation screenshot.

Step 5: Setting Up the WebdriverIO for Your End to End Test Automation Project

Once WebdriverIO is installed, we need to do the first-time setup of WDIO using the wdio config command.

From your Visual Studio Code Terminal, enter the below command:

npx wdio config

The command line prompts you with a set of options. Answer them as below:

  • Where is your automation backend located? On my local machine.
  • Which framework do you want to use? Jasmine.
  • Do you want to use a compiler? TypeScript (https://www.typescriptlang.org/).
  • Where are your test specs located? ./test/specs/**/*.ts
  • Do you want WebdriverIO to autogenerate some test files? No.
  • Which reporter do you want to use? spec
  • Do you want to add a service to your test setup? Chrome driver.
  • What is the base URL? http://localhost

Step 5 screenshot.

As the options above show, we are installing WebdriverIO for a test automation project that uses TypeScript and Jasmine.

Once the above steps are complete, it will create a default configuration for you.

Step 6: Creating the Directory Structure for the WebdriverIO Typescript Project

We are creating a WebdriverIO TypeScript project with a page object model, so we need to follow the folder structure given below:

  • Create a folder named test in your root project folder.
  • Under the test folder, create two folders: pages and specs.

The folder structure should look like the below:


Step 6 screenshot.

Step 7: Installing the Typescript and ts-node npm Packages for the WebDriverIO Project

In your Visual Studio Code Terminal, type the below command to install Typescript and ts-node:

npm install typescript ts-node --save-dev

Note: These packages should have been installed already if you chose the TypeScript option during WebdriverIO setup. Just ensure they are installed correctly.

Step 8: Creating the tsconfig.json File for the WebdriverIO Project

Open Visual Studio Code Terminal, and type the below command:

npx tsc --init

The above will create a tsconfig.json file in your Project Root Directory.

Creating a file in the directory screenshot.

Step 9: Configuring the tsconfig.json File in the WebdriverIO Project

Remove the default generated code, and replace it with the below code in tsconfig.json:

{
  "compilerOptions": {
    "target": "es2019",
    "types": [
      "node",
      "webdriverio/async",
      "@wdio/jasmine-framework",
      "expect-webdriverio"
    ]
  },
  "include": [
    "./test/**/*.ts"
  ]
}

Step 10: Configuring the wdio.conf.ts File

The wdio.conf.ts file already has a lot of self-generated code, so we might not need all of those. You can copy and paste the below code:

export const config: WebdriverIO.Config = {
    specs: [
        './test/specs/**/*.ts'
    ],
    capabilities: [{
        browserName: 'chrome',
        maxInstances: 1,
    }],
    framework: 'jasmine',
    jasmineOpts: {
        defaultTimeoutInterval: 120000,
    },
    autoCompileOpts: {
        autoCompile: true,
        // for all available options
        tsNodeOpts: {
            transpileOnly: true,
            project: 'tsconfig.json'
        }
    }
}
Step 11: Writing Your First Page-Object File for the WebdriverIO Typescript Project

Let us try to create a Simple Google Search Test case.

Navigate to the test folder, open the pages folder, and create a new file called example.page.ts.

So, the location of example.page.ts is test/pages/example.page.ts.

Add the below code:

class ExampleClass {
    get searchInput() { return $("input[name='q']") }
    get searchButton() { return $('input[name="btnK"]') }
    get firstResult() { return $('(//h3)[1]') }
}
export default new ExampleClass()

Adding the code screenshot.

Step 12: Writing Your First Spec File for the WebdriverIO Typescript Project

Navigate to test/specs/, and create a new file called example.spec.ts.

Copy and paste the below code into example.spec.ts:

import ExampleClass from "../pages/example.page"

describe('Google Search', () => {
  it('should search for specified text', async () => {
    await browser.url('https://www.google.com');
    await (await ExampleClass.searchInput).setValue("Webdriver IO Search Example");
    await browser.keys('Enter');
    await expect(await (await ExampleClass.firstResult).getText()).toContain("WebdriverIO");
  });
});

Step 12, pasting the code screenshot.

Step 13: Executing the WDIO Typescript Tests

Once you have completed all the above steps, run your tests with the below command:

npx wdio run ./wdio.conf.ts

Step 14: WDIO Test Results in Console

Tests start executing, and you will see the results in the command line:

Results in the command line screenshot.

Issues You Might Face When Setting Up WebdriverIO

Why is the Chrome Browser not launching in WebdriverIO?

This issue is mostly related to your configuration file, i.e., wdio.conf.ts. Check your settings carefully.

Why am I getting “.setText() / .click() is not a function” in WebdriverIO?

You usually get this in async mode; prefix your web element calls with await and the issue will be resolved.

Why is my WebdriverIO test not executing?

Sometimes there are a lot of Selenium Webdriver instances running, which might cause your tests to behave weirdly when you execute. You might have to restart the system or kill all the instances of the web drivers.

Frequently Asked Questions on WebdriverIO

  • Is WebdriverIO selenium Based?

        Yes, WebdriverIO uses Selenium NodeJS Bindings Internally.

  • Does WebdriverIO Support Native Mobile Apps?

          Yes, WDIO Supports Native Mobile Apps.

  • What are the supported Selectors in WebdriverIO?

          WebdriverIO supports all major selectors including CSS selectors and Xpath.

  • What are the WebdriverIO Supported Browsers?

          WebdriverIO supports all major browsers:

  • Chrome – ChromeDriver
  • Firefox – Geckodriver
  • Microsoft Edge – Edge Driver
  • Internet Explorer – InternetExplorerDriver
  • Safari – SafariDriver
  • Can I Configure Webdriver Project to CI/CD tools like Jenkins, Azure DevOps, etc?

Yes, WebdriverIO Project can be configured to CI/CD tools.

  • What are the frameworks or assertion libraries WebdriverIO supports?

 WebdriverIO currently supports the Mocha, Jasmine, and Cucumber test frameworks.

  • Does WebdriverIO support Run Tests in Parallel?

Yes, WebdriverIO Supports Parallel Test Runs. You just need to configure your wdio.conf file for that.

  • How do I run a single spec or test file in WebdriverIO?

You can use the below command to run your single spec or tests in webdriverIO:

npx wdio run ./wdio.conf.js --spec test/specs/example.e2e.js

  • How do I take screenshots on WebdriverIO?

WebdriverIO provides screenshot capability. View this detailed article on how to take screenshots in webdriverIO.

Encourage me to write more articles by buying me a coffee.

If you are looking for any help, support, or guidance, contact me.


Hashnode: A Blogging Platform for Developers

Hashnode: A Blogging Platform for Developers

Hashnode is a free developer blogging platform. Say you’ve just finished an ambitious project and want to write about 10 important lessons you’ve learned as a developer during it. You should definitely blog it—I love that kind of blog post, myself. Making a jump into the technical debt of operating your own blog isn’t a small choice, but it’s important to own your own content. With Hashnode, the decision gets a lot easier. You can blog under a site you entirely own, and at the same time, reap the benefits of hosted software tailor-made for developer blogging and be part of a social center around developer writing.

Here are some things, technologically, I see and like:

  • Write in Markdown. I’m not sure I’ve ever met a developer who didn’t prefer writing in Markdown.
  • It’s not “own your content” merely in the sense that you could theoretically export it: your content lives in your own GitHub repo. You wanna migrate it later? Go for it.
  • Your site, whether at your own custom domain or at a free subdomain, is hosted, CDN-backed, and SSL secured, while being customizable to your own style.
  • Developer specific features are there, like syntax highlighting for your code.
  • You get to be part of on-site community as well as a behind-the-scenes Discord community.
  • Your blog is highly optimized for performance, accessibility, and SEO. You’ll be hitting 100s on Lighthouse reports, which is no small feat.

Your future biggest fans are there waiting for you ;).

Example of my personalized Hashnode newsletter with the best stuff from my feed.

The team over there isn’t oblivious to the other hosted blogging platforms out there. We’ve all seen programming blog posts on Medium, right? They tend to be one-offs in my experience. Hashnode is a Medium-alternative for developers. Medium just doesn’t cater particularly well to the developer audience. Plus you never know when your content will end up being behind a random paywall, a mega turn-off to fellow developers just trying to learn something. No ads or paywalls on Hashnode, ever.

The smart move, I’d say, is buying a domain name to represent yourself right away. I think that’s a super valuable stepping stone in all developer journeys. Then hook it up to Hashnode. Then whatever you do from that day forward, you are building domain equity there. You’re never going to regret that. That domain strength belongs entirely to you forever. Not to mention Medium wants $50/year to map a domain and DEV doesn’t let you do it at all.

But building your own site can be a lonely experience at first. The internet is a big place and you’ll be a small fish at first. By starting off at Hashnode, it’s like having a cheat code for being a much bigger fish right on day one.

DEV is out there too being a developer writing hub, but they don’t allow you to host your own site and build your own domain equity, as Hashnode does, or customize it to your liking as deeply.

Hashnode is built by developers, for developers, for real. Blogging for devs! The team there is very interested and receptive to your feature requests—so hit them up!

One more twist here that you might just love.

Hashnode Sponsors is a new way your fans can help monetize your blog directly, and Hashnode doesn’t take a cut of it at all.


r/graphic_design - Help Naming a Modern Donut Shop


Hi there! I’m a freelance designer, and I’m currently re-branding myself and also doing an overhaul of my portfolio in an effort to attract the type of clients I want to work with. While doing that I’m creating some fictional brands. The first being a modern donut shop.

Attached to this is a very early moodboard of what I’m thinking. I really want something that is modern and caters to a young & trendy crowd. I definitely want it to have a little edge as well. The thing throwing me off the most is coming up with a name that doesn’t feel dated or just overall cheesy. I’ve done some research online, and a lot of people suggest boring and cheesy names like “Donut Empire” or “Daniels Donuts”… That just feels off from what I’m trying to do. I was hoping for a one- or two-word name so I could achieve the style of logo found on the moodboard. I’m fine with either spelling of donut (donut or doughnut).

I was playing around with the idea of “Dough & Co.” and that is probably the first name I slightly like. So that should hopefully help show where my head is at. Thank you soooo much in advance to anyone who is able to help brainstorm with me! 🙂



React 18 Alpha With Snowpack and Vercel


If You Prefer Watching a Video…

Be sure to Subscribe to the Official Code Angle Youtube Channel for more videos.

Table of Contents

  1. Introduction
  2. Installation and Set up of React Using Snowpack
  3. Folder Structure
  4. Code Overview
  5. Running the Application
  6. Deployment Process Using Vercel
  7. Conclusion


Earlier this month the React Team released some updates concerning the release of React 18. These updates include the following:

  • Work has begun on the React 18 release, which will be the next major version.
  • A working group has been created to prepare the community for the gradual adoption of new features.
  • An Alpha version has already been published for library authors to try and provide valuable feedback.

This tutorial aims to set up the React 18 Alpha version using SnowPack, a lightning-fast front-end build tool, designed for the modern web. Then we deploy on Vercel.

Installation and Setup of React 18 Alpha Using Snowpack

First, you need to have Node.js installed, once that is done then you can now install Snowpack. You can use the command below to install Snowpack.

npm install snowpack

Once installed, you can head to a directory where you want to put your new project.

Now run the following command in your terminal to create a new directory called react-snowpack. This will automatically generate a minimal boilerplate template.

npx create-snowpack-app react-snowpack --template @snowpack/app-template-minimal

You can now head to the new directory with the following command:

cd react-snowpack

Once inside this directory, we can finally install the React 18 Alpha version by running the command below.

npm i react@alpha react-dom@alpha

Once this is done, you can check your package.json file to confirm React 18 Alpha has been installed. It should look something like what we have below.

"dependencies": {
    "react": "^18.0.0-alpha-cb8afda18-20210708",
    "react-dom": "^18.0.0-alpha-cb8afda18-20210708"
}

Folder Restructure

React makes use of a templating language called JSX. JSX stands for JavaScript XML. It is an inline markup that looks like HTML that gets transformed to JavaScript at runtime.
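As a rough illustration of that transformation, JSX elements compile down to ordinary function calls. React uses React.createElement; the stand-in below is simplified to stay self-contained:

```javascript
// Simplified stand-in for React.createElement, showing the shape a JSX
// transform produces: <p id="greeting">Hello</p> becomes a function call.
function createElement(type, props, ...children) {
  return { type, props: props || {}, children };
}

const el = createElement("p", { id: "greeting" }, "Hello");
console.log(el.type);     // "p"
console.log(el.children); // [ 'Hello' ]
```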

The first step of the folder restructure is to rename the index.js file with a .jsx extension, like so: index.jsx. Doing this lets Snowpack know that we are running a React project.

In the end, we should have the folder structure below.

> public
  > index.css
  > index.html
> src
  > App.jsx
  > index.jsx

Code Overview

We are going to have code modification in four files (index.html, App.jsx, index.jsx, and snowpack.config.mjs) before we start up the app and deploy it on Vercel.


<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8" />
  <meta name="viewport" content="width=device-width, initial-scale=1" />
  <meta name="description" content="Starter Snowpack App" />
  <link rel="stylesheet" type="text/css" href="/index.css" />
  <title>Starter Snowpack App</title>
</head>
<body>
  <div id="root"></div>
  <script type="module" src="/dist/index.js"></script>
</body>
</html>


In the index.html code, three things have to be noted:

  • The id is called root which we will refer to in the index.jsx file.
  • In the script tag, we have a type called module to enable snowpack to know we will be making use of ES6 syntax.
  • Also in the script tag, we have an src attribute to signify the path of our deployment directory which will be configured in the snowpack.config.mjs file.


import React from "react";

function App() {
  return (
    <p>React 18 Alpha Setup Deployed on Vercel with SnowPack</p>
  );
}

export default App;

Above in the app.jsx file, we generate a simple React boilerplate template using a functional component.


import React from "react";
import ReactDOM from "react-dom";
import App from "./App";

const rootElement = document.getElementById("root");
const root = ReactDOM.createRoot(rootElement);
root.render(<App />);

In the index.jsx file, we did three things to enable us to start up the app:

  • First, we import React, ReactDOM, and the App.jsx file.
  • Then we created a variable to get the id in the index.html file.
  • Finally, we made use of the new createRoot API in React 18 to render the application.


/** @type {import("snowpack").SnowpackUserConfig } */
export default {
  mount: {
    /* ... */
    public: '/',
    src: '/dist',
  },
  plugins: [
    /* ... */
  ],
  routes: [
    /* Enable an SPA Fallback in development: */
    // {"match": "routes", "src": ".*", "dest": "/index.html"},
  ],
  optimize: {
    /* Example: Bundle your final build: */
    // "bundle": true,
  },
  packageOptions: {
    /* ... */
  },
  devOptions: {
    /* ... */
  },
  buildOptions: {
    /* ... */
  },
};

Every Snowpack app makes use of the snowpack.config.mjs file for any configurations like the deployment process. In this project, we will only edit the mount object by adding the public and src keys.

These serve as a pointer to the path where our deployment folder will be built when we run the build command.

Running the Application

Now with all our files saved, we can head back to our terminal and run the start command npm run start, which will produce the page below in the browser.

Application Running Screen in Browser

Now our React 18 alpha app is successfully up and running.

Deployment Process Using Vercel

“Vercel enables developers to host websites and web services that deploy instantly and scale automatically all without any configuration”. -Vercel Documentation

The first step towards deployment is running the command below at the root of our project.

npm run build

This will generate a build directory. Inside the build directory is a dist folder that contains the code we will push to Vercel.

Next up we do the following:

1. Install Vercel 

To do this we run the command npm i -g vercel

2. Log Into Vercel

After installing Vercel globally on your machine, type vercel in the terminal. This will prompt you to log into your account if you are not already logged in.

3. Project Setup and Deployment

Project Setup and Deployment UI

To summarize the prompt question in the image above, the following questions will be asked:

Vercel will now build the application, installing all dependencies in the process. When the installation is done, an inspect link will be available in the terminal. With this link, we can access the Vercel dashboard to see our deployed app.

Deployed Application in Vercel Dashboard 

4. Open the Deployed Project

You can now visit the newly deployed project by clicking on the “visit” button on your dashboard shown in the image above.

Opening Deployed Project


You can find the deployed code in my GitHub account.

Source link


Popular JavaScript TreeGrid Components for Productive Data M…

With the rapid advancement of information technologies, it is hard to imagine a business web application without the ability to present data in a tabular format. Every day business people are exposed to large amounts of information that may also require hierarchical division. Without using special tools such as a TreeGrid (also known as TreeTable), it can be very time-consuming to analyze big data sets and make the right decisions.

TreeGrid is a user interface element that helps to show complex data in rows and columns with expandable/collapsible nodes and enables users to interact with it. It combines qualities of standard DataGrid and Tree components. The main peculiarity of TreeGrid is that it allows you to group arrays of data hierarchically, thereby contributing to more convenient work with the given information. This functionality is highly demanded in financial and analytic systems, reporting tools, CRMs, etc.
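The core idea behind every TreeGrid can be illustrated with a small sketch: hierarchical rows are flattened into the list of currently visible rows, honoring each node's expanded/collapsed state. The data shape below is hypothetical, chosen only for illustration:

```javascript
// Hypothetical hierarchical row data for a tree grid.
const rows = [
  { name: "Assets", expanded: true, children: [
    { name: "Cash", value: 1200 },
    { name: "Equipment", expanded: false, children: [
      { name: "Servers", value: 800 },
    ]},
  ]},
];

// Flatten the tree into the rows a grid would actually render,
// skipping the children of any collapsed node.
function visibleRows(nodes, depth = 0, out = []) {
  for (const node of nodes) {
    out.push({ depth, name: node.name });
    if (node.children && node.expanded) {
      visibleRows(node.children, depth + 1, out);
    }
  }
  return out;
}

// "Servers" stays hidden because "Equipment" is collapsed.
console.log(visibleRows(rows).map(r => "  ".repeat(r.depth) + r.name));
```

Expanding or collapsing a node simply toggles its flag and recomputes this visible list, which is why tree grids scale well even with deep hierarchies.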

If you are involved in creating interfaces for a business-oriented web solution, it will be helpful for you to read this article. Here you will become familiar with a collection of commercial JavaScript TreeGrid components that can be integrated into apps based on popular JavaScript frameworks and provide end-users with a variety of useful features for managing data.


DHTMLX TreeGrid

DHTMLX TreeGrid is an easy-to-use JavaScript UI control designed for displaying big data in hierarchical tables of any complexity without performance limitations. It comes with a wide range of configuration options that serve to fine-tune all table elements to your needs. For instance, you can specify the size of your tree grid and adjust the height and width of columns to their content automatically, “freeze” columns, insert multiline content and custom HTML items in cells, and much more.

TreeTable Screenshot

This JavaScript TreeTable also allows working with numeric values of various formats and automatically performing calculations (min, max, average) in the table’s footer/header. If it is necessary to clarify any information in the tree table, you can apply custom tooltips. When talking about interactivity, end-users can manipulate data by selecting rows, resizing columns, moving rows and columns by drag-and-drop, sorting and filtering data, and via inline editing. It is also possible to work with table data offline after exporting it to Excel. It renders equally well on desktops and touch devices. The grid look is easily modified via CSS.

An easy-to-follow initialization process and TypeScript support help make DHTMLX TreeGrid a part of your app much faster. This control is delivered in a bundle with other UI widgets in the DHTMLX Suite package or as a stand-alone component.

Trial version: DHTMLX TreeGrid

Price: from $509

Webix TreeTable

Webix TreeTable is a JavaScript widget with a responsive design that helps to arrange data in hierarchical tree-like structures. In fact, this tool encapsulates the properties and methods of two other Webix data management widgets (DataTable and Tree) and enhances their functional capabilities. It even enables you to expand/collapse entire tables, making it possible to compactly place many tables on a single page without resorting to pagination.

Webix TreeTable works seamlessly with large amounts of multidimensional data thanks to dynamic loading. Multiple filtering and sorting options let users quickly find the required pieces of information in the table and, if necessary, edit them on the fly. Various selection and copy-paste modes make it much easier to borrow data from the table. The widget also allows utilizing math formulas and charts (Bar, Pie, Spline, etc.) in the grid. The list of available data export formats includes PDF, PNG, and Excel. The documentation page provides more interesting details on the practical usage of this JS widget and its features.
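For reference, initializing a Webix TreeTable typically looks like the sketch below. The column set and data are hypothetical, and the exact options should be checked against the Webix documentation:

```javascript
// A minimal Webix TreeTable sketch; column ids and data are hypothetical.
webix.ui({
  view: "treetable",
  container: "grid_here", // id of a host <div> on the page
  columns: [
    { id: "id", header: "", width: 50 },
    // {common.treetable()} renders the expand/collapse tree control
    { id: "title", header: "Title", template: "{common.treetable()} #title#", fill: true },
    { id: "year", header: "Year", width: 80 }
  ],
  data: [
    { id: "1", title: "Reports", open: true, data: [
      { id: "1.1", title: "Q1 Revenue", year: 2021 }
    ]}
  ]
});
```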

Trial version: Webix

Price: from $849 (for a full package of UI widgets)

EJS TreeGrid

EJS TreeGrid is a DHTML component that provides different options for organizing data on an HTML page, including table, bar chart, grid, tree view, and, as its name suggests, a tree grid. Written in pure JavaScript, this component enables you to configure a tree table in accordance with your requirements and equip it with core functions typical for this kind of UI tool (sorting, filtering, grouping, searching). A tree table built with EJS TreeGrid can include any number of nesting levels in cells. The component supports various cell types, paging types, editing masks, and value formats.

EJS TreeGrid Screenshot

EJS TreeGrid also gives you a chance to add extra tools to the grid, such as a Gantt chart, a calendar, or a rich text editor. You can work with external objects like Adobe Flash or custom JS objects in the tree table. It is also possible to set animations for various TreeGrid actions. The tree grid interface can be adapted to various languages using the localization feature (including the RTL option for Middle Eastern languages). The appearance of the tree grid is fully customizable via CSS styles. The table content can be saved to Excel or PDF format and printed if needed. If you want to learn more about EJS TreeGrid, check out the documentation section.

Trial version: EJS TreeGrid

Price: from $600


jqxTreeGrid

jqxTreeGrid is a part of the jQWidgets library used to lay out data in a tree-like setup. This lightweight jQuery widget offers a range of core features with flexible configuration for manipulating data the way you need. Large volumes of data can be broken down into smaller parts for more convenient navigation using the paging feature. Load on Demand (also known as virtual mode) is one more function that helps to ensure optimal performance with big data in tree grids: with this feature on board, child rows of the tree are generated and initialized only when their parent rows are opened. Other noteworthy features of this widget are pinned columns, aggregates, custom editors, cell formatting, and custom cell rendering.

jqxTreeGrid Screenshot

jqxTreeGrid supports multiple data binding and exporting options. You can make the tree grid interface understandable to users from different countries by enabling various locales. Utilizing customizable default themes, you can create a unique design for your tree grid. If you want to test all capabilities of the jqxTreeGrid widget in practice, there is a special jsEditor tool.

Trial version: jQWidgets

Price: from $199 (for a full package of UI widgets)

Ignite UI Tree Grid

The Ignite UI library provides a number of UI components for the faster accomplishment of various web development goals, including hierarchical presentation of data. Two Ignite UI tools suit this purpose, namely Hierarchical Grid and Tree Grid. Both grids are similar in terms of functionality, as they support the main features commonly expected from a grid component, such as sorting, filtering, and in-cell editing. But the Tree Grid is the preferable option when building a table where the parent and child nodes have the same structure, or when you want to offer end-users a simpler experience.

Ignite UI TreeGrid Screenshot

Like in the case with jqxTreeGrid, good performance with large data sets is ensured by implementing pagination and Load on Demand features. Infragistics also equips developers with two online tools to facilitate the work with Ignite UI grid components. The HTML5 Page Designer tool helps to try Ignite UI widgets in action with simple drag-and-drop manipulations, while the Theme Generator allows tuning the look and feel of the tree grid to your liking. Online documentation will give you a clear idea of how to use the potential of this JS component to the maximum.

Trial version: Ignite UI

Price: from $849 (for a full package of UI widgets)

Syncfusion Tree Grid 

Syncfusion Tree Grid is a JavaScript control dedicated to presenting self-referential hierarchical data in the form of tables. It is one of the numerous ready-made UI components included in the Essential JS 2 library. This tool comes with a set of useful features that are crucial for effective data management such as sorting, filtering, editing, aggregations, stacked headers, etc. Thanks to the mobile-optimized design, Syncfusion-based tables are displayed well on devices with different resolutions and screen orientation. Using special templates, you can create custom grid elements (headers, toolbars, etc.). As for the tree grid appearance, Syncfusion provides not only a package of default themes but also the Theme Studio application for defining your own style. 

Syncfusion TreeGrid Screenshot

Data can be loaded into the table from local and remote sources (JSON, RESTful, OData, and WCF services). The control relies on several performance-related techniques: you can apply row virtualization and infinite scrolling to improve the user experience with large bundles of data, and it is also possible to enable the immutable mode that boosts the tree grid's re-rendering performance. The control also supports localization and internationalization libraries that help to make the text content and date/number objects in the tree table understandable to users from different countries. If users require a hard copy of the information presented in the grid, it can be exported in PDF, Excel, or CSV format. More details on Syncfusion Tree Grid and how to get started with it in real projects can be found on the documentation page.

Trial version: Syncfusion

Price: from $995 (for a full package of UI widgets)

Final words

Summarizing the above, we can say that a tree table is one of the most important and complex UI components and can be complemented with numerous features. Therefore, building this functionality into a web application from scratch is rarely the best decision, especially if you don’t have much time. In my opinion, it is more effective to use one of the reviewed JavaScript components. Which one to choose? Follow three simple steps: select several products that fit your budget, test them using trial versions, and opt for the most suitable option. If you have any other JavaScript TreeGrid tools in mind, feel free to share your suggestions in the comment section.


r/graphic_design - Can I get some insight on how I might go about recreating a design a friend of mine had made a while ago?


A good friend of mine knew this girl that would make her some designs here and there but unfortunately she stopped doing it, and doesn’t have any of her old work. My friend wants one of the designs so she can put it on some shirts but she doesn’t know how to make it herself and I don’t either.
I hope this is the right place to ask or post. Please guide me to somewhere I can ask for advice like this if not.


This is the only image of the design. The color isn’t part of it, just the tree, font, and horse. The font says Walnut Tree and Farm, with a walnut tree and a 5-gaited horse.


Top 10 Web Developer Communities Developers Should Join in 2...


If you don’t want to stop learning, keep improving your knowledge every day as a web developer. Here are the top 10 web developer communities that every developer should join.

  1. StackOverflow
  2. GitHub
  3. Hackernoon
  4. Hashnode
  5. HackerNews
  6. FreeCodeCamp
  7. Dev
  8. CodeProject
  9. IndieHackers
  10. Medium

Web development technologies change every day, so the software development field keeps getting more challenging. You need sources that can serve as a support system, where you can share knowledge, ask questions, discuss new ideas, review code, and keep learning.

Communities are collaborative platforms for the tech-savvy, the expert, and the beginner alike, where knowledge, individual experience, failures, and skills are shared to help all members.

“Learn from the mistakes of others. You can’t live long enough to make them all yourself.” ― Eleanor Roosevelt

I have curated a list of the top 10 web development communities that every individual developer or web development company should join.

Stack Overflow is a public platform for those who want to learn code, share knowledge, and build their careers. It’s a forum site where you can ask questions and give answers on tons of web development and computer programming topics.

You can ask many questions every day. Most of the time you’ll get the answer from already asked questions. You can share bugs, errors, issues, or blocks of code and answer a question using your knowledge of programming.

  • Monthly visitors – 100+ Million
  • Question asked to-date – 21+ Million
  • Times a developer got help – 50.6+ Billion

GitHub is a collaborative communication forum where 65+ million developers work together, share thoughts, ask questions, and help build projects. You can follow discussions you are interested in and share your project with other members to discuss.

GitHub is one of the most popular and authoritative developer communities globally.

  • Organizations using GitHub – 3+ million
  • Repositories – 200+ million
  • 72% of Fortune 50 companies use GitHub

Hackernoon is a website for technologists where you can read, write, and publish technical articles internationally. It’s a community of 15,000+ tech writers and over 3,000,000 readers and enthusiasts.

Companies such as Adobe, Apple, Alphabet, Google, IBM, Intel, Tesla, and Samsung share articles and expertise here. You can write and publish articles on topics such as technology, software, and decentralization, with hundreds of different subtopics.

Hashnode is a global community of programmers. You can share your ongoing projects and stories, ask questions, make suggestions, and answer other members’ questions. It’s a free platform that helps you stay connected with the global developer community.

You can publish technical blogs or real-life development problems anonymously here. These blogs are shared with all members of the community, so you can get exposure. Users can follow authors and tags such as Java, Python, React, JavaScript, CSS, etc.

HackerNews is a social news website for programmers around the world. Developers can share links to content, comment, and ask questions related to programming. It’s one of the best websites for learning and growing as a developer, with hundreds of computer science articles shared every day.

You can easily share the article by creating an account and submitting it. Readers can upvote and comment on your articles.

FreeCodeCamp is a 100% free nonprofit platform to learn and practice coding. You can learn coding by working on small projects and will get a free certificate from FreeCodeCamp. 40,000+ graduates have gotten jobs in companies including Apple, Amazon, Microsoft, Spotify, etc.

You can use the forum website of FreeCodeCamp where millions of programmers from different countries share thoughts, ideas, problems, and errors to enhance their knowledge.

Thousands of YouTube videos, articles, interactive coding lessons, study groups, and forums help people learn easily.

Dev is an open-source community of 632,417 software developers where coders share, learn, stay up-to-date, help each other, and grow their careers. You can use resources like podcasts, articles, FAQs, videos, news, real-world examples, and the knowledge of others to enhance your coding skills.

This platform covers almost every topic of computer programming such as Angular, React, JavaScript, Python, and CSS. It’s an exceptional website for beginners to learn to code as well as career advice.

CodeProject is a constantly growing community of web developers and programmers with 14,912,384 active members. You can learn by searching articles on topics like Web Development, Artificial Intelligence, DevOps, Java, .Net, C++, Database, etc., and you can share your own knowledge as well.

You can ask questions, write answers, and discuss more than 20 computer programming topics (C#, Web Development, AI, C, C++, Java, JavaScript, DevOps, ASP.Net, IoT, Linux programming, iOS, DataBase) with millions of skilled web developers around the world.

IndieHackers is an emerging online community of world-class web developers. It’s a place where founders of successful startups share their stories, revenue, and experience with other members.

You can learn from the success stories of aspiring entrepreneurs and get connected with the thousands of other founders who are growing their companies.

The website provides a forum where every member can share experience and knowledge, explore ideas, and offer support. To date, 20,000 members have registered with IndieHackers.

Medium is the best medium for improving your knowledge of computer programming and web development. Almost every tech-savvy professional publishes highly polished content here, and you can learn from those thousands of tech articles. You can comment, upvote, subscribe, and connect with authors directly.


The internet has connected the whole world; no one is too far away to share knowledge. These communities support the progress of web development and computer programming, and you can grow your programming knowledge with the help of such communities on the internet.

These aren’t just groups; they are families of people who appreciate each other’s work, help each other grow, and make the web a better place to learn.
