Building a RESTful Service Using ASP.NET Core and dotConnect for PostgreSQL


The term REST is an abbreviation for Representational State Transfer. It is a software architectural style created to guide the design and development of the architecture of the World Wide Web. REST defines a set of constraints describing how a distributed hypermedia system, such as the Web, should be architected. RESTful web services are HTTP-based, simple, lightweight, fast, scalable, and maintainable services that adhere to the REST architectural style.

The REST architectural style views data and functionality as resources accessed via Uniform Resource Identifiers (URIs). RESTful architecture is a client-server paradigm that uses a stateless communication protocol, typically HTTP, to exchange data between client and server. In REST, clients and servers interact through a defined and standardized interface.

This article looks at RESTful architecture and how we can implement a RESTful service using ASP.NET Core and dotConnect for PostgreSQL, a high-performance, enhanced data provider for PostgreSQL that is built on top of ADO.NET and can work in both connected and disconnected modes.

Prerequisites

To be able to work with the code examples demonstrated in this article, you should have the following installed on your system:

  • Visual Studio 2019 Community Edition
  • PostgreSQL
  • dotConnect for PostgreSQL

.NET Core, Visual Studio 2019, PostgreSQL, and a trial version of dotConnect for PostgreSQL are all available to download from their respective vendors’ websites.

Create the Database

You can create a database using the pgAdmin tool. To create a database using this tool, follow the steps given below:

  1. Launch the pgadmin tool
  2. Expand the Servers section
  3. Select Databases
  4. Right-click and click Create -> Database…
  5. Specify the name of the database and leave the other options at their default values
  6. Click Save to complete the process

Create a Database Table

Select and expand the database you just created

Select Schemas -> Tables

Right-click on Tables and select Create -> Table…

Figure 1

Specify the columns of the table as shown in Figure 2 below:

Figure 2

The table script is given below for your reference:

CREATE TABLE public."Product"
(
    "Id" bigint NOT NULL,
    code character(5) COLLATE pg_catalog."default" NOT NULL,
    name character varying(100) COLLATE pg_catalog."default" NOT NULL,
    quantity bigint NOT NULL,
    CONSTRAINT "Product_pkey" PRIMARY KEY ("Id")
)

We’ll use this database in the subsequent sections of this article to demonstrate how we can work with PostgreSQL and dotConnect in ASP.NET Core.

Features and Benefits of dotConnect for PostgreSQL

Some of the key features of dotConnect for PostgreSQL include the following:

  • High performance
  • Fully-managed code
  • Seamless deployment
  • Support for the latest version of PostgreSQL
  • Support for .NET Framework, .NET Core, and .NET Compact Framework
  • Support for both connected and disconnected modes
  • Support for all data types of PostgreSQL
  • Improved data binding capabilities
  • Support for monitoring query execution

You can learn more about the features of dotConnect for PostgreSQL on the product page. The following are some of the advantages of dotConnect for PostgreSQL:

  • Enables writing efficient and optimized code
  • Comprehensive support for ADO.NET
  • Support for Entity Framework
  • Support for LinqConnect
  • Support for both connected and disconnected modes

Introducing dotConnect for PostgreSQL

dotConnect for PostgreSQL is a high-performance data provider for PostgreSQL built on ADO.NET technology. It lets you take advantage of new approaches to designing application architecture, boosts productivity, and makes it easier to create database applications. Formerly known as PostgreSQLDirect.NET, it is an improved data provider for PostgreSQL that provides a comprehensive solution for building PostgreSQL-based database applications.

A scalable data access solution for PostgreSQL, dotConnect for PostgreSQL was designed with a high degree of flexibility in mind. You can use it effectively in WinForms, ASP.NET, ASP.NET Core, two-tier, three-tier, and multi-tier applications. The dotConnect for PostgreSQL data provider may be used as a robust ADO.NET data source or an effective application development framework, depending on the edition you select.

Create a New ASP.NET Core Web API Project in Visual Studio 2019

Once you’ve installed the necessary software and/or tools needed to work with dotConnect for PostgreSQL, follow the steps mentioned in an earlier article “Working with Queries Using Entity Framework Core and Entity Developer” to create a new ASP.NET Core 5.0 project in Visual Studio 2019.

Install NuGet Package(s)

To work with dotConnect for PostgreSQL in ASP.NET Core 5, you should install the following package into your project:

Devart.Data.PostgreSql

You have two options for installing this package: either via the NuGet Package Manager or through the Package Manager Console Window by running the following command.

PM> Install-Package Devart.Data.PostgreSql

Programming dotConnect for PostgreSQL

This section talks about how you can work with dotConnect for PostgreSQL.

Create the Model

Create a class named Product with the following code:

public class Product
{
    public int Id { get; set; }
    public string Code { get; set; }
    public string Name { get; set; }
    public int Quantity { get; set; }
}

This is our model class, which we’ll use for storing and retrieving data.

Create the RESTful Endpoints

Create a new controller class in this project and name it ProductController. Then replace the generated code with the following:

[Route("api/[controller]")]
[ApiController]
public class ProductController : ControllerBase
{
    [HttpGet]
    public List<Product> Get()
    {
        throw new NotImplementedException();
    }

    [HttpPost]
    public void Post([FromBody] Product product)
    {
        throw new NotImplementedException();
    }

    [HttpPut]
    public void Put([FromBody] Product product)
    {
        throw new NotImplementedException();
    }
}

As you can see, there are three RESTful endpoints in the ProductController class. Note the usage of the HTTP verbs in the controller methods. We’ll implement each of the controller methods shortly.

Insert Data Using dotConnect for PostgreSQL

The following code snippet can be used to insert data into the product table of the PostgreSQL database we created earlier:

[HttpPost]
public int Post([FromBody] Product product)
{
    using (PgSqlConnection pgSqlConnection = new PgSqlConnection(
        "User Id=postgres;Password=sa123#;host=localhost;database=postgres;"))
    {
        using (PgSqlCommand cmd = new PgSqlCommand())
        {
            // The table and Id column were created with quoted (case-sensitive) names
            cmd.CommandText = "INSERT INTO public.\"Product\" (\"Id\", code, name, quantity) " +
                "VALUES (@id, @code, @name, @quantity)";
            cmd.Connection = pgSqlConnection;
            cmd.Parameters.AddWithValue("id", product.Id);
            cmd.Parameters.AddWithValue("code", product.Code);
            cmd.Parameters.AddWithValue("name", product.Name);
            cmd.Parameters.AddWithValue("quantity", product.Quantity);

            if (pgSqlConnection.State != System.Data.ConnectionState.Open)
                pgSqlConnection.Open();

            return cmd.ExecuteNonQuery();
        }
    }
}
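
The connection string is hardcoded in these snippets to keep them short. In a real application, you would typically keep it in appsettings.json and read it through IConfiguration. Here is a minimal sketch, assuming a connection string entry named "Postgres" (the entry name and controller wiring are illustrative, not part of the original listing):

// appsettings.json (assumed):
//   "ConnectionStrings": { "Postgres": "User Id=postgres;Password=...;host=localhost;database=postgres;" }
// Requires: using Microsoft.Extensions.Configuration;

public class ProductController : ControllerBase
{
    private readonly string _connectionString;

    public ProductController(IConfiguration configuration)
    {
        // GetConnectionString reads from the ConnectionStrings section
        _connectionString = configuration.GetConnectionString("Postgres");
    }

    // The actions can then use: new PgSqlConnection(_connectionString)
}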

Read Data Using dotConnect for PostgreSQL

Reading data using dotConnect is fairly straightforward. The following code snippet illustrates how you can read data from the Product database table using dotConnect for PostgreSQL.

[HttpGet]
public List<Product> Get()
{
    List<Product> products = new List<Product>();

    using (PgSqlConnection pgSqlConnection = new PgSqlConnection(
        "User Id=postgres;Password=sa123#;host=localhost;database=postgres;"))
    {
        using (PgSqlCommand pgSqlCommand = new PgSqlCommand())
        {
            pgSqlCommand.CommandText = "SELECT * FROM public.\"Product\"";
            pgSqlCommand.Connection = pgSqlConnection;

            if (pgSqlConnection.State != System.Data.ConnectionState.Open)
                pgSqlConnection.Open();

            using (PgSqlDataReader pgSqlReader = pgSqlCommand.ExecuteReader())
            {
                while (pgSqlReader.Read())
                {
                    Product product = new Product();
                    product.Id = int.Parse(pgSqlReader.GetValue(0).ToString());
                    product.Code = pgSqlReader.GetValue(1).ToString();
                    product.Name = pgSqlReader.GetValue(2).ToString();
                    product.Quantity = int.Parse(pgSqlReader.GetValue(3).ToString());
                    products.Add(product);
                }
            }
        }
    }

    return products;
}

Modify Data Using dotConnect for PostgreSQL

The following code listing illustrates how you can take advantage of dotConnect for PostgreSQL to modify an existing record:

[HttpPut]
public void Put([FromBody] Product product)
{
    using (PgSqlConnection pgSqlConnection = new PgSqlConnection(
        "User Id=postgres;Password=sa123#;host=localhost;database=postgres;"))
    {
        using (PgSqlCommand cmd = new PgSqlCommand())
        {
            cmd.CommandText = "UPDATE public.\"Product\" SET name = @name WHERE \"Id\" = @id";
            cmd.Parameters.AddWithValue("id", product.Id);
            cmd.Parameters.AddWithValue("name", product.Name);
            cmd.Connection = pgSqlConnection;

            if (pgSqlConnection.State != System.Data.ConnectionState.Open)
                pgSqlConnection.Open();

            cmd.ExecuteNonQuery();
        }
    }
}
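
The article covers create, read, and update; deletion is not shown, but a Delete endpoint would follow the same pattern. A minimal sketch (not part of the original listing):

[HttpDelete("{id}")]
public int Delete(int id)
{
    using (PgSqlConnection pgSqlConnection = new PgSqlConnection(
        "User Id=postgres;Password=sa123#;host=localhost;database=postgres;"))
    {
        using (PgSqlCommand cmd = new PgSqlCommand())
        {
            cmd.CommandText = "DELETE FROM public.\"Product\" WHERE \"Id\" = @id";
            cmd.Parameters.AddWithValue("id", id);
            cmd.Connection = pgSqlConnection;

            if (pgSqlConnection.State != System.Data.ConnectionState.Open)
                pgSqlConnection.Open();

            // Returns the number of rows deleted
            return cmd.ExecuteNonQuery();
        }
    }
}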

Summary

dotConnect for PostgreSQL is a high-performance data provider for PostgreSQL built on top of the ADO.NET framework. It provides fast native connectivity to the PostgreSQL database, offers new approaches to building application architecture, and increases developer productivity.





A framework for building Open Graph images


You know that feeling when you make your latest hack project public, and you’re ready to share it with the world? And when you go to Twitter to post a link to your repository, you just see a big picture of yourself? We wanted to make that a better experience.

We recently set about creating a framework and service for automatically generating social sharing images for repositories and other resources on GitHub.

Before the update

Before, when you shared a link to a repository on any social media platform, you’d see something like this:

Screenshot of an old Twitter preview for GitHub repo links

We heard from you that seeing the author’s face was unexpected. Plus, there’s not a lot of quick information here, aside from the plaintext title and description.

We do have custom repository images, and you can still use those to give your project some bespoke branding—but most people don’t upload a custom image for their repositories, so we wanted to create a better default experience for every repo on GitHub.

After the update

Now, we generate a new image for you on-the-fly when you share a link to a repository somewhere:

Screenshot of new Twitter preview card for NASA

We create similar cards for issues, pull requests and commits, with more resources coming soon (like Discussions, Releases and Gists):

Screenshot of open graph Twitter card for a pull request

Open Graph image for a pull request

Screenshot of open graph Twitter card for a commit

Open Graph image for a commit

Screenshot of open graph Twitter card for an issue link

Open Graph image for, you guessed it, an issue

What’s going on behind the scenes? A quick intro to Open Graph

Open Graph is a set of standards for websites to be able to declare metadata that other platforms can pick out, to get a TL;DR of the page. You’d declare something like this in your HTML:

<meta property="og:image" content="https://www.rd.com/wp-content/uploads/2020/01/GettyImages-454238885-scaled.jpg" />

In addition to the image, we also define a number of other meta tags that are used for rendering information outside of GitHub, like og:title and og:description.
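
For example, a repository page might declare something like this (the values here are illustrative, not GitHub’s exact markup):

<meta property="og:title" content="rails/rails" />
<meta property="og:description" content="Ruby on Rails" />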

When a crawler (like Twitter’s crawling bot, which activates any time you share a link on Twitter) looks at your page, it’ll see those meta tags and grab the image. Then, when that platform shows a preview of your website, it’ll use the information it found. Twitter is one example, but virtually all social platforms use Open Graph to unfurl rich previews for links.

How does the image generator work?

I’ll show you! We’ve leveraged the magic of open source technologies to string some tools together. There are a ton of services that do image templating on-demand, but we wanted to deploy our own within our own infrastructure, to ensure that we have the control we need to generate any kind of image.

So: our custom Open Graph image service is a little Node.js app that uses the GitHub GraphQL API to collect data, generates some HTML from a template, and pipes it to Puppeteer to “take a screenshot” of that HTML. This is not a novel idea—lots of companies and projects (like vercel/og-image) use a similar process to generate an image.

We have a couple of routes that match patterns similar to what you’d find on GitHub.com:

// https://github.com/rails/rails/pull/41080
router.get("/:owner/:repo/pull/:number", generateImageMiddleware(Pull));

// https://github.com/rails/rails/issues/41078
router.get("/:owner/:repo/issues/:number", generateImageMiddleware(Issue));

// https://github.com/rails/rails/commit/2afc9059c9eb509f47d94250be0a917059afa1ae
router.get("/:owner/:repo/commit/:oid", generateImageMiddleware(Commit));

// https://github.com/rails/rails/pull/41080/commits/2afc9059c9eb509f47d94250be0a917059afa1ae
router.get("/:owner/:repo/pull/:number/commits/:oid", generateImageMiddleware(Commit));

// https://github.com/rails/rails/*
router.get("/:owner/:repo*", generateImageMiddleware(Repository));

When our application receives a request that matches one of those routes, we use the GitHub GraphQL API to collect some data based on the route parameters and generate an image using code similar to this:

async function generateImage(template, templateData) {
 // Render some HTML from the relevant template
 const html = compileTemplate(template, templateData);
 
 // Create a new page
 const page = await browser.newPage();
 
 // Set the content to our rendered HTML
 await page.setContent(html, { waitUntil: "networkidle0" });
 
 const screenshotBuffer = await page.screenshot({
   fullPage: false,
   type: "png",
 });
 
 await page.close();
 
 return screenshotBuffer;
}

Some performance gotchas

Puppeteer can be really slow—it’s launching an entire Chrome browser, so some slowness is to be expected. But we quickly saw some performance problems that we just couldn’t live with. Here are a couple of things we did to significantly improve performance of image generation:

waitUntil: networkidle0 is aggressively patient, so we replaced it

One Saturday night, I was generating and digging through Chromium traces, as one does, to determine why this service was so slow. I dug into these traces with the help of Electron maintainer and semicolon enthusiast @MarshallOfSound. We discovered a huge, two-second block of idle time (in pink):

Screenshot showing two seconds of idle time in Chromium trace

That’s a trace of everything between browser.newPage() and page.close(). The giant pink bar is “idle time,” and (through trial and error) we determined that this was the waitUntil: networkidle0 option passed to page.setContent(). We needed to set this option to say “only continue once all images, fonts, etc have finished loading,” so that we don’t take screenshots before the pages are actually ready. However, it seemed to add a significant amount of idle time, despite the page being ready for a screenshot 300ms in. Per networkidle0‘s docs:

networkidle0 – consider setting content to be finished when there are no more than 0 network connections for at least 500 ms.

We deduced that the big pink block was due to Puppeteer’s backoff time, where it waits 500ms before considering all network connections complete; but the numbers didn’t really line up. The pink bar shouldn’t have been nearly that big: it was around two seconds instead of the expected 500-ish milliseconds.

So, how did we fix it? Well, we want to wait until all images/fonts have loaded, but clearly Puppeteer’s method of doing so was a little greedy. It’s hard to see in a still image, but the below screenshot shows that all images have been decoded and rendered by roughly 115ms into the trace:

Screenshot showing images decoded and rendered

All we had to do was provide Puppeteer with a different heuristic to know when the page was “done” and ready for a screenshot. Here’s what we came up with:

   // Set the content to our rendered HTML
   await page.setContent(html, { waitUntil: "domcontentloaded" });
 
   // Wait until all images and fonts have loaded
   await page.evaluate(async () => {
     const selectors = Array.from(document.querySelectorAll("img"));
     await Promise.all([
       document.fonts.ready,
       ...selectors.map((img) => {
         // Image has already finished loading, let’s see if it worked
         if (img.complete) {
           // Image loaded and has presence
           if (img.naturalHeight !== 0) return;
           // Image failed, so it has no height
           throw new Error("Image failed to load");
         }
          // Image hasn’t loaded yet, so add an event listener to know when it does
         return new Promise((resolve, reject) => {
           img.addEventListener("load", resolve);
           img.addEventListener("error", reject);
         });
       }),
     ]);
   });

This isn’t magic—it’s standard DOM practices. But it was a much better solution for our use-case than the abstraction provided by Puppeteer. We changed waitUntil to domcontentloaded to ensure that the HTML had finished being parsed, then passed a custom function to page.evaluate. This gets run in the context of the page itself but pipes the return value to the outer context. This meant that we could listen for image load events and pause execution until the Promises have been resolved.

You can see the difference in our performance graphs (going from ~2.25 seconds to ~600ms):

Screenshot of the difference in our performance graphs, going from ~2.25 seconds to ~600ms

Double your rendering speed with 1MB of memory

More memory means more speed, right? Sure! At GitHub, when we deploy a new service to our internal Kubernetes infrastructure, it gets a default amount of memory: 512MB (technically MiB, but who’s counting?). When we were scaling this service to be enabled for 100% of repositories, we wanted to increase our resource limits to ensure we didn’t see any performance degradation as the service saw more traffic. What we didn’t know was that 512MB was a magic number, and that setting our memory limit to at least 1MB more would unlock significantly better performance within Chromium.

When we bumped that limit, we saw this change:

Graph showing reduction in time to generate image

In production, that was a reduction of almost 500ms to generate an image. It stands to reason that more memory will be “faster” but not that much without any increase in traffic—so what happened? Well, it turns out that Chromium has a flag for devices with less than 512MB of memory and considers these low-spec devices. Chromium uses that flag to run some processes sequentially instead of in parallel, to improve reliability at the cost of performance on devices that couldn’t support increased performance anyway. If you’re interested in running a service like this on your own, check to see if you can bump the memory limit past 512MB – the results are pretty great!
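
In a Kubernetes deployment manifest, that bump is a one-line change to the container’s resource limits; a sketch with illustrative values:

resources:
  limits:
    # Anything comfortably past 512Mi avoids Chromium's low-spec-device mode
    memory: 600Mi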

Stats

Generating an image takes 280ms on average. We could go even lower if we wanted to make some other changes, like generating a JPEG instead of a PNG.

The image generator service generates around two million unique-ish images per day. We also return a cached image for 40% of the total requests.

And that’s it! I hope you’re enjoying these images in your Twitter feeds. I know it’s made mine a lot more colorful. If you have any questions or comments, feel free to ping me on Twitter: @JasonEtco!






Developer-Friendly Passwordless Auth


I’d wager to say that most websites that are business-minded have accounts. A way to log into them. Social media sites, eCommerce sites, CMS systems, you name it, having accounts people log into is at the heart of them. So… make it good. That’s what Magic does (great name!).

Have you heard that language used in a sign-in system, like “email me a magic link to sign in”? Well, now you know what can power it. But Magic isn’t just that; it covers all types of auth, including social logins and WebAuthn. Magic is a developer SDK that enables passwordless login in all of these forms.

Magic is for teams of any size. Upon signing up, you’ll get $85 in credit, which covers 10,000 logins, and each login is $0.0085 after that. That kind of pricing makes it extremely affordable for apps of any size. Small apps will have a tiny (or no) bill, and by the time you have tens or hundreds of thousands of users, the cost will feel negligible, especially considering all the time you saved by not rolling auth from scratch.

Why Magic? What does it offer?

Magic appeals to developers because:

  1. Superior developer experience. It’s easy to use and it’s fast to implement.
  2. Metered pricing — only pay for what you need. Also save money by avoiding the technical debt of your own auth.
  3. The ability to adapt to future authentication methods. Auth is always evolving.
  4. No passwords to deal with — fewer security concerns.
  5. Next-gen security infrastructure.

I really like all those, but especially #3. I think of it like image CDNs that offer optimization. The world of images is always evolving as well, and a good image CDN will evolve to support the latest formats and optimization techniques without any work on your end. So too with Magic and Auth.

The “J” and the “a” in Jamstack originally referred to “JavaScript” and “APIs”, which is exactly what Magic offers. Magic fits the Jamstack model very nicely. No server? No problem. Even though Magic absolutely has server-side offerings, and Jamstack could use things like cloud functions, you can get auth done entirely client-side if you’d like. Here’s a great (quick!) tutorial on that.

Here’s the most important thing though: Great UX. Users really like it when the auth of an app feels easy and is never a blocker for them using your app. That’s gonna help your conversion rates.

How do you implement Magic?

First, you need an account. I found it satisfying, of course, that they dog food their own auth signup process, giving you a taste for what you can have right away.

From here, you can scaffold out an app super quickly. The great DX continues as they offer a way to get a working app right off the bat:

That’s a web-based starter, for which they have docs, examples, and live demos.

I was able to port a demo over to CodePen Projects super quickly. Check it out!

That’s just a client-side web version. The core of it is really this simple:

import { Magic } from 'magic-sdk'

const m = new Magic(API_KEY)
// loginWithMagicLink takes an options object; the address below is a placeholder
m.auth.loginWithMagicLink({ email: 'user@example.com' })
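
From there, the SDK’s user module can tell you whether someone is already authenticated; a quick sketch based on Magic’s documented API (run inside an async function):

// isLoggedIn() returns a Promise<boolean>
const isLoggedIn = await m.user.isLoggedIn()
if (isLoggedIn) {
  // render the signed-in experience
}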

They’ve got server-side support for Node, Python, Ruby, PHP and Go. Magic is for apps of any scale, including incredibly security-sensitive apps. For example, you can even use client-side auth but then use AWS services, with their Hardware Security Modules (HSMs) and all.

Magic has SDKs for React Native, iOS, Android, and of course the native web. Then, in addition to the email magic link style of signup, they have social login support for Google, Facebook, Apple, GitHub, GitLab, Bitbucket, LinkedIn, Twitter, and Discord. Phew! That’s a lot of support for a lot of things. Magic has you covered.

While I was plucking away with this and logging in myself, I could see all the action on my dashboard.

No Passwords

It’s notable that with Magic, there are literally no passwords. Magic’s email link flow means users need no passwords, and with social logins, users only need to be logged into that other service, not remember or save a password unique to your app. That’s the Magic thesis, which they spell out clearly in Passwords Suck:

Using passwords is a nightmare. No one wants to memorize yet another passphrase when our heads are already filled with them. Passwords are a huge vector for security breaches precisely because they place the burden of choosing unique and secure secrets on the user, who just can’t be bothered. We end up having one password for all the important stuff like banking, work, and school, one for the social-medias, and one for all the miscellaneous one-off services we don’t care too much about. The result is that a whopping 59% of people reuse their passwords across services, which means a leak anywhere quickly becomes a liability for the whole web.

Going password-less is good for users and good for the web.

Get Started

I’d encourage you to check it out. You can sign up for free, no credit card required, and if you do that today you’ll get 10,000 free logins on your account to try out. If you love it and refer fellow industry folks to Magic, you get 3,000 bonus logins, up to 90,000 in total.





Easy Way To Set Up JavaScript Automation Framework


What is TestCafe?

TestCafe is a non-Selenium-based, open-source JavaScript end-to-end test automation framework built with Node.js. TestCafe supports JavaScript, CoffeeScript, and TypeScript.

TestCafe is very popular nowadays since it is stable and easy to set up. TestCafe does not depend on Selenium or other testing software; it runs on the popular Node.js platform and makes use of the browsers that you already have.

TestCafe supports JavaScript, TypeScript, and CoffeeScript with no additional setup, and it compiles test code automatically, so there is no need to compile it manually.

In this tutorial, we are creating a TypeScript file with a .ts extension. If you want to use JavaScript, just create test scripts with a .js extension and follow JavaScript conventions. There is no additional setup required.

Features of TestCafe

Easy setup: Compared to other automation tools on the market, TestCafe setup is quick and easy. Anyone who knows the basics can do it on their own.

No third-party dependency: TestCafe doesn’t depend on third-party components like WebDriver or external JARs.

Easy to write tests: TestCafe’s command-chaining syntax makes teams more productive: the usual 20 lines of code in other frameworks can often be written in just 10 to 12 lines of TestCafe code.

Fast and stable: Because a test is executed inside the browser, tests are fast compared to other frameworks, and they are stable because events are simulated internally using JavaScript.

Multiple tab/window support: Unlike Cypress, TestCafe can switch between windows and supports multiple tabs.

Iframe support: TestCafe supports iframes, and you can switch to and from an iframe in your tests.

Parallel testing: With concurrency mode enabled, TestCafe tests can run in parallel (see the command sketch after this list).

Automated waiting: TestCafe waits automatically for elements to appear, so there is no need to add explicit waits.

Cross-browser testing: TestCafe supports all major browsers: the old and new Edge, Firefox, IE, and all Chrome-family browsers.
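
For example, concurrency is enabled from the command line with the -c flag; the sketch below (the browser and path are illustrative) runs the tests in three Chrome instances at once:

npx testcafe -c 3 chrome ./tests/specs/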

Step-by-Step Guide to Configure/Set Up the TestCafe JavaScript/TypeScript Automation Framework

Prerequisites

  1. Install NodeJS: If you don’t have NodeJS installed in your system navigate to https://nodejs.org/en/download/ and choose LTS download and install.
  2. Install Visual Studio Code: If you don’t have Visual Studio Code on your computer navigate to https://code.visualstudio.com/download to download and install. 

As mentioned earlier, the framework setup is the same for both JavaScript and TypeScript; the only difference is that you create your test scripts and page objects as JavaScript files instead of TypeScript files and follow JavaScript conventions.

In this tutorial, we are using TestCafe with TypeScript.

Step 1: Create a New Project Folder

Navigate to your desired directory and create a new Project Folder (ex: TestCafeFramework)

New Project Folder

Step 2: Open Project Folder in Visual Studio Code IDE

Since we are using Visual Studio Code as the IDE in this tutorial, open the Project Folder in Visual Studio Code:

Visual Studio Code > File menu > Open Folder > Choose newly created Project Folder (ex: TestCafeFramework)

Project Folder in Visual Studio Code IDE

Step 3: Create a Package.json File

package.json helps in many ways: it tracks all installed dependencies, lets us create shortcuts for running tests, and more.

To create the package.json file, go to Visual Studio Code > Terminal > New Terminal.

Enter the below command

npm init

Once you enter the above command in the Terminal, it will ask a set of questions; you can just hit the [Enter] key or type the desired values if you wish.

Entering npm init Command

After the successful execution of the npm init command, you can see the package.json created in your Project Folder.

Executing npm init Command
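
By the way, package.json is also where you can add the shortcut for running tests mentioned above; a sketch of a scripts entry, assuming the folder layout created in the steps below:

{
  "scripts": {
    "test": "testcafe chrome ./tests/specs/"
  }
}

With this in place, npm test runs the entire specs folder.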

Step 4: Install TestCafe using NPM

TestCafe can be installed as an npm package using the npm install command.

In the Visual Studio Code Terminal, enter the below command to install the TestCafe npm package on your system.

npm install --save-dev testcafe

Using Visual Studio Code Terminal

After successful installation of the above command, the folder named node_modules should be created in your Project Folder.

Step 5: Create Folders to Manage Page Objects and Specs

We are using the page object model in this tutorial, so we need to create separate folders for specs and page objects; this folder structure helps us manage the tests and selectors in an easy way.

We are creating 3 folders.

Tests: This is the main folder, which will have subfolders named pages and specs

Pages: This folder contains all the page object files needed for your project

Specs: This folder contains all the test scripts 

The directory structure will look like the following:

TestCafeFramework
-tests
--pages
--specs

Create Test Folder Under The Project Folder

Creating Test Folder Under The Project Folder

Create Two Folders Inside The tests Folder and Name Them As pages and specs

Creating Two Folders Inside The tests Folder

Step 6: Create your first TestCafe page object file

Under the pages folder, create a new file and name it example-page.ts. This is our page-object file, which will contain all the selectors we need for our TestCafe automation scripts.

Note: Since we are creating a Typescript project, we are creating the file with .ts extension but if you want to create a Javascript project, create the file with .js extension

Creating your first TestCafe page object file

Step 7: Add Page Object Selectors to Your Page-Object File

In the above step, we created the page-object file example-page.ts; it’s time to add some selectors.

In this tutorial, we are going to create a sample TestCafe project with the following test case:

  • Navigate to google.com
  • Search for the text TestCafe
  • Click the first result whose link text contains “TestCafe”
  • Ensure the TestCafe home page is loaded

Copy and paste the below code snippet into example-page.ts:

// example-page.ts
import { Selector } from 'testcafe';

class GoogleSearch {
  get searchInput() { return Selector("input[name='q']"); }
  get searchResult() { return Selector('h3').withText('TestCafe'); }
  get homepageLogo() { return Selector('div[class="logo"]'); }
}

export default new GoogleSearch();

Adding Page Object Selectors to Your Page-Object File

Step 8: Create The First Test Script File For Your TestCafe Automation Framework

Inside the specs folder, create a new file named example-spec.ts.

example-spec.ts contains the actual test code: navigating to the desired URL, performing actions, and so on.

Creating First Test Script File For TestCafe Automation Framework

Step 9: Create Your First Test Script With TestCafe

We have created a spec file and have already outlined the test case we are going to automate.

Copy and Paste the below code snippet into your example-spec.ts

import search from "../pages/example-page";
fixture`Google Search Demo`
  .page`https://google.com`;
test('Google Search Validation', async t => {
  await t
    .typeText(search.searchInput, "TestCafe")
    .pressKey('enter')
    .click(search.searchResult)
    .expect(search.homepageLogo.visible).eql(true);
});

First Test Script With TestCafe

Step 10: Execute or Run your TestCafe tests

TestCafe runs tests in the browser specified in the command. So, if you want to run your tests in Chrome, just use the below command:

npx testcafe chrome

Running TestCafe Tests

Once you execute the above command, your tests start running in the Chrome browser using TestCafe, and the results are displayed in the console.

How To Execute a Single Test File in TestCafe?

If you want to execute a single test file using TestCafe, you just need to specify the file name in the command line, like below:

npx testcafe chrome ./tests/specs/example-spec.ts

With that, you have successfully built the TestCafe Framework from Scratch! Hope this helps.




What kind of bookbinding and assembly did they do for this blu ray set?



So I was browsing Amazon, and I came across the Avatar The Last Airbender 15th edition series box. Unlike the other ones that just do the standard, this one is actually like a book you can put on the shelf. A fan of physical media and this show, I bought it. I have actually been trying to figure out how to make my own box version, with each season getting its own book. The main problem has been the assembly of the box, mainly the pages holding the discs and keeping them so they aren’t loose in the sleeve. Looking at this, it folds all the way horizontal, no gap. And looking at the discs, they are held nice and firm in there. Like it says in the title, what’s the assembly process for this thing? Or is there a resource for different DVD assembly or for book/disc?




Website Layout Breaks in Updated Chromium Browsers (flexbox)


The layout of the website breaks after updating Chrome from version 90 to 92. It also displays “incorrectly” in the Edge and Brave browsers. In Firefox, it still appears “correctly,” as it does on the left of the screenshot below.

The boxes use flexbox for layout. The widths of the images and their parent containers should be determined by their natural width, with the height set at 50% of the parent container in this configuration. The old engine seems to respect the width of the child and naturally shrink-wrap the parent container to the smallest possible width without going any narrower. The right-hand side of each card/box is expanded using flex: 1;. There is no flex property nor width assigned to the left-hand side of the card containing the icon and “learn more” button. I have tried using width: auto;, width: min-content;, and width: max-content; on the left box in the card, but this yields the same results.

To further complicate the issue, I don’t see any reason that the CSS selectors should single out the last box and make it any different. The problem “goes away” when I set an explicit width in pixels for the image container, but this is not a practical solution, as the layout is dependent on the width following naturally/automatically from the height of the image being 50% of the height of the parent container.

Again, it still looks fine in Firefox, and in Brave on Linux (but not Brave on Windows). Anyone know what might be going on here and how one might approach fixing it (without some JavaScript hack that I am tempted to employ)?

r/webdev - Website Layout Breaks in Updated Chromium Browsers (flexbox)




Working With Transactions in Entity Framework Core and Entity Developer


Entity Framework Core, a lightweight, cross-platform version of Entity Framework, gives you a standard way to retrieve data from various data sources through the Entity Framework programming model. It supports working with transactions as well, so you can create and manage them elegantly.

This article presents a discussion on how we can work with transactions using Entity Framework Core and Entity Developer for data access.

Prerequisites

To be able to work with the code examples demonstrated in this article, you should have the following installed on your system:

  • Visual Studio 2019 Community Edition or higher
  • SQL Server 2019 Developer Edition or higher
  • Entity Developer from Devart

Visual Studio 2019, SQL Server 2019 Developer Edition, and a trial copy of Entity Developer are all available to download from their respective vendors’ websites.

Creating the Database

Let’s first create the database. We’ll use a database with a few tables. Create a database named Test and then run the following script to create tables in this database:

CREATE TABLE [dbo].[Customer](

      [Id] [bigint] IDENTITY(1,1) NOT NULL,
      [CustomerName] [nvarchar](max) NOT NULL,
      [CustomerEmail] [nvarchar](max) NOT NULL,
      [CustomerPhone] [nvarchar](max) NOT NULL,
  
      CONSTRAINT [PK_Customer] PRIMARY KEY CLUSTERED 
      (
            [Id] ASC
      )

) ON [PRIMARY]

GO

 
CREATE TABLE [dbo].[Product](
      [Id] [bigint] IDENTITY(1,1) NOT NULL,
      [ProductName] [varchar](max) NULL,
      [Description] [nvarchar](max) NULL,
      [Quantity] [bigint] NULL,
      [Price] [decimal](18, 2) NULL,

      CONSTRAINT [PK_Product] PRIMARY KEY CLUSTERED 
      (
            [Id] ASC
      )
)

CREATE TABLE [dbo].[Order](

      [Id] [bigint] IDENTITY(1,1) NOT NULL,
      [OrderNumber] [nvarchar](max) NULL,
      [OrderDate] [datetime] NULL,
      [OrderQuantity] [int] NULL,
      [CustomerId] [bigint] NULL,
      [ProductId] [bigint] NULL,

      CONSTRAINT [PK_Orders] PRIMARY KEY CLUSTERED 
      (
            [Id] ASC
      )
)

GO

Next, you can run the following script to add foreign key constraints to the Order table.

ALTER TABLE [dbo].[Order]  WITH CHECK ADD CONSTRAINT [FK_Orders_Customers] FOREIGN KEY([CustomerId])

REFERENCES [dbo].[Customer] ([Id])

GO

ALTER TABLE [dbo].[Order] CHECK CONSTRAINT [FK_Orders_Customers]

GO

ALTER TABLE [dbo].[Order]  WITH CHECK ADD CONSTRAINT [FK_Orders_Product] FOREIGN KEY([ProductId])

REFERENCES [dbo].[Product] ([Id])

GO

ALTER TABLE [dbo].[Order] CHECK CONSTRAINT [FK_Orders_Product]

GO

When you execute these scripts, the three database tables Customer, Product and Order will be created together with the relationships. Figure 1 below shows the table design:

 

Figure 1

Follow the steps mentioned in an earlier article “Working With Queries Using Entity Framework Core and Entity Developer” to create a new ASP.NET Core 5.0 project in Visual Studio. Create a new model by following the steps mentioned there as well.

Once you’ve created a new ASP.NET Core 5.0 project and an Entity Data Model using Entity Developer, here’s how your Entity Data Model would look in the design view.

Figure 2

Why Do We Need Transactions?

Transactions allow for the atomic execution of multiple database operations in a single batch. The statements that make up the transaction are applied to the database when the transaction is committed. Conversely, if any of the updates fail, the transaction is rolled back and none of the modifications are committed to the database. As an example, when you place an order for, say, 100 units of an item, that quantity should also be deducted from the current stock recorded in the Product table: either both updates succeed or neither does.

Working With Transactions

This section talks about how transactions can be used to execute interdependent operations in a batch. Assuming you have a data context instance named dbContext, you can use the following code to retrieve a connection instance from it:

var connection = dbContext.Database.GetDbConnection();

You can then check if the connection state is open and open the connection if it is not already open as shown in the code snippet given below:

if (connection.State != System.Data.ConnectionState.Open)
{
    connection.Open();
}

However, none of the above statements are required in Entity Framework Core (unless you need to write custom code against the connection properties, etc.), since a connection is created and opened automatically whenever the data context needs one.

You can write the following code to execute a couple of statements as a batch in a transactional way:

using var dbContext = new TestModel();
using var transaction = dbContext.Database.BeginTransaction();

try
{
    Order order = new Order();
    order.OrderNumber = "Ord-2021-003";
    order.ProductId = 1;
    order.CustomerId = 2;
    order.OrderDate = DateTime.Today;
    order.OrderQuantity = 100;
    dbContext.Add(order);

    dbContext.SaveChanges();

    Product product = dbContext.Products.Where(p => p.Id == order.ProductId).First();
    product.Quantity -= 100;

    dbContext.SaveChanges();
    transaction.Commit();
}
catch (Exception ex)
{
    transaction.Rollback();
    _logger.LogError(ex.Message);
    throw;
}
// No finally/Dispose block is needed: the using declarations dispose the
// transaction and the context when they go out of scope.

If the database supports transactions, all the changes you’ve made to the entities are applied to the underlying database when you call the SaveChanges method. If an error occurs, SaveChanges guarantees that the operation will either entirely succeed or leave the database completely unaffected.

When you call the SaveChanges method while a transaction is already in progress on the context, Entity Framework Core automatically creates a savepoint before any data is saved to the database. Savepoints are points within a database transaction to which the transaction can be rolled back, without rolling back the transaction as a whole, if an error occurs. If SaveChanges encounters an error, the transaction is automatically rolled back to the savepoint, leaving it in the same state as if the SaveChanges call had never run.
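
You can also manage savepoints explicitly on the transaction object; a minimal sketch using the savepoint API introduced in EF Core 5.0 and reusing the order and product entities from the listing above (the savepoint name is arbitrary):

using var transaction = dbContext.Database.BeginTransaction();

dbContext.Add(order);
dbContext.SaveChanges();

// Mark a point we can roll back to without abandoning the whole transaction
transaction.CreateSavepoint("AfterOrderInsert");

try
{
    product.Quantity -= 100;
    dbContext.SaveChanges();
}
catch
{
    // Undo the stock update but keep the inserted order
    transaction.RollbackToSavepoint("AfterOrderInsert");
}

transaction.Commit();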

The BeginTransaction method creates and starts a new transaction object and returns the newly created transaction instance. You can also take advantage of the DbContext.Database.UseTransaction() method to use an existing transaction instance that was created outside the scope of the context object.
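
For instance, a transaction begun directly on the underlying ADO.NET connection can be shared with the context through UseTransaction; a sketch of that pattern (the DELETE statement is illustrative):

var connection = dbContext.Database.GetDbConnection();
connection.Open();

using var dbTransaction = connection.BeginTransaction();

// Run a raw ADO.NET command inside the transaction
using (var command = connection.CreateCommand())
{
    command.Transaction = dbTransaction;
    command.CommandText = "DELETE FROM [dbo].[Order] WHERE [OrderQuantity] = 0";
    command.ExecuteNonQuery();
}

// Enlist the EF Core context in the same transaction, then save through it
dbContext.Database.UseTransaction(dbTransaction);
dbContext.SaveChanges();

dbTransaction.Commit();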

Working With TransactionScope

When dealing with transactions, you can take advantage of the TransactionScope class. This class provides an elegant way to mark a block of code as taking part in a transaction, without you having to interact with the transaction itself.

TransactionScope is adept at picking up and handling ambient transactions automatically. If you are building a transactional application, it is strongly advised that you use the TransactionScope class because of its simplicity and efficiency.

With TransactionScope, you can use transactions that span multiple databases, or a single database with several connection strings, and you can handle local as well as distributed transactions seamlessly.

The following code snippet illustrates the general structure you should follow when working with TransactionScope in your applications:

try
{
    using (TransactionScope scope = new TransactionScope())
    {

        //Perform first database operation
        //Perform second database operation
        //...
         scope.Complete();
    }
}

catch (Exception ex)
{
    //Write your code here to handle exception
}

The following code listing uses the above structure and illustrates how you can work with TransactionScope:

try
{
    using (TransactionScope transactionScope = new TransactionScope())
    {
        using (var dbContext = new TestModel())
        {
            Order order = new Order();
            order.OrderNumber = "Ord-2021-003";
            order.ProductId = 1;
            order.CustomerId = 2;
            order.OrderDate = DateTime.Today;
            order.OrderQuantity = 100;
            dbContext.Add(order);

            dbContext.SaveChanges();

            Product product = dbContext.Products.Where(p => p.Id == order.ProductId).First();
            product.Quantity -= 100;

            dbContext.SaveChanges();
        }

        transactionScope.Complete();
    }
}
catch (Exception ex)
{
    _logger.LogError(ex.Message);
    throw;
}

Here I’ve shown you how to work with TransactionScope using a single database.

Summary

It is good practice not to keep transactions running for a long time; avoid using transactions when the statements they execute are long-running. In particular, transactions that depend on user input to proceed can degrade the performance of your application.





Using Absolute Value, Sign, Rounding and Modulo in CSS Today


For quite a while now, the CSS spec has included a lot of really useful mathematical functions, such as trigonometric functions (sin(), cos(), tan(), asin(), acos(), atan(), atan2()), exponential functions (pow(), exp(), sqrt(), log(), hypot()), sign-related functions (abs(), sign()) and stepped value functions (round(), mod(), rem()).

However, these are not yet implemented in any browser, so this article is going to show how, using CSS features we already have, we can compute the values that abs(), sign(), round() and mod() should return. And then we’ll see what cool things this allows us to build today.

Screenshot collage - a 2x2 grid. The first one shows the items of a full-screen navigation sliding down with a delay that's proportional to the distance to the selected one. The second one shows a cube with each face made of neon tiles; these tiles shrink and go inwards, into the cube, with a delay that depends on the distance from the midlines of the top face. The third one is a time progress with a tooltip showing the elapsed time in a mm::ss format. The fourth one is a 3D rotating musical toy with wooden and metallic stars and a wooden crescent moon hanging from the top.
A few of the things these functions allow us to make.

Note that none of these techniques were ever meant to work in browsers from back in the days when dinosaurs roamed the internet. Some techniques depend on the browser supporting the ability to register custom properties (using @property), which means they’re limited to Chromium for now.

The computed equivalents

--abs

We can get this by using the new CSS max() function, which is already implemented in the current versions of all major browsers.

Let’s say we have a custom property, --a. We don’t know whether this is positive or negative and we want to get its absolute value. We do this by picking the maximum between this value and its additive inverse:

--abs: max(var(--a), -1*var(--a));

If --a is positive, this means it’s greater than zero, and multiplying it with -1 gives us a negative number, which is always smaller than zero. That, in turn, is always smaller than the positive --a, so the result returned by max() is equal to var(--a).

If --a is negative, this means it’s smaller than zero, and that multiplying it by -1 gives us a positive number, which is always bigger than zero, which, in turn, is always bigger than the negative --a. So, the result returned by max() is equal to -1*var(--a).

--sign

This is something we can get using the previous section as the sign of a number is that number divided by its absolute value:

--abs: max(var(--a), -1*var(--a));
--sign: calc(var(--a)/var(--abs));

A very important thing to note here is that this only works if --a is unitless, as we cannot divide by a number with a unit inside calc().

Also, if --a is 0, this solution works only if we register --sign (this is only supported in Chromium browsers at this point) with an initial-value of 0:

@property --sign {
  syntax: '<integer>';
  initial-value: 0;
  inherits: false /* or true depending on context */
}

This is because --a, being 0, also makes --abs compute to 0 — and dividing by 0 is invalid in CSS calc() — so we need to make sure --sign gets reset to 0 in this situation. Keep in mind that this does not happen if we simply set it to 0 in the CSS prior to setting it to the calc() value and we don’t register it:

--abs: max(var(--a), -1*var(--a));
--sign: 0; /* doesn't help */
--sign: calc(var(--a)/var(--abs));

In practice, I’ve also often used the following version for integers:

--sign: clamp(-1, var(--a), 1);

Here, we’re using a clamp() function. This takes three arguments: a minimum allowed value -1, a preferred value var(--a) and a maximum allowed value, 1. The value returned is the preferred value as long as it’s between the lower and upper bounds and the limit that gets exceeded otherwise.

If --a is a negative integer, this means it’s smaller or equal to -1, the lower bound (or the minimum allowed value) of our clamp() function, so the value returned is -1. If it’s a positive integer, this means it’s greater or equal to 1, the upper bound (or the maximum allowed value) of the clamp() function, so the value returned is 1. And finally, if --a is 0, it’s between the lower and upper limits, so the function returns its value (0 in this case).

This method has the advantage of being simpler without requiring Houdini support. That said, note that it only works for unitless values (comparing a length or an angle value with integers like ±1 is like comparing apples and oranges — it doesn’t work!) that are either exactly 0 or at least as big as 1 in absolute value. For a subunitary value, like -.05, our method above fails, as the value returned is -.05, not -1!

My first thought was that we can extend this technique to subunitary values by introducing a limit value that’s smaller than the smallest non-zero value we know --a can possibly take. For example, let’s say our limit is .000001 — this would allow us to correctly get -1 as the sign for -.05, and 1 as the sign for .0001!

--lim: .000001;
--sign: clamp(-1*var(--lim), var(--a), var(--lim));

Temani Afif suggested a simpler version that would multiply --a by a very large number in order to produce a superunitary value.

--sign: clamp(-1, var(--a)*10000, 1);

I eventually settled on dividing --a by the limit value because it just feels a bit more intuitive to see what minimum non-zero value it won’t go below.

--lim: .000001;
--sign: clamp(-1, var(--a)/var(--lim), 1);

--round (as well as --ceil and --floor)

This is one I was stuck on for a while until I got a clever suggestion for a similar problem from Christian Schaefer. Just like the case of the sign, this only works on unitless values and requires registering the --round variable as an <integer> so that we force rounding on whatever value we set it to:

@property --round {
  syntax: '<integer>';
  initial-value: 0;
  inherits: false /* or true depending on context */
}

.my-elem { --round: var(--a); }

By extension, we can get --floor and --ceil if we subtract or add .5:

@property --floor {
  syntax: '<integer>';
  initial-value: 0;
  inherits: false /* or true depending on context */
}

@property --ceil {
  syntax: '<integer>';
  initial-value: 0;
  inherits: false /* or true depending on context */
}

.my-elem {
  --floor: calc(var(--a) - .5);
  --ceil: calc(var(--a) + .5)
}

--mod

This builds on the --floor technique in order to get an integer quotient, which then allows us to get the modulo value. This means that both our values must be unitless.

@property --floor {
  syntax: '<integer>';
  initial-value: 0;
  inherits: false /* or true depending on context */
}

.my-elem {
  --floor: calc(var(--a)/var(--b) - .5);
  --mod: calc(var(--a) - var(--b)*var(--floor))
}
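
As a quick sanity check with concrete numbers, assume --a: 7 and --b: 3. Then 7/3 - .5 is 1.8333…, which the registered <integer> rounds to 2, so --mod computes to 7 - 3*2 = 1, exactly what mod(7, 3) should return:

.my-elem {
  --a: 7;
  --b: 3;
  --floor: calc(var(--a)/var(--b) - .5); /* rounds to 2 */
  --mod: calc(var(--a) - var(--b)*var(--floor)); /* 7 - 3*2 = 1 */
}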

Use cases

What sort of things can we do with the technique? Let’s take a good look at three use cases.

Effortless symmetry in staggered animations (and not only!)

While the absolute value can help us get symmetrical results for a lot of properties, animation-delay and transition-delay are the ones where I’ve been using it the most, so let’s see some examples of that!

We put --n items within a container, each of these items having an index --i. Both --n and --i are variables we pass to the CSS via style attributes.

- let n = 16;

.wrap(style=`--n: ${n}`)
  - for(let i = 0; i < n; i++)
    .item(style=`--i: ${i}`)

This gives us the following compiled HTML:

<div class='wrap' style='--n: 16'>
  <div class='item' style='--i: 0'></div>
  <div class='item' style='--i: 1'></div>
  <!-- more such items -->
</div>

We set a few styles such that the items are laid out in a row and are square with a non-zero edge length:

$r: 2.5vw;

.wrap {
  display: flex;
  justify-content: space-evenly;
}

.item { padding: $r; }
Screenshot showing the items lined in a row and DevTools with the HTML structure and the styles applied.
The result so far.

Now we add two sets of keyframes to animate a scaling transform and a box-shadow. The first set of keyframes, grow, makes our items scale up from nothing at 0% to full size at 50%, after which they stay at their full size until the end. The second set of keyframes, melt, shows us the items having inset box shadows that cover them fully up to the midway point in the animation (at 50%). That’s also when the items reach full size after growing from nothing. Then the spread radius of these inset shadows shrinks until it gets down to nothing at 100%.

$r: 2.5vw;

.item {
  padding: $r;
  animation: a $t infinite;
  animation-name: grow, melt;
}

@keyframes grow {
  0% { transform: scale(0); }
  50%, 100% { transform: none; }
}

@keyframes melt {
  0%, 50% { box-shadow: inset 0 0 0 $r; }
  100% { box-shadow: inset 0 0; }
}
Animated gif. Shows 16 black square tiles in a row growing from nothing to full size, then melting from the inside until they disappear. The cycle then repeats. In this case, all tiles animate at the same time.
The base animation (live demo).

Now comes the interesting part! We compute the middle between the index of the first item and that of the last one. This is the arithmetic mean of the two (since our indices are zero-based, the first and last are 0 and n - 1 respectively):

--m: calc(.5*(var(--n) - 1));

We get the absolute value, --abs, of the difference between this middle, --m, and the item index, --i, then use it to compute the animation-delay:

--abs: max(var(--m) - var(--i), var(--i) - var(--m));
animation: a $t calc(var(--abs)/var(--m)*#{$t}) infinite backwards;
animation-name: grow, melt;

The absolute value, --abs, of the difference between the middle, --m, and the item index, --i, can be as small as 0 (for the middle item, if --n is odd) and as big as --m (for the end items). This means dividing it by --m always gives us a value in the [0, 1] interval, which we then multiply with the animation duration $t to ensure every item has a delay between 0s and the animation-duration.

Note that we’ve also set animation-fill-mode to backwards. Since most items will start the animations later, this tells the browser to keep them with the styles in the 0% keyframes until then.

In this particular case, we wouldn’t see any difference without it either because, while the items would be at full size (not scaled to nothing like in the 0% keyframe of the grow animation), they would also have no box-shadow until they start animating. However, in a lot of other cases, it does make a difference and we shouldn’t forget about it.

Another possibility (one that doesn't involve setting the animation-fill-mode) would be to ensure the animation-delay is always smaller than or equal to 0 by subtracting a full animation-duration out of it.

--abs: max(var(--m) - var(--i), var(--i) - var(--m));
animation: a $t calc((var(--abs)/var(--m) - 1)*#{$t}) infinite;
animation-name: grow, melt;

Both options are valid, and which one you use depends on what you prefer to happen at the very beginning. I generally tend to go for negative delays because they make more sense when recording the looping animation to make a gif like the one below, which illustrates how the animation-delay values are symmetrical with respect to the middle.

Animated gif. Shows 16 black square tiles in a row, each of them growing from nothing to full size, then melting from the inside until they disappear, with the cycle then repeating. Only now, they don't all animate at the same time. The closer they are to the middle, the sooner they start their animation, those at the very ends of the row being one full cycle behind those in the very middle.
The staggered looping animation.

For a visual comparison between the two options, you can rerun the following demo to see what happens at the very beginning.

A fancier example would be the following:

Here, each and every one of the --n navigation links and corresponding recipe articles have an index --idx. Whenever a navigation link is hovered or focused, its --idx value is read and set to the current index, --k, on the body. If none of these items is hovered or focused, --k gets set to a value outside the [0, n) interval (e.g. -1).

The absolute value, --abs, of the difference between --k and a link’s index, --idx, can tell us whether that’s the currently selected (hovered or focused) item. If this absolute value is 0, then our item is the currently selected one (i.e. --not-sel is 0 and --sel is 1). If this absolute value is bigger than 0, then our item is not the currently selected one (i.e. --not-sel is 1 and --sel is 0).

Given that both --idx and --k are integers, it follows that their difference is also an integer. This means the absolute value, --abs, of this difference is either 0 (when the item is selected) or greater than or equal to 1 (when the item is not selected).

When we put all of this into code, this is what we get:

--abs: Max(var(--k) - var(--idx), var(--idx) - var(--k));
--not-sel: Min(1, var(--abs));
--sel: calc(1 - var(--not-sel));

The --sel and --not-sel properties (which are always integers that always add up to 1) determine the size of the navigation links (the width in the wide screen scenario and the height in the narrow screen scenario), whether they’re greyscaled or not and whether or not their text content is hidden. This is something we won’t get into here, as it is outside the scope of this article and I’ve already explained in a lot of detail in a previous one.
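
Without going into those details, here's a minimal hedged sketch (the dimensions are made up) of how such 0/1 flags can drive a property:

label {
  /* 4em when not selected, 12em when selected */
  width: calc(var(--not-sel)*4em + var(--sel)*12em);
}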

What is relevant here is that, when a navigation link is clicked, it slides out of sight (up in the wide screen case, and left in the narrow screen case), followed by all the others around it, each with a transition-delay that depends on how far they are from the one that was clicked (that is, on the absolute value, --abs, of the difference between their index, --idx, and the index of the currently selected item, --k), revealing the corresponding recipe article. These transition-delay values are symmetrical with respect to the currently selected item.

transition: transform 1s calc(var(--abs)*.05s);

The transition and delay are actually a bit more complex because more properties than just the transform get animated and, for transform in particular, there's an additional delay when going back from the recipe article to the navigation links because we wait for the <article> element to disappear before we let the links slide down. But what we're interested in is the component of the delay that makes the links closer to the selected one start sliding out of sight before those further away. And that's computed as above, using the --abs variable.

You can play with the interactive demo below.

Things get even more interesting in 2D, so let’s now make our row a grid!

We start by changing the structure a bit so that we have 8 columns and 8 rows (which means we have 8·8 = 64 items in total on the grid).

- let n = 8;
- let m = n*n;

style
  - for(let i = 0; i < n; i++)
    | .item:nth-child(#{n}n + #{i + 1}) { --i: #{i} }
    | .item:nth-child(n + #{n*i + 1}) { --j: #{i} }
.wrap(style=`--n: ${n}`)
  - for(let i = 0; i < m; i++)
    .item

The above Pug code compiles to the following HTML:

<style>
  .item:nth-child(8n + 1) { --i: 0 } /* items on 1st column */
  .item:nth-child(n + 1) { --j: 0 } /* items starting from 1st row */
  .item:nth-child(8n + 2) { --i: 1 } /* items on 2nd column */
  .item:nth-child(n + 9) { --j: 1 } /* items starting from 2nd row */
  /* 6 more such pairs */
</style>
<div class='wrap' style='--n: 8'>
  <div class='item'></div>
  <div class='item'></div>
  <!-- 62 more such items -->
</div>

Just like the previous case, we compute a middle index, --m, but since we’ve moved from 1D to 2D, we now have two differences in absolute value to compute, one for each of the two dimensions (one for the columns, --abs-i, and one for the rows, --abs-j).

--m: calc(.5*(var(--n) - 1));
--abs-i: max(var(--m) - var(--i), var(--i) - var(--m));
--abs-j: max(var(--m) - var(--j), var(--j) - var(--m));

We use the exact same two sets of @keyframes, but the animation-delay changes a bit, so it depends on both --abs-i and --abs-j. These absolute values can be as small as 0 (for tiles in the dead middle of the columns and rows) and as big as --m (for tiles at the ends of the columns and rows), meaning that the ratio between either of them and --m is always in the [0, 1] interval. This means the sum of these two ratios is always in the [0, 2] interval. If we want to reduce it to the [0, 1] interval, we need to divide it by 2 (or multiply by .5, same thing).

animation-delay: calc(.5*(var(--abs-i)/var(--m) + var(--abs-j)/var(--m))*#{$t});

This gives us delays that are in the [0s, $t] interval. We can take the denominator, var(--m), out of the parentheses to simplify the above formula a bit:

animation-delay: calc(.5*(var(--abs-i) + var(--abs-j))/var(--m)*#{$t});

Just like the previous case, this makes grid items start animating later the further they are from the middle of the grid. We should use animation-fill-mode: backwards to ensure they stay in the state specified by the 0% keyframes until the delay time has elapsed and they start animating.

Alternatively, we can subtract one animation duration $t from all delays to make sure all grid items have already started their animation when the page loads.

animation-delay: calc((.5*(var(--abs-i) + var(--abs-j))/var(--m) - 1)*#{$t});

This gives us the following result:

Animated gif. Shows an 8x8 grid of tiles, each of them growing from nothing to full size, then melting from the inside until they disappear, with the cycle then repeating. The smaller the sum of their distances to the middle is, the sooner they start their animation, those at the very corners of the grid being one full cycle behind those in the very middle.
The staggered 2D animation (live demo).

Let’s now see a few more interesting examples. We won’t be going into details about the “how” behind them as the symmetrical value technique works exactly the same as for the previous ones and the rest is outside the scope of this article. However, there is a link to a CodePen demo in the caption for each of the examples below, and most of these Pens also come with a recording that shows me coding them from scratch.

In the first example, each grid item is made up of two triangles that shrink down to nothing at opposite ends of the diagonal they meet along and then grow back to full size. Since this is an alternating animation, we let the delays stretch across two iterations (a normal one and a reversed one), which means we don't divide the sum of ratios in half anymore, and we subtract 2 to ensure every item has a negative delay.

animation: s $t ease-in-out infinite alternate;
animation-delay: calc(((var(--abs-i) + var(--abs-j))/var(--m) - 2)*#{$t});

In the second example, each grid item has a gradient at an angle that animates from 0deg to 1turn. This is possible via Houdini as explained in this article about the state of animating gradients with CSS.

The third example is very similar, except the animated angle is used by a conic-gradient instead of a linear one and also by the hue of the first stop.

In the fourth example, each grid cell contains seven rainbow dots that oscillate up and down. The oscillation delay has a component that depends on the cell indices in the exact same manner as the previous grids (the only difference here is that the number of columns doesn't match the number of rows, so we need to compute two middle indices, one along each of the two dimensions) and a component that depends on the dot index, --idx, relative to the number of dots per cell, --n-dots.

--k: calc(var(--idx)/var(--n-dots));
--mi: calc(.5*(var(--n-cols) - 1));
--abs-i: max(var(--mi) - var(--i), var(--i) - var(--mi));
--mj: calc(.5*(var(--n-rows) - 1));
--abs-j: max(var(--mj) - var(--j), var(--j) - var(--mj));
animation-delay: 
  calc((var(--abs-i)/var(--mi) + var(--abs-j)/var(--mj) + var(--k) - 3)*#{$t});

In the fifth example, the tiles making up the cube faces shrink and move inwards. The animation-delay for the top face is computed exactly as in our first 2D demo.

In the sixth example, we have a grid of columns oscillating up and down.

The animation-delay isn’t the only property we can set to have symmetrical values. We can also do this with the items’ dimensions. In the seventh example below, the tiles are distributed around half a dozen rings starting from the vertical (y) axis and are scaled using a factor that depends on how far they are from the top point of the rings. This is basically the 1D case with the axis curved on a circle.

The eighth example shows ten arms of baubles that wrap around a big sphere. The size of these baubles depends on how far they are from the poles, the closest ones being the smallest. This is done by computing the middle index, --m, for the dots on an arm and the absolute value, --abs, of the difference between it and the current bauble index, --j, then using the ratio between this absolute value and the middle index to get the sizing factor, --f, which we then use when setting the padding.

--m: calc(.5*(var(--n-dots) - 1));
--abs: max(var(--m) - var(--j), var(--j) - var(--m));
--f: calc(1.05 - var(--abs)/var(--m));
padding: calc(var(--f)*#{$r});

Different styles for items before and after a certain (selected or middle) one

Let’s say we have a bunch of radio buttons and labels, with the labels having an index set as a custom property, --i. We want the labels before the selected item to have a green background, the label of the selected item to have a blue background and the rest of the labels to be grey. On the body, we set the index of the currently selected option as another custom property, --k.

- let n = 8;
- let k = Math.round((n - 1)*Math.random());

body(style=`--k: ${k}`)
  - for(let i = 0; i < n; i++)
    - let id = `r${i}`;
    input(type='radio' name='r' id=id checked=i===k)
    label(for=id style=`--i: ${i}`) Option ##{i}

This compiles to the following HTML:

<body style='--k: 1'>
  <input type='radio' name='r' id='r0'/>
  <label for='r0' style='--i: 0'>Option #0</label>
  <input type='radio' name='r' id='r1' checked='checked'/>
  <label for='r1' style='--i: 1'>Option #1</label>
  <input type='radio' name='r' id='r2'/>
  <label for='r2' style='--i: 2'>Option #2</label>
  <!-- more options -->
</body>

We set a few layout and prettifying styles, including a gradient background on the labels that creates three vertical stripes, each occupying a third of the background-size (which, for now, is just the default 100%, the full element width):

$c: #6daa7e, #335f7c, #6a6d6b;

body {
  display: grid;
  grid-gap: .25em 0;
  grid-template-columns: repeat(2, max-content);
  align-items: center;
  font: 1.25em/ 1.5 ubuntu, trebuchet ms, sans-serif;
}

label {
  padding: 0 .25em;
  background: 
    linear-gradient(90deg, 
      nth($c, 1) 33.333%, 
      nth($c, 2) 0 66.667%, 
      nth($c, 3) 0);
  color: #fff;
  cursor: pointer;
}
Screenshot showing radio inputs and their labels on two grid columns. The labels have a vertical three stripe background with the first stripe being green, the second one blue and the last one grey.
The result so far.

From the JavaScript, we update the value of --k whenever we select a different option:

addEventListener('change', e => {
  let _t = e.target;
	
  document.body.style.setProperty('--k', +_t.id.replace('r', ''))
})

Now comes the interesting part! For our label elements, we compute the sign, --sgn, of the difference between the label index, --i, and the index of the currently selected option, --k. We then use this --sgn value to compute the background-position when the background-size is set to 300% — that is, three times the label's width, because we have three possible backgrounds: one for the case when the label is for an option before the selected one, a second for the case when the label is for the selected option, and a third for the case when the label is for an option after the selected one.

--sgn: clamp(-1, var(--i) - var(--k), 1);
background: 
  linear-gradient(90deg, 
      nth($c, 1) 33.333%, 
      nth($c, 2) 0 66.667%, 
      nth($c, 3) 0) 
    calc(50%*(1 + var(--sgn)))/ 300%

If --i is smaller than --k (the case of a label for an option before the selected one), then --sgn is -1 and the background-position computes to 50%*(1 + -1) = 50%*0 = 0%, meaning we only see the first vertical stripe (the green one).

If --i is equal to --k (the case of the label for the selected option), then --sgn is 0 and the background-position computes to 50%*(1 + 0) = 50%*1 = 50%, so we only see the vertical stripe in the middle (the blue one).

If --i is greater than --k (the case of a label for an option after the selected one), then --sgn is 1 and the background-position computes to 50%*(1 + 1) = 50%*2 = 100%, meaning we only see the last vertical stripe (the grey one).

A more aesthetically appealing example would be the following navigation where the vertical bar is on the side closest to the selected option and, for the selected one, it spreads across the entire element.

This uses a structure that’s similar to that of the previous demo, with radio inputs and labels for the navigation items. The moving “background” is actually an ::after pseudo-element whose translation value depends on the sign, --sgn. The text is a ::before pseudo-element whose position is supposed to be in the middle of the white area, so its translation value also depends on --sgn.

/* relevant styles */
label {
  --sgn: clamp(-1, var(--k) - var(--i), 1);
  
  &::before {
    transform: translate(calc(var(--sgn)*-.5*#{$pad}))
  }
  &::after {
    transform: translate(calc(var(--sgn)*(100% - #{$pad})))
  }
}

Let’s now quickly look at a few more demos where computing the sign (and maybe the absolute value as well) comes in handy.

First up, we have a square grid of cells with a radial-gradient whose radius shrinks from covering the entire cell to nothing. This animation has a delay computed as explained in the previous section. What’s new here is that the coordinates of the radial-gradient circle depend on where the cell is positioned with respect to the middle of the grid — that is, on the signs of the differences between the column --i and row --j indices and the middle index, --m.

/* relevant CSS */
$t: 2s;

@property --p {
  syntax: '<length-percentage>';
  initial-value: -1px;
  inherits: false;
}

.cell {
  --m: calc(.5*(var(--n) - 1));
  --dif-i: calc(var(--m) - var(--i));
  --abs-i: max(var(--dif-i), -1*var(--dif-i));
  --sgn-i: clamp(-1, var(--dif-i)/.5, 1);
  --dif-j: calc(var(--m) - var(--j));
  --abs-j: max(var(--dif-j), -1*var(--dif-j));
  --sgn-j: clamp(-1, var(--dif-j)/.5, 1);
  background: 
    radial-gradient(circle
      at calc(50% + 50%*var(--sgn-i)) calc(50% + 50%*var(--sgn-j)), 
      currentcolor var(--p), transparent calc(var(--p) + 1px))
      nth($c, 2);
  animation-delay: 
    calc((.5*(var(--abs-i) + var(--abs-j))/var(--m) - 1)*#{$t});
}

@keyframes p { 0% { --p: 100%; } }

Then we have a double spiral of tiny spheres where both the sphere diameter, --d, and the radial distance, --x, that contributes to determining the sphere position depend on the absolute value, --abs, of the difference between each sphere's index, --i, and the middle index, --m. The sign, --sgn, of this difference is used to determine the spiral rotation direction. This depends on where each sphere is with respect to the middle – that is, whether its index, --i, is smaller or bigger than the middle index, --m.

/* relevant styles */
--m: calc(.5*(var(--p) - 1));
--abs: max(calc(var(--m) - var(--i)), calc(var(--i) - var(--m)));
--sgn: clamp(-1, var(--i) - var(--m), 1);
--d: calc(3px + var(--abs)/var(--p)*#{$d}); /* sphere diameter */
--a: calc(var(--k)*1turn/var(--n-dot)); /* angle used to determine sphere position */
--x: calc(var(--abs)*2*#{$d}/var(--n-dot)); /* how far from spiral axis */
--z: calc((var(--i) - var(--m))*2*#{$d}/var(--n-dot)); /* position with respect to screen plane */
width: var(--d); height: var(--d);
transform: 
  /* change rotation direction by changing x axis direction */
  scalex(var(--sgn)) 
  rotate(var(--a)) 
  translate3d(var(--x), 0, var(--z)) 
  /* reverse rotation so the sphere is always seen from the front */
  rotate(calc(-1*var(--a)))
  /* reverse scaling so lighting on sphere looks consistent */
  scalex(var(--sgn))

Finally, we have a grid of non-square boxes with a border. These boxes have a mask created using a conic-gradient with an animated start angle, --ang. Whether these boxes are flipped horizontally or vertically depends on where they are with respect to the middle – that is, on the signs of the differences between the column --i and row --j indices and the middle index, --m. The animation-delay depends on the absolute values of these differences and is computed as explained in the previous section. We also have a gooey filter for a nicer “wormy” look, but we won’t be going into that here.

/* relevant CSS */
$t: 1s;

@property --ang {
  syntax: '<angle>';
  initial-value: 0deg;
  inherits: false;
}

.box {
  --m: calc(.5*(var(--n) - 1));
  --dif-i: calc(var(--i) - var(--m));
  --dif-j: calc(var(--j) - var(--m));
  --abs-i: max(var(--dif-i), -1*var(--dif-i));
  --abs-j: max(var(--dif-j), -1*var(--dif-j));
  --sgn-i: clamp(-1, 2*var(--dif-i), 1);
  --sgn-j: clamp(-1, 2*var(--dif-j), 1);
  transform: scale(var(--sgn-i), var(--sgn-j));
  mask:
    repeating-conic-gradient(from var(--ang, 0deg), 
        red 0% 12.5%, transparent 0% 50%);
  animation: ang $t ease-in-out infinite;
  animation-delay: 
    calc(((var(--abs-i) + var(--abs-j))/var(--n) - 1)*#{$t});
}

@keyframes ang { to { --ang: .5turn; } }

Time (and not only) formatting

Let’s say we have an element for which we store a number of seconds in a custom property, --val, and we want to display this in a mm:ss format, for example.

We use the floor of the ratio between --val and 60 (the number of seconds in a minute) to get the number of minutes and modulo for the number of seconds past that number of minutes. Then we use a clever little counter trick to display the formatted time in a pseudo-element.

@property --min {
  syntax: '<integer>';
  initial-value: 0;
  inherits: false;
}

code {
  --min: calc(var(--val)/60 - .5);
  --sec: calc(var(--val) - var(--min)*60);
  counter-reset: min var(--min) sec var(--sec);
  
  &::after {
    /* so we get the time formatted as 02:09 */
    content: 
      counter(min, decimal-leading-zero) ':' 
      counter(sec, decimal-leading-zero);
  }
}

This works in most situations, but we encounter a problem when --val is exactly 0. In this case, 0/60 is 0 and subtracting .5 gives us -.5, which gets rounded away from zero to the adjacent integer that's bigger in absolute value. That is, -1, not 0! This means our result ends up being -01:60, not 00:00!

Fortunately, we have a simple fix and that’s to slightly alter the formula for getting the number of minutes, --min:

--min: max(0, var(--val)/60 - .5);

There are other formatting options too, as illustrated below:

/* shows time formatted as 2:09 */
content: counter(min) ':' counter(sec, decimal-leading-zero);

/* shows time formatted as 2m9s */
content: counter(min) 'm' counter(sec) 's';

We can also apply the same technique to format the time as hh:mm:ss (live test).

@property --hrs {
  syntax: '<integer>';
  initial-value: 0;
  inherits: false;
}

@property --min {
  syntax: '<integer>';
  initial-value: 0;
  inherits: false;
}

code {
  --hrs: max(0, var(--val)/3600 - .5);
  --mod: calc(var(--val) - var(--hrs)*3600);
  --min: max(0, var(--mod)/60 - .5);
  --sec: calc(var(--mod) - var(--min)*60);
  counter-reset: hrs var(--hrs) min var(--min) sec var(--sec);
  
  &::after {
    /* so we get the time formatted as 00:02:09 */
    content: 
      counter(hrs, decimal-leading-zero) ':' 
      counter(min, decimal-leading-zero) ':' 
      counter(sec, decimal-leading-zero);
  }
}

This is a technique I’ve used for styling the output of native range sliders such as the one below.

Screenshot showing a styled slider with a tooltip above the thumb indicating the elapsed time formatted as mm:ss. On the right of the slider, there's the remaining time formatted as -mm:ss.
Styled range input indicating time (live demo)

Time isn’t the only thing we can use this for. Counter values have to be integer values, which means the modulo trick also comes in handy for displaying decimals, as in the second slider seen below.
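
As a rough idea of how that can work, here's a hedged sketch; the --val custom property, the .output hook, and the single decimal digit (for non-negative values) are all assumptions:

@property --int {
  syntax: '<integer>';
  initial-value: 0;
  inherits: false;
}

@property --dec {
  syntax: '<integer>';
  initial-value: 0;
  inherits: false;
}

.output {
  --int: max(0, var(--val) - .5); /* floor of --val, e.g. 2.5 → 2 */
  --dec: calc(10*(var(--val) - var(--int))); /* first decimal digit, e.g. 5 */
  counter-reset: int var(--int) dec var(--dec);
  
  &::after { content: counter(int) '.' counter(dec); } /* shows 2.5 */
}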

Screenshot showing three styled sliders, with the second one having a tooltip above the thumb indicating the decimal value.
Styled range inputs, one of which has a decimal output (live demo)

A couple more such examples:

Screenshot showing multiple styled sliders with the third one being focused and showing a tooltip above the thumb indicating the decimal value.
Styled range inputs, one of which has a decimal output (live demo)
Screenshot showing two styled sliders with the second one being focused and showing a tooltip above the thumb indicating the decimal value.
Styled range inputs, one of which has a decimal output (live demo)

Even more use cases

Let’s say we have a volume slider with an icon at each end. Depending on the direction we move the slider’s thumb in, one of the two icons gets highlighted. This is possible by getting the absolute value, --abs, of the difference between each icon’s sign, --sgn-ico (-1 for the one before the slider, and 1 for the one after the slider), and the sign of the difference, --sgn-dir, between the slider’s current value, --val, and its previous value, --prv. If this is 0, then we’re moving in the direction of the current icon so we set its opacity to 1. Otherwise, we’re moving away from the current icon, so we keep its opacity at .15.

This means that, whenever the range input’s value changes, not only do we need to update its current value, --val, on its parent, but we need to update its previous value, which is another custom property, --prv, on the same parent wrapper:

addEventListener('input', e => {
  let _t = e.target, _p = _t.parentNode;
	
  _p.style.setProperty('--prv', +_p.style.getPropertyValue('--val'))
  _p.style.setProperty('--val', +_t.value)
})

The sign of their difference is the sign of the direction, --sgn-dir, we’re going in and the current icon is highlighted if its sign, --sgn-ico, and the sign of the direction we’re going in, --sgn-dir, coincide. That is, if the absolute value, --abs, of their difference is 0 and, at the same time, the parent wrapper is selected (it’s either being hovered or the range input in it has focus).

[role='group'] {
  --dir: calc(var(--val) - var(--prv));
  --sgn-dir: clamp(-1, var(--dir), 1);
  --sel: 0; /* is the slider focused or hovered? Yes 1/ No 0 */
  
  &:hover, &:focus-within { --sel: 1; }
}

.ico {
  --abs: max(var(--sgn-dir) - var(--sgn-ico), var(--sgn-ico) - var(--sgn-dir));
  --hlg: calc(var(--sel)*(1 - min(1, var(--abs)))); /* highlight current icon? Yes 1/ No 0 */
  opacity: calc(1 - .85*(1 - var(--hlg)));
}

Another use case is making property values of items on a grid depend on the parity of the sum of horizontal --abs-i and vertical --abs-j distances from the middle, --m. For example, let’s say we do this for the background-color:

@property --floor {
  syntax: '<integer>';
  initial-value: 0;
  inherits: false;
}

.cell {
  --m: calc(.5*(var(--n) - 1));
  --abs-i: max(var(--m) - var(--i), var(--i) - var(--m));
  --abs-j: max(var(--m) - var(--j), var(--j) - var(--m));
  --sum: calc(var(--abs-i) + var(--abs-j));
  --floor: max(0, var(--sum)/2 - .5);
  --mod: calc(var(--sum) - var(--floor)*2);
  background: hsl(calc(90 + var(--mod)*180), 50%, 65%);
}
Screenshot showing a 16x16 grid where each tile is either lime or purple.
Background depending on parity of sum of horizontal and vertical distances to the middle (live demo)

We can spice things up by using the modulo 2 of the floor of the sum divided by 2:

@property --floor {
  syntax: '<integer>';
  initial-value: 0;
  inherits: false;
}

@property --int {
  syntax: '<integer>';
  initial-value: 0;
  inherits: false;
}

.cell {
  --m: calc(.5*(var(--n) - 1));
  --abs-i: max(var(--m) - var(--i), var(--i) - var(--m));
  --abs-j: max(var(--m) - var(--j), var(--j) - var(--m));
  --sum: calc(var(--abs-i) + var(--abs-j));
  --floor: max(0, var(--sum)/2 - .5);
  --int: max(0, var(--floor)/2 - .5);
  --mod: calc(var(--floor) - var(--int)*2);
  background: hsl(calc(90 + var(--mod)*180), 50%, 65%);
}
Screenshot showing a 16x16 grid where each tile is either lime or purple.
A more interesting variation of the previous demo (live demo)

We could also make both the direction of a rotation and that of a conic-gradient() depend on the same parity of the sum, --sum, of horizontal --abs-i and vertical --abs-j distances from the middle, --m. This is achieved by horizontally flipping the element if the sum, --sum, is even. In the example below, the rotation and size are also animated via Houdini (they both depend on a custom property, --f, which we register and then animate from 0 to 1), and so are the worm hue, --hue, and the conic-gradient() mask, both animations having a delay computed exactly as in previous examples.

@property --floor {
  syntax: '<integer>';
  initial-value: 0;
  inherits: false;
}

.🐛 {
  --m: calc(.5*(var(--n) - 1));
  --abs-i: max(var(--m) - var(--i), var(--i) - var(--m));
  --abs-j: max(var(--m) - var(--j), var(--j) - var(--m));
  --sum: calc(var(--abs-i) + var(--abs-j));
  --floor: calc(var(--sum)/2 - .5);
  --mod: calc(var(--sum) - var(--floor)*2);
  --sgn: calc(2*var(--mod) - 1); /* -1 if --mod is 0; 1 if --mod is 1 */
  transform: 
    scalex(var(--sgn)) 
    scale(var(--f)) 
    rotate(calc(var(--f)*180deg));
  --hue: calc(var(--sgn)*var(--f)*360);
}

Finally, another big use case for the techniques explained so far is shading not just convex, but also concave animated 3D shapes using absolutely no JavaScript! This is one topic that’s absolutely massive on its own and explaining everything would take an article as long as this one, so I won’t be going into it at all here. But I have made a few videos where I code a couple of such basic pure CSS 3D shapes (including a wooden star and a differently shaped metallic one) from scratch and you can, of course, also check out the CSS for the following example on CodePen.






How Selenium 4 Relative Locator Can Change The Way You Test



Web pages can consist of any number of web elements or GUI elements: radio buttons, text boxes, drop-downs, inputs, and so on. Web locators, in the context of Selenium automation testing, are used to perform different actions on the web elements of a page. It's no surprise, then, that the first thing we aim to learn as new Selenium users is Selenium locators.

These locators are the bread and butter of any Selenium automation testing framework, no matter the type of testing you are doing, from unit testing to end-to-end, automated, cross-browser testing. There are many types of locators, such as CSS Selector, XPath, Link Text, ID, etc. So far, you get eight types of locators in Selenium. This number, however, is going to change with the new Selenium 4 release. Wondering why?

Well, with Selenium 3.0, each element is accessed separately, as there is no way to access a web element relative to nearby elements. This is where the new locator in Selenium 4 (Alpha) can be instrumental; the new locator methods allow you to find nearby elements based on their visual location relative to other elements in the DOM.

Yep!! You heard it right. Selenium 4 will bring out a new locator that has been in the works for quite some time, called the Relative Locator. In this post, we are going to do a deep dive into how you can use the new Selenium 4 Relative Locator in your daily automation testing.

We covered the features that you can expect from Selenium 4 in our previous post, and in that post, we mentioned that we would go into more detail on the new features. Well, here it is.

Downloading Selenium 4 (Alpha)

Indisputably the most used web automation testing framework, Selenium is widely used for end-to-end testing, with a special set of features that provide unparalleled automated cross browser testing capabilities. However, the last major release, Selenium 3.0, came nearly three years ago, in October 2016. Though there is no release date as of yet, and Selenium 4 is not formally released, you can get a sneak peek through Selenium 4's Alpha release.

To start, you have to download Selenium 4 Alpha from the Maven repository. At the time of covering the Selenium 4 relative locator functionality as part of this article, the latest version was 4.0.0-alpha-3. As this is an Alpha release of Selenium, we recommend switching back to the stable version, i.e. 3.141.XX, if you do not want to take any risks with your production test suite while you validate with Selenium automation testing.


Selenium 4 Relative Locator – Methods

As of now, the Selenium 4 relative locator methods support usage with the withTagName method. The following are the 'relative locator' options that can be used in Selenium automation testing; a hedged usage sketch for near, which the later examples don't cover, follows the table:

above: the web element to be searched/located appears above the specified element.
below: the web element to be searched/located appears below the specified element.
toLeftOf: the web element to be searched/located appears to the left of the specified element.
toRightOf: the web element to be searched/located appears to the right of the specified element.
near: the web element to be searched/located is at most 50 pixels away from the specified element.
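
Since near doesn't show up in the examples later in this article, here's a hedged sketch of its usage; the driver setup and the "search-icon" element id are made up for illustration:

// Locate an <input> that sits within 50 pixels of the element
// with id "search-icon" (both names are assumed, not from a real page).
WebElement searchBox = driver.findElement(withTagName("input")
        .near(By.id("search-icon")));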

Here is a screenshot of the implementation that highlights the usage of relative locators in Selenium automation testing (Source).

Selenium 4 Locator

If you are wondering how Selenium does it: well, it does so with the help of a JavaScript method called getBoundingClientRect(). This method allows Selenium to locate the elements using the relative locators for Selenium testing.
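
To peek at the same geometry yourself, you can call that DOM API through Selenium's JavascriptExecutor (org.openqa.selenium.JavascriptExecutor). This is only a hedged illustration of the data involved, not how Selenium invokes it internally; someElement stands for any previously located WebElement:

// Read the bounding box of a previously located element, i.e. the
// position/size information the relative locators reason about.
JavascriptExecutor js = (JavascriptExecutor) driver;
Object rect = js.executeScript(
        "return arguments[0].getBoundingClientRect().toJSON();", someElement);
System.out.println(rect); // e.g. {x=..., y=..., width=..., height=...}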

Selenium 4 Relative Locator – Usage

The methods for relative locators in Selenium 4 are overloaded and can take a relative WebElement or a By locator as an argument. Shown below is sample usage of the relative locators for Selenium automation testing using both options:

WebElement txt_label = driver.findElement(By.cssSelector("label[id='uname']"));
WebElement txt_input = driver.findElement(withTagName("input").toRightOf(txt_label));
WebElement txt_name = driver.findElement(withTagName("input").toLeftOf(By.id("some_button")));


Execute Selenium Automation Testing With Relative Locator

Let's get into action with the new Selenium 4 Relative Locator to perform automated cross browser testing. I am going to perform a trial run of Selenium 4 (Alpha) along with the local Chrome WebDriver. But before that, I am going to create a Maven project for implementation and testing. I will be using the TestNG framework, as it can be easily integrated with Maven and because its built-in annotations (e.g. @BeforeClass, @AfterClass, @Test, etc.) offer more clarity on the automation tests being triggered.

For both of the tests demonstrated below, the Project Object Model (pom.xml) file for the Maven project should be updated with the project configuration, including Selenium 4 (Alpha):


<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>Group-Name</groupId>
  <artifactId>Artifact-Name</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <configuration>
          <source>10</source>
          <target>10</target>
        </configuration>
      </plugin>
    </plugins>
  </build>
  <dependencies>
    <dependency>
      <groupId>org.seleniumhq.selenium</groupId>
      <artifactId>selenium-java</artifactId>
      <version>4.0.0-alpha-3</version>
    </dependency>
    <dependency>
      <groupId>org.testng</groupId>
      <artifactId>testng</artifactId>
      <version>7.0.0</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>io.github.bonigarcia</groupId>
      <artifactId>webdrivermanager</artifactId>
      <version>3.0.0</version>
      <scope>compile</scope>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.slf4j/slf4j-nop -->
    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-nop</artifactId>
      <version>1.7.28</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>


Example 1 For Selenium 4 Relative Locators

In the first example that demonstrates the usage of Selenium 4 relative locators, the intent is to automate the login to LambdaTest. As the test is performed on the Chrome browser, you should ensure that the Chrome WebDriver is available on the machine.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;
import static org.openqa.selenium.support.locators.RelativeLocator.withTagName;
import java.util.concurrent.TimeUnit;

public class MavenRelocators {
    private WebDriver driver;

    @BeforeClass
    public void setUp() {
        System.setProperty("webdriver.chrome.driver", "C:\\location-of-chromedriver.exe");
        driver = new ChromeDriver();
        driver.get("https://accounts.lambdatest.com/login");
        driver.manage().window().maximize();
        driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
    }

    @AfterClass
    public void tearDown() throws Exception {
        if (driver != null) {
            driver.quit();
        }
    }

    @Test
    public void test_login_using_relative_locators_1() {
        // Find the label element above the login text box
        WebElement heightLabel = driver.findElement(By.xpath("//*[@id='app']/section/form/div/div/h1"));
        // Locate the textbox where the username should be inputted
        WebElement heightUserBox = driver.findElement(withTagName("input")
                .below(heightLabel));
        heightUserBox.sendKeys("user-name");
        // Locate the textbox where the password should be inputted
        WebElement heightPasswordBox = driver.findElement(withTagName("input")
                .below(heightUserBox));
        heightPasswordBox.sendKeys("password");
        // Locate the submit button
        WebElement submitbutton = driver.findElement(By.xpath("//*[@id='app']/section/form/div/div/button"));
        submitbutton.click();
        // Wait for 10 seconds to observe the output
        driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
    }
}


To find the input field where the username, i.e. the email address, has to be entered, we first locate the label that is above the input box using the By.xpath method. To get the details of the web element, i.e. the XPath in this case, you should make use of the Inspect option in the Chrome browser.

Code Walkthrough:

WebElement heightUserBox = driver.findElement(withTagName("input")
        .below(heightLabel));

As seen in the above statement, the argument passed to the findElement method is built with withTagName. On successful execution, withTagName returns a RelativeLocator.RelativeBy object, and the lookup is performed relative to the WebElement heightLabel.

We use the located element to find the field where the username has to be inputted. As the input element (for the user name) is right below the label, we make use of the below option along with the withTagName() method.

WebElement heightLabel = driver.findElement(By.xpath("//*[@id='app']/section/form/div/div/h1"));
// Locate the textbox where the username should be inputted
WebElement heightUserBox = driver.findElement(withTagName("input")
        .below(heightLabel));
heightUserBox.sendKeys("user-name");


The web element located below the email input box is the password input box. As the relative location of the email input box is already known, the below option is used to locate the password input box.

LambdaTest login page
WebElement heightPasswordBox = driver.findElement(withTagName("input")
        .below(heightUserBox));
heightPasswordBox.sendKeys("password");


To execute the test, right-click on the project and select the option ‘Run As -> TestNG Test’.

Example 2 for Selenium 4 Relative Locators

In this example, as we demonstrate the usage of Selenium 4 relative locators, the intent is to add a new entry in the LambdaTest Sample App. The class comprises three tests: two where the sought-after web element is located and verified via its attribute (name/id), and a third that adds the new entry.

package RelativeLocators;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;
import static org.openqa.selenium.support.locators.RelativeLocator.withTagName;
import static org.testng.Assert.assertEquals;
import java.util.concurrent.TimeUnit;

public class RelativeLocators {
    private WebDriver driver;

    @BeforeClass
    public void setUp() {
        System.setProperty("webdriver.chrome.driver", "C:\\Location-To\\chromedriver.exe");
        driver = new ChromeDriver();
        driver.get("https://4dvanceboy.github.io/lambdatest/lambdasampleapp.html");
        driver.manage().window().maximize();
        driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
    }

    @AfterClass
    public void tearDown() throws Exception {
        if (driver != null) {
            driver.quit();
        }
    }

    @Test
    public void test_login_using_relative_locators_1() {
        String name = driver.findElement(withTagName("input")
                .above(By.name("li5"))
                .below(By.name("li3")))
                .getAttribute("name");
        assertEquals(name, "li4");
        driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
    }

    @Test
    public void test_login_using_relative_locators_2() {
        String txt_name = driver.findElement(withTagName("input")
                .toLeftOf(By.id("addbutton"))
                .below(By.name("li5")))
                .getAttribute("id");
        assertEquals(txt_name, "sampletodotext");
        driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
    }

    @Test
    public void test_login_using_relative_locators_3() {
        WebElement txt_name = driver.findElement(withTagName("input")
                .toLeftOf(By.id("addbutton"))
                .below(By.name("li5")));
        txt_name.sendKeys("Relative locators test");
        // Get details of the Submit/Add button
        WebElement submitbutton = driver.findElement(By.xpath("//*[@id='addbutton']"));
        // Submit the new entry
        submitbutton.click();
        driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
    }
}


Let us decode the above example, which comprises three different tests. Before we jump into the details of any test, it is important that we have a look at the DOM snippet for the app.

<ul class="list-unstyled">
  <!-- ngRepeat: sampletodo in sampleList.sampletodos -->
  <li ng-repeat="sampletodo in sampleList.sampletodos" class="ng-scope">
    <input type="checkbox" ng-model="sampletodo.done" name="li1" class="ng-pristine ng-untouched ng-valid">
    <span class="done-false">First Item</span>
  </li>
  <li ng-repeat="sampletodo in sampleList.sampletodos" class="ng-scope">
    <input type="checkbox" ng-model="sampletodo.done" name="li2" class="ng-pristine ng-untouched ng-valid">
    <span class="done-false">Second Item</span>
  </li>
  <li ng-repeat="sampletodo in sampleList.sampletodos" class="ng-scope">
    <input type="checkbox" ng-model="sampletodo.done" name="li3" class="ng-pristine ng-untouched ng-valid">
    <span class="done-false">Third Item</span>
  </li>
  <li ng-repeat="sampletodo in sampleList.sampletodos" class="ng-scope">
    <input type="checkbox" ng-model="sampletodo.done" name="li4" class="ng-pristine ng-untouched ng-valid">
    <span class="done-false">Fourth Item</span>
  </li>
  <li ng-repeat="sampletodo in sampleList.sampletodos" class="ng-scope">
    <input type="checkbox" ng-model="sampletodo.done" name="li5" class="ng-pristine ng-untouched ng-valid">
    <span class="done-false">Fifth Item</span>
  </li>
</ul>


The fifth item is represented in the DOM by the name li5 and the third by the name li3.

Sub-test 1 — In the first test, the element with the name li4 has to be located, and an assert is raised in case there is an error. The findElement method is called with the withTagName method, and the tag name is input. As seen from the DOM tree and the Inspect screenshot below, each checkbox is of the input type, with a name corresponding to the list option, i.e. li1, li2, li3, etc.

Checking multiple elements in DOM

The input web element with name li4 (Fourth Item) is above li3 (Third Item) and below li5 (Fifth Item). Hence, we specify both of these as a part of the test.

@Test
public void test_login_using_relative_locators_1() {
    String name = driver.findElement(withTagName("input")
            .above(By.name("li5"))
            .below(By.name("li3")))
            .getAttribute("name");
    assertEquals(name, "li4");
    driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
}


Sub-test 2 — In this test, the input element with the id sampletodotext has to be located. This element is of the input type and located to the left of the Add button (i.e. id = addbutton) and below the element with the name li5 (Fifth Item).

Subtest example
@Test
public void test_login_using_relative_locators_2() {
    String txt_name = driver.findElement(withTagName("input")
            .toLeftOf(By.id("addbutton"))
            .below(By.name("li5")))
            .getAttribute("id");
    assertEquals(txt_name, "sampletodotext");
    driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
}


An assert is raised if the id of the element (i.e. txt_name) does not match the expected value (i.e. sampletodotext).

Sub-test 3 — This test is a logical extension of sub-test 2, where a new item/option has to be added to the LambdaTest Sample App.

To do so, the input WebElement to the left of the element with id = addbutton [.toLeftOf(By.id("addbutton"))] and below the element with name = li5 [.below(By.name("li5"))] has to be located.


As the input element is a textbox, the sendKeys method is used to enter values into the textbox, i.e. id = sampletodotext. The new option is added to the list by clicking the Add button on the page.

@Test
public void test_login_using_relative_locators_3() {
    WebElement txt_name = driver.findElement(withTagName("input")
            .toLeftOf(By.id("addbutton"))
            .below(By.name("li5")));
    txt_name.sendKeys("Relative locators test");
    // Get details of the Submit/Add button
    WebElement submitbutton = driver.findElement(By.xpath("//*[@id='addbutton']"));
    // Submit the new entry
    submitbutton.click();
    driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
}


Similar to the first example, this project has to be executed as a TestNG test. Shown below is the output screenshot, where we can see that the last option, i.e. Relative locators test, has been added to the list.

We are sure that by now you have a good grip on the Selenium 4 relative locator for Selenium automation testing. As this is Selenium 4's Alpha release, you might need to wait some more before support for relative locators starts rolling out for other programming languages like Python, C#, etc.

What’s Your Opinion On The New Locator?

The Relative Locator in Selenium 4 is an interesting advancement that lets developers access nearby web elements with fewer lines of implementation. As this is an Alpha release, the features may change in further releases. It is important to note that the Selenium 4 Relative Locator methods, i.e. above, below, toLeftOf, toRightOf, and near, do not work with overlapping elements.

If you are using Java with Selenium for automated cross-browser testing, you should definitely give Selenium 4 (Alpha) a spin. Though there are tools (open-source as well as commercial) that offer features similar to the Selenium 4 Relative Locator, Selenium 4 has many more features (including improved documentation) that make it worth the wait!

So, what do you make of the new locator for Selenium automation testing? Did you find the new Selenium 4 Relative Locator handy? Are you already planning to incorporate the relative locator in your automated cross-browser testing scripts? Or do you think it could do better? Let me know your opinion in the comment section. Happy testing! 



Learnings From a WebPageTest Session on CSS-Tricks


I got together with Tim Kadlec from over at WebPageTest the other day to do a bit of performance testing on CSS-Tricks. Essentially: use the tool, poke around, and identify performance pain points to work on. You can watch the video right here on the site, or over on their Twitch channel, which is worth a subscribe for more performance investigations like these.

Web performance work is twofold:

Step 1) Measure Things & Explore Problems
Step 2) Fix it

Tim and I, through the amazing tool that is WebPageTest, did a lot of Step 1. I took notes as we poked around. We found a number of problem areas, some fairly big! Of course, after all that, I couldn’t get them out of my head, so I had to spring into action and do the Step 2 stuff as soon as I could, and I’m happy to report I’ve done most of it and seen improvement. Let’s dig in!

Identified Problem #1) Poor LCP

Largest Contentful Paint (LCP) is one of the Core Web Vitals (CWV), which everyone is carefully watching right now with Google telling us it's an SEO factor. My LCP was clocking in at 3.993s, which isn't great.

WebPageTest clearly tells you if there are problems with your CWV.

I also learned from Tim that it's ideal if the First Contentful Paint (FCP) contains the LCP. We could see through WebPageTest that this wasn't happening.

Things to fix:

  • Make sure the LCP area, which was ultimately a big image, is properly optimized, has a responsive srcset, and is CDN-hosted. All those things were failing on that particular image despite working elsewhere.
  • The LCP image had loading="lazy" on it, which we just learned isn’t a good place for that.

Fixing technique and learnings:

  • All the proper image handling stuff was in place, but for whatever reason, none of it works for .gif files, which is what that image was the day of the testing. We probably just shouldn’t use .gif files for that area anyway.
  • Turn off lazy loading of LCP image. This is a WordPress featured image, so I essentially had to do <?php the_post_thumbnail('', array('loading' => 'eager')); ?>. If it was an inline image, I’d do <img data-no-lazy="1" ... /> which tells WordPress what it needs to know.

Identified Problem #2) First Byte to Start Render gap

Tim saw this right away as a fairly obvious problem.

In the waterfall above (here’s a super detailed article on reading waterfalls from Matt Hobbs), you can see the HTML arrives in about 0.5 seconds, but the start of rendering (what people see, big green line), doesn’t start until about 2.9 seconds. That’s too dang long.

The chart also identifies the problem in a yellow line. I was linking out to a third-party CSS file, which then redirects to my own CSS files that contain custom fonts. That redirect costs time and, as we dug in, we saw it costs not just first-page-load time, but every single page load, even cached page loads.

Things to fix:

  • Eliminate the CSS file redirect.
  • Self-host fonts.

Fixing technique and learnings:

  • I’ve been eying up some new fonts anyway. I noted not long ago that I really love Mass-Driver’s licensing innovation (priced by # of employees), but I equally love MD Primer, so I bought that. For body type, I stuck with a comfortable serif with Blanco, which mercifully came with very nicely optimized RIBBI1 versions. Next time I swear I’m gonna find a variable font, but hey, you gotta follow your heart sometimes. I purchased these, and am now self-hosting the font-files.
  • Use @font-face right in my own CSS, with no redirects (a rough sketch of the setup follows this list). Also using font-display: swap;, but gotta work a bit more on that loading technique. Can’t wait for size-adjust.
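
Here's roughly what that self-hosted setup looks like; a minimal sketch, with hypothetical file names and paths rather than the site's actual ones:

/* self-hosted fonts, no third-party redirect; file names are made up */
@font-face {
  font-family: 'MD Primer';
  src: url('/fonts/md-primer-bold.woff2') format('woff2');
  font-weight: 700;
  font-display: swap; /* show fallback text right away, swap in the webfont */
}
@font-face {
  font-family: 'Blanco';
  src: url('/fonts/blanco-regular.woff2') format('woff2');
  font-weight: 400;
  font-display: swap;
}
body {
  font-family: Blanco, Georgia, serif;
}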

After re-testing with the change in place, you can see on a big article page the start render is a full 2 seconds faster on a 4G connection:

That’s a biiiiiig change. Especially as it affects cached page loads too.
See how the waterfall pulls back to the left without the CSS redirect.

Identified Problem #3) CLS on the Grid Guide is Bad

Tim had a neat trick up his sleeve for measuring Cumulative Layout Shift (CLS) on pages. You can instruct WebPageTest to scroll down the page for you. This is important for something like CLS, because layout shifting might happen on account of scrolling.

See this article about CLS and WebPageTest.

The trick is using an advanced setting to inject custom JavaScript into the page during the test:
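
The screenshot of that setting isn't reproduced here, but the injected script is along these lines; a hedged sketch, since the scroll distance and timing are whatever you configure for the test:

// Scroll the page in steps so lazy-loaded content (and any CLS it
// causes) actually gets triggered during the test. Values are illustrative.
let scrolled = 0;
const timer = setInterval(() => {
  window.scrollBy(0, 200); // scroll 200px per tick
  scrolled += 200;
  if (scrolled >= document.body.scrollHeight) clearInterval(timer);
}, 100);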

At this point, we were testing not the homepage, but purposefully a very important page: our Complete Guide to Grid. With this in place, you can see the CWV are in much worse shape:

I don’t know what to think exactly about the LCP. That’s being triggered by what happens to be the largest image pretty far down the page.

I’m not terribly worried about the LCP with the scrolling in place. That’s just some image like any other on the page, lazily loaded.

The CLS is more concerning, to me, because any shifting layout is always obnoxious to users. See all these dotted orange lines? That is CLS happening:

The orange CLS lines correlate with images loading (as the page scrolls down and the lazy loaded images come in).

Things to fix:

  • CLS is bad because of lazy loaded images coming in and shifting the layout.

Fixing technique and learnings:

  • I don’t know! All those images are inline <img loading="lazy" ...> elements. I get that lazy loading could cause CLS, but these images have proper width and height attributes, which is supposed to reserve the exact space necessary for the image (even when fluid, thanks to aspect ratio) even before it loads (see the sketch after this list). So… what gives? Is it because they are SVG?
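
For reference, this is the pattern that's supposed to reserve the space; a minimal sketch with made-up file name, dimensions, and alt text:

<!-- width/height give the browser the aspect ratio up front,
     so space is reserved before the lazy image ever loads -->
<img src="/images/grid-diagram.svg" width="800" height="450"
  loading="lazy" alt="Diagram of a CSS grid layout">

img {
  max-width: 100%;
  height: auto; /* keeps the reserved aspect ratio when fluid */
}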

If anyone does know, feel free to hit me up. Such is the nature of performance work, I find. It’s a mixture of easy wins from silly mistakes, little battles you can fight and win, bigger battles that sometimes involves outside influences that are harder to win, and mysterious unknowns that it takes time to heal. Fortunately we have tools like WebPageTest to tell us the real stories happening on our site and give us the insight we need to fight these performance battles.
