
Common Anti-Patterns in Go – DZone Web Dev


It is widely acknowledged that coding is an art, and like every artisan who crafts wonderful work and takes pride in it, we as developers are proud of the code we write. To achieve the best results, artists constantly search for ways and tools to improve their craft. Similarly, we as developers keep leveling up our skills and stay curious about the single most important question: how to write good code.

Frederick P. Brooks, in his book ‘The Mythical Man-Month: Essays on Software Engineering’, wrote:

“The programmer, like the poet, works only slightly removed from pure thought-stuff. He builds his castles in the air, from air, creating by exertion of the imagination. Few media of creation are so flexible, so easy to polish and rework, so readily capable of realizing grand conceptual structures.”

xkcd 844: Good Code (Randall Munroe)

Image source: https://xkcd.com/844/

This post tries to explore answers to the big question mark in the comic above. The simplest way to write good code is to abstain from including anti-patterns in the code we write.

What are Anti-Patterns? 

Anti-patterns occur when code is written without future considerations in mind. An anti-pattern might initially appear to be an appropriate solution to the problem, but in reality, as the codebase scales, it turns out to be obscure and adds ‘technical debt’ to the codebase.

A simple example of an anti-pattern is writing an API without considering how its consumers might use it, as explained in example 1 below. Being aware of anti-patterns, and consciously avoiding them while programming, is surely a major step towards a more readable and maintainable codebase. In this post, let’s take a look at a few commonly seen anti-patterns in Go.

1. Returning Value of Unexported Type from An Exported Function

In Go, to export any identifier we need to make sure that its name starts with an uppercase letter, which makes it visible to other packages. For example, to use the constant Pi from the math package, we write math.Pi. Using math.pi won’t work and will produce a compile error.

Names (struct fields, functions, variables) that start with a lowercase letter are unexported and are only visible inside the package they are defined in.

An exported function or method returning a value of an unexported type is frustrating to use, since callers in other packages cannot refer to that type by name to declare variables or write signatures of their own.

// Bad practice
type unexportedType string
func ExportedFunc() unexportedType {
  return unexportedType("some string")
}

// Recommended
type ExportedType string
func ExportedFunc() ExportedType {   
  return ExportedType("some string")
}

2. Unnecessary Use of Blank Identifier

In various cases, assigning a value to the blank identifier is simply unnecessary. For the blank identifier in a range clause, the Go specification states:

If the last iteration variable is the blank identifier, the range clause is equivalent to the same clause without that identifier.

// Bad practice
for _ = range sequence {
    run()
}

x, _ := someMap[key]

_ = <-ch

// Recommended
for range sequence {
    run()
}

x := someMap[key]

<-ch

3. Using Loop/Multiple appends to Concatenate Two Slices

When concatenating two slices, there is no need to iterate over one slice and append each element one by one. It is simpler and more efficient to do it in a single append statement.

As an example, the below snippet does concatenation by appending elements one by one through iterating over sliceTwo.

for _, v := range sliceTwo {
    sliceOne = append(sliceOne, v)
}

Since append is a variadic function, it can be invoked with zero or more arguments. Therefore, the above example can be rewritten much more simply, using a single call to append:

sliceOne = append(sliceOne, sliceTwo...)
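Both forms can be compared side by side in a small, self-contained program (the helper names concatLoop and concatVariadic are ours, purely for illustration):

```go
package main

import "fmt"

// concatLoop appends the elements of b to a one at a time (the anti-pattern).
func concatLoop(a, b []int) []int {
	out := append([]int(nil), a...) // copy a so the inputs are untouched
	for _, v := range b {
		out = append(out, v)
	}
	return out
}

// concatVariadic does the same work with a single variadic append (recommended).
func concatVariadic(a, b []int) []int {
	out := append([]int(nil), a...)
	return append(out, b...)
}

func main() {
	fmt.Println(concatLoop([]int{1, 2}, []int{3, 4}))     // [1 2 3 4]
	fmt.Println(concatVariadic([]int{1, 2}, []int{3, 4})) // [1 2 3 4]
}
```

Both produce the same result; the variadic form just avoids the per-element loop.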

4. Redundant Arguments in make Calls

The make function is a special built-in function used to allocate and initialize an object of type map, slice, or chan. To initialize a slice using make, we supply the type of the slice, its length, and optionally its capacity. To initialize a map using make, we may optionally pass an initial size.

make, however, already has default values for those arguments:

  • For channels, the buffer capacity defaults to zero (unbuffered).
  • For maps, the size allocated defaults to a small starting size.
  • For slices, the capacity defaults to the length if capacity is omitted.

Therefore,

ch = make(chan int, 0)
sl = make([]int, 1, 1)

can be rewritten as:

ch = make(chan int)
sl = make([]int, 1)

However, using a named constant for a channel's buffer size is not considered an anti-pattern, as it can help with debugging, arithmetic on sizes, or platform-specific code.

const c = 0
ch = make(chan int, c) // Not an anti-pattern

5. Useless return in Functions

It is not considered good practice to put a return statement as the final statement in functions that do not have a value to return.

// Useless return, not recommended
func alwaysPrintFoofoo() {
    fmt.Println("foofoo")
    return
}

// Recommended
func alwaysPrintFoo() {
    fmt.Println("foofoo")
}

Named returns should not be confused with useless returns, however. The return statement below really returns a value.

func printAndReturnFoofoo() (foofoo string) {
    foofoo = "foofoo"
    fmt.Println(foofoo)
    return
}

6. Useless break Statements in switch

In Go, switch statements do not fall through automatically. In programming languages like C, execution falls into the next case if the previous case lacks a break statement. But fallthrough in switch-case is used very rarely and mostly causes bugs, so many modern programming languages, including Go, changed this behavior to never fall through by default.

Therefore, it is not required to have a break statement as the final statement in a case block of switch statements. Both the examples below act the same.

Bad pattern:

switch s {
case 1:
    fmt.Println("case one")
    break
case 2:
    fmt.Println("case two")
}

Good pattern:

switch s {
case 1:
    fmt.Println("case one")
case 2:
    fmt.Println("case two")
}

However, for implementing fallthrough in switch statements in Go, we can use the fallthrough statement. As an example, the code snippet given below will print 23.

switch 2 {
case 1:
    fmt.Print("1")
    fallthrough
case 2:
    fmt.Print("2")
    fallthrough
case 3:
    fmt.Print("3")
}

7. Not Using Helper Functions for Common Tasks

Certain functions, for a particular set of arguments, have shorthands that can be used instead for better readability and understanding.

For example, in Go, to wait for multiple goroutines to finish, we can use a sync.WaitGroup. Instead of incrementing the counter with wg.Add(1) before starting a goroutine and then calling wg.Add(-1) when it finishes, to eventually bring the counter back to 0 and signify that all the goroutines have completed:

wg.Add(1) 
// ...some code
wg.Add(-1)

It is easier and more readable to use the wg.Done() helper, which decrements the WaitGroup counter by one and signals that this goroutine has finished, without our having to manage the arithmetic manually.

wg.Add(1)
// ...some code
wg.Done()
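Putting it together, here is a minimal runnable sketch (sumConcurrently is a name of our choosing) that waits for several goroutines using wg.Done():

```go
package main

import (
	"fmt"
	"sync"
)

// sumConcurrently fans out one goroutine per value and waits for all of
// them to finish before returning the total.
func sumConcurrently(values []int) int {
	var (
		wg  sync.WaitGroup
		mu  sync.Mutex
		sum int
	)
	for _, v := range values {
		wg.Add(1) // one unit of work started
		go func(n int) {
			defer wg.Done() // equivalent to wg.Add(-1), but clearer
			mu.Lock()
			sum += n
			mu.Unlock()
		}(v)
	}
	wg.Wait() // blocks until the counter reaches zero
	return sum
}

func main() {
	fmt.Println(sumConcurrently([]int{1, 2, 3, 4})) // 10
}
```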

8. Redundant nil Checks on Slices

The length of a nil slice evaluates to zero. Hence, there is no need to check whether a slice is nil or not, before calculating its length.

For example, the nil check below is not necessary.

if x != nil && len(x) != 0 {
    // do something
}

The above code could omit the nil check as shown below:

if len(x) != 0 {
    // do something
}
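This works because len of a nil slice is defined to be zero, which a short program (the describe helper is hypothetical) confirms:

```go
package main

import "fmt"

// describe reports whether a slice has elements. Because len of a nil
// slice is zero, a single len check covers both nil and empty slices.
func describe(x []int) string {
	if len(x) != 0 {
		return "has elements"
	}
	return "empty or nil"
}

func main() {
	var nilSlice []int
	fmt.Println(len(nilSlice))            // 0
	fmt.Println(describe(nilSlice))       // empty or nil
	fmt.Println(describe([]int{1, 2, 3})) // has elements
}
```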

9. Too Complex Function Literals

Function literals that merely call a single function are redundant: they can be removed without changing behavior, provided the inner function has the same signature. Instead, refer to the inner function directly.

For example:

fn := func(x int, y int) int { return add(x, y) }

Can be simplified as:

fn := add
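Assuming add is an ordinary function with a matching signature, a short program shows the wrapped and direct forms behave identically:

```go
package main

import "fmt"

// add is a plain function; a literal that only forwards to it is redundant.
func add(x, y int) int { return x + y }

func main() {
	wrapped := func(x int, y int) int { return add(x, y) } // too complex
	direct := add                                          // simplified

	fmt.Println(wrapped(2, 3)) // 5
	fmt.Println(direct(2, 3))  // 5
}
```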

10. Using select Statement With a Single Case

The select statement lets a goroutine wait on multiple communication operations. But if there is only a single operation/case, we don’t actually need a select statement; a plain send or receive will do. If we intend to try a send or receive without blocking, it is recommended to add a default case, which makes the select statement non-blocking.

// Bad pattern
select {
case x := <-ch:
    fmt.Println(x)
}

// Recommended
x := <-ch
fmt.Println(x)

Using default:

select {
case x := <-ch:
    fmt.Println(x)
default:
    fmt.Println("default")
}
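Wrapping the non-blocking form in a small helper (tryReceive is a hypothetical name) makes the behavior easy to see:

```go
package main

import "fmt"

// tryReceive attempts a non-blocking receive: the default case runs
// when no value is ready, so the caller never blocks.
func tryReceive(ch chan int) (int, bool) {
	select {
	case x := <-ch:
		return x, true
	default:
		return 0, false
	}
}

func main() {
	ch := make(chan int, 1)
	if _, ok := tryReceive(ch); !ok {
		fmt.Println("no value ready")
	}
	ch <- 42
	if x, ok := tryReceive(ch); ok {
		fmt.Println(x) // 42
	}
}
```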

11. context.Context Should Be the First Param of The Function

context.Context should be the first parameter of a function, typically named ctx. A ctx argument is (or should be) extremely common across a Go codebase, and placing common arguments in a consistent position, first in the list, makes them harder to forget and keeps call sites uniform. Since variadic parameters may only appear last in the argument list, the convention is to keep context.Context first. Other ecosystems have similar placement conventions; Node.js, for example, uses error-first callbacks.

// Bad practice
func badPatternFunc(k favContextKey, ctx context.Context) {    
    // do something
}

// Recommended
func goodPatternFunc(ctx context.Context, k favContextKey) {    
    // do something
}

When it comes to working in a team, reviewing other people’s code becomes important. DeepSource is an automated code review tool that manages the end-to-end code scanning process and automatically makes pull requests with fixes whenever new commits are pushed or new pull requests are opened.

Setting up DeepSource for Go is extremely easy. As soon as you have it set up, an initial scan will be performed on your entire codebase to find scope for improvements, fix them, and open pull requests for those changes.





Python or JS first ? : webdev


As I have seen in a lot of places, people encourage web dev learners to learn HTML, CSS, then JavaScript, and that’s the learning path I’m following. But as I wanted to use the

CS50’s Web Programming with Python and JavaScript course on edX, I found an interesting alternative learning path, which has Python, SQL, and Django all before JS.

So what do you advise me? To continue on the “classic learning path” or use the course’s learning path?





I have created a backend framework with Symfony 5 : webdev


Hi,

I have developed two Symfony bundles (using the Symfony 5 framework) that provide components and a theme for admin pages.

Unlike the Sonata or EasyAdmin bundles, there is no CRUD layer, and you can create your admin pages however you want.

The main component, DataTable, uses the same principles as Symfony Form (Type, Factory, Builder).

There is no documentation but you can see demo here and code on my github.

Give me your feedback!


A view with Menu and DataTable component




Extracting Reddit Data to Airtable Using Byteline’s No-Code …


Introduction

In the world of information exchange, nerds and nitwits alike love Reddit. Reddit’s popularity stems not only from being an infinite source of user-generated content for pretty much every field of human knowledge but also from the fact that Reddit manages spam better than any other social media channel.

The prominence of Reddit is apparent in its sheer number of users, who not only access informative content by visiting the site but also extract that info for use elsewhere. To do so, Reddit offers an API that can be used to pull subreddit data on posts, comments, media, and votes. Though it sounds like a great idea, there are often challenges in converting the API skeleton into a full-fledged data churning machine, usually because using the Reddit API requires decent technical development skills before one can actually start extracting data.

In this article, we cover how Byteline’s no-code platform can be used to pull data through the Reddit API without writing a single line of code. While Byteline offers an out-of-the-box solution for exporting extracted Reddit data to a wide list of applications, we consider the use case of importing the fetched data into an Airtable base.

For the uninitiated, Airtable is a popular no-code database that is steadily gaining popularity as a data organizing tool — simple like a spreadsheet, with features of a relational database.

Byteline offers a No-Code Airtable Upsert (Update and Insert) service which you may use to pull data from an external source, and then feed it into your Airtable base. It intelligently figures out whether to update or insert a record without requiring any user input. For items requiring an update, it automatically figures out the record ID. Compare this to a complicated no-code logic required in other no-code platforms to figure out the right Item ID to update. All this, without writing a single line of code!

To understand this better, let us simplify the sources involved in the workflow:

Data Source — Reddit.

Data Extraction and API Handling — Reddit API through Byteline’s No-Code Platform.

Data Publish Target – Airtable.

Here is the list of a few simple steps that you can use to extract data from subreddits, and then publish it on an Airtable base. 

Fetching Reddit Data

The first step to get started is to configure the API that will be used to pull data from Reddit. Byteline allows you to configure the Reddit API through its Flow Designer, using a simple user interface, without writing a single line of code. To know more about Byteline’s list of integrations and features, click here.

Quick Tip — Byteline’s Flow is a logical flow of steps with easy-to-use buttons for quick configuration.

Step 1:

Start with creating a flow design that has the HTTP API as the first (trigger) node and the Reddit node as the second. Here is detailed information on HTTP trigger nodes.

For more detail on creating Byteline’s Flow Designer, visit: How to Create your First Flow help doc.

Flow Designer, with Flow Created and Grey Indicator

Here the Grey indicator shows that the Reddit node is not configured, which we will configure in the steps to follow.

Step 2:

Click on the Edit button of the Reddit node for configuration.

Edit the Reddit Node

Step 3:

Click on the Sign-in button of the configuration window to connect your Reddit account with the node to fetch data. This will open the Reddit authentication page in another browser tab.

Sign In to Reddit

Step 4:

Once logged into your Reddit account, click on the Refresh button to update the connection status. Enter the Subreddit name you want to access in the text field of the configuration window as shown. 

Step 5:

Once done, click on the Save button to save the Reddit node’s configuration.

Save Configuration

The green indicators show the Reddit node is configured successfully.

Green Indicator Means Go

Step 6:

Hit the Deploy button at the top-left corner of the console to deploy the flow created.

Deploy in the Flow Designer

Step 7:

As the last step of this part of the flow, click on the Run button to execute it. Even though you will eventually run your flow from different sources, it can be tested from the Flow Designer to make sure it is working correctly. With this, the Flow Designer is configured with the Reddit node to start extracting data through the HTTP API trigger.

Run Button

Pushing Reddit Data to Airtable

Once the initial stage of Flow Designer is configured, the next stage is to push the extracted Reddit data to Airtable records. 

Quick Tip: Byteline offers a no-code integration solution with multiple platforms, including WebFlow, Airtable, etc.

Step 1:

Add Airtable node to the existing flow.

Adding Airtable to Flow

Step 2:

Once the Airtable node is added, the Grey indicator on it shows that the node is not yet configured. To configure it, hit the Edit button of the Airtable Upsert node.

Grey Indicator on Airtable Node

Step 3:

Enter the Base Id, Table name, and View in the text field of the Airtable configuration window.

Add/Update Airtable Records

 Quick Tip: To find details of your Airtable Base, please refer to the image below: 

Details of your Airtable Base

Step 4:

Once details of your Airtable Base have been entered, click on the loop over checkbox to apply the loop to a JSON array. Refer to our loop over documentation to understand this concept.

Step 5:

Click on Select Variable Tool to view the data model. 

Select Variable Tool

Step 6:

Click on the Grey button of an array to pick the path.

Quick Tip: This is an important step to note and select the variables that you want to pull into the workflow. Here is the documentation on understanding flow variables.

Grey Box to Select Path

 

Step 7:

Enter the JSON array path in the loop over the text field to execute the loop over the array. 

Loop Over Field

Step 8:

Enter the variable in the text field with the syntax @.data.created | datetime. Here, @ is used to fetch the current value of the variable, and the | pipe converter is used to convert the data to an acceptable format.

Field Values

Step 9:

Click on the Save button to save the Airtable configuration. Once done, the Green indicator shows that the Airtable node is configured successfully. You can now test this flow by deploying and running as we did in the previous section of configuring the Reddit node.

Nodes Configured

Step 10:

Before you can use the Airtable node, you need to configure the API Key to access your Airtable account. You can follow the steps here to do that. The connections page screenshot is below.

Connections Page

With this, your Byteline Flow Designer is now fully set up to fetch data from Reddit and add or update into Airtable Base records.

Conclusion

In this article, we covered how one can use Byteline and the Reddit API to extract data and feed it into an Airtable base. While there might be numerous use cases for doing so, the reason is always the same: to leverage Reddit’s vast amount of informational content.

While Reddit is known for its user-generated stories, Airtable is a unique platform that blends the features of a spreadsheet and database. To connect both of these together, Byteline through its no-code platform helps you generate an extract-to-load workflow in a few simple steps without writing a single line of code.

Try it once, and let us know if you have any feedback.


This article was originally published on https://www.byteline.io/blog/extracting-reddit-data-to-airtable and has been authorized by Byteline for a republish.




How To Develop a HIPAA Compliant m-Health App


Introduction

Have you dealt with the healthcare industry?

Well, surely you might have and would have also heard about HIPAA compliance. If not, let’s understand what HIPAA means before we move ahead on how to develop a HIPAA Compliant m-Health App.

HIPAA stands for the Health Insurance Portability and Accountability Act, which protects the privacy of medical records and personal health information of individuals. It applies to healthcare providers such as doctors, dentists, and pharmacies. HIPAA also covers health insurance companies, government programs, and HMOs.

Now, the question arises: does my m-health app need to be HIPAA compliant?

The straight answer to this question is yes! m-Health apps also come under HIPAA compliance, since they collect and store users’ personal health information and share it with entities dealing in healthcare services like those mentioned above.

The biggest reason behind the compliance is the intent to protect patients’ privacy. Data breaches in the healthcare industry have already caused plenty of issues on the financial front. According to IBM’s report, “The data breach hit hard in 2020, costing $7.13 million annually, where 80% of the information resulted in the exposure of personal information of the customers.”

Thus, healthcare organizations need to develop HIPAA compliant apps to enhance security and protect customers’ personal information.

The best way is to hire health tech software developers to develop a HIPAA compliant app for your business. They help you build a HIPAA compliant healthcare app that will streamline all the administrative healthcare functions, improve efficiency, and ensure that the PHI is shared safely.

However, if you are planning to build one, scroll down to know more about developing a HIPAA compliant healthcare app!

Four Crucial Rules To Develop a HIPAA Compliant m-Health App

HIPAA Compliance

You need to follow the four most important rules to make a HIPAA-compliant m-Health app.

1. Privacy Rule

The privacy rule mandates the protection and privacy of all health information that is individually identifiable. It sets rules to control and protect health information in any form or medium.

2. Security Rule

The security rule is concerned with the security of electronic medical records (EMR) and addresses the issues related to the technical aspect of protecting electronic health information. It considers security at three levels that include:

  • Administrative security: Here the responsibility of securing the information lies on an individual.
  • Physical security: It is concerned with providing security to electronic systems, equipment, and data.
  • Technical security: It is concerned with authentication and encryption used to control access to data.

3. Enforcement Rule

The HIPAA enforcement rule comes from the HITECH act that expands the scope of HIPAA rules related to the privacy and security of individual data. It further contains the penalties and increased reach for the violation of HIPAA rules.

4. Breach Notification Rule

The HIPAA breach notification rule also comes from the HITECH act that requires entities and their business associates to report breaches of PHI to affected individuals, HHS, and media within 60 days of breach discovery.

What Is the Significance of These Rules for the M-Health App Developers?

All of these rules are of great importance to the developers, as they are concerned with safeguarding the technical and physical information related to customers and organizations involved. Here, physical safeguards include the protection of the backend, data transfer networks, and user devices like iPhones or any other devices on iOS or Android. These could be stolen, compromised, or lost by accident.

Apart from this, the developers need to ensure the app’s security by enforcing regular authentication to enhance safety without compromising on user-friendliness. You can allow fingerprint authentication for the users that will be easy for them and will also protect the information in case the device is stolen or lost.

However, the user shouldn’t store any PHI on the memory card, as it is vulnerable to security risks due to the lack of strong access permissions. To make an app fully compliant with HIPAA, you need to ensure that the data is fully encrypted so it cannot be accessed easily by anyone in case the device is lost or stolen.

It comes under the technical aspects, where the developer focuses on encrypting the data stored in the device by considering the following:

  • Unique user identification
  • Emergency access procedure
  • Encryption and Automatic logoff

Another important thing that you must keep in mind is to never send PHI data in push notifications and leak it on backups and logs. This brings us to the must-have features of a HIPAA compliant app that we have discussed in the section below.

Let’s have a look!

Must-Have Features of a HIPAA Compliant m-Health App

Must have features of a HIPAA Compliant m-Health App

When it comes to developing HIPAA Compliant m-Health App, there are a few common features that we have already pointed out in the section above. Here are the must-have features:

1. User Identification

As we discussed user authentication above, you can introduce a PIN or password, or level it up by implementing biometric identification like a fingerprint or a smart card.

2. Emergency Access

Essential services usually face disruptions during natural emergencies, so make sure you implement a solution that addresses the issue of emergency access.

3. Encryption

Encryption is the most crucial need for protecting PHI data, whether stored on the device or in transit. When you use services like Google Cloud or AWS, you get transport encryption, as they run Transport Layer Security (TLS) 1.2.

Apart from this, automatic logoff is crucial from the perspective of protecting the data from being stolen in case the user has lost the device.

How To Develop a HIPAA Compliant m-Health App

How to develop a HIPAA compliant m-health app

Hire Dedicated Healthcare Developers

The very first step to developing HIPAA Compliant m-Health App is to hire dedicated healthcare developers with relevant experience who can help audit your system. Avoid taking help from freelancers as they may not have all the resources when it comes to developing such an app.

Evaluate Patient Data and Eliminate Risks Involved

After you consult and hire dedicated healthcare developers, move ahead with evaluating the patient’s data and find out what comes under PHI. After you identify the PHI data, analyze what you can avoid storing on the mobile app.

This way, you can store only the relevant information thereby saving yourself from leaking anything unnecessarily. Also, write a clear privacy policy so you can adhere to the industry standards.

Encrypt the Data

Now comes the time to encrypt the data, after you have figured out which crucial information is to be stored on or transmitted through the device. We have already discussed the need for this in the must-have features section above. Use App Transport Security, which links mobile apps to back-end servers over HTTPS, to encrypt PHI data in transit and help prevent man-in-the-middle attacks. Moreover, storing data as hashed values further safeguards it from attack.

Strengthen the Environment

When it comes to maintaining the safety and security of the app, don’t send push notifications that contain PHI, as they are not safe. Make sure that the app’s local session times out after a specific period. The user should isolate the app that contains crucial data from other apps on the smartphone. On iOS, make sure to store your encryption keys in a protected enclave.

Resort to Security Testing

After you have made sure that the environment for the HIPAA compliant app is apt, move forward with security testing. You can carry out static as well as dynamic application tests to ensure security. Resort to a third-party audit and have it checked by a HIPAA expert who will go through all the documentation. The expert may conduct a few penetration tests to spot vulnerabilities.

So, this was about HIPAA compliant apps that will soon be the prime demand, owing to the deep impact that the coronavirus pandemic has left on the world. Thus, more and more people will be resorting to digital apps and the companies developing these apps will have to focus on compliance adherence.

So, when you hire dedicated healthcare developers, make sure they understand the nuances of HIPAA compliance well and implement them in the app.




Test Automation Using Selenium ChromeDriver


Introduction

As per browser market share, Google Chrome is the most used cross-platform browser in the world. Every new Chrome version comes with exciting features that increase the importance and usage of the Chrome browser. Hence, it becomes essential to test our web applications on such a popular browser.

Performing different test cases manually on different chrome versions can be hectic and challenging. To overcome this challenge, it is necessary to perform test automation on the chrome browser.

Selenium is an open-source project offering a variety of tools and libraries for web browser automation. It is primarily used to write scripts to automate the end-user interactions and to test site functionality in a much faster way.

Chrome officially provides an OS-dependent driver which establishes a connection between Selenium WebDriver and Google Chrome browser. Once the connection gets established, we are good to go with selenium tests on the chrome browser.

What Is Selenium ChromeDriver?

ChromeDriver is a standalone server that provides the communication channel between Selenium WebDriver and Chromium-based browsers. The two protocols used by Selenium ChromeDriver to interact with the browser are the JSON Wire Protocol and the W3C protocol. These protocols are responsible for translating Selenium commands into the corresponding actions on the Chrome browser.

The primary purpose of Selenium ChromeDriver is to launch the browser and perform the desired automated operations. ChromeDriver now supports different capabilities, for example: running tests in incognito mode, headless mode, disabling extensions and pop-ups, etc.

The general syntax to setup Selenium ChromeDriver is:

In the above syntax, WebDriver is an interface that is being extended by ChromeDriver class, hence, all the methods which are declared in the WebDriver interface are implemented by the respective driver class.

ChromeDriver Installation

To run our Selenium tests on Chrome browser, it is important to have a ChromeDriver executable file in our Selenium project. To download the respective ChromeDriver, we need to check the current chrome version installed in our testing machine. According to the chrome browser version, we need to download the compatible ChromeDriver. Here is the link to download the ChromeDriver for your Selenium project.

As per the below screenshot, the Chrome version installed in the testing machine is 88.0.4324.96, hence, the ChromeDriver to be installed must be of version 88.0.4324.96 as well.

Chrome version relaunch after installation

Index of Chrome Drivers


Note: Since the ChromeDriver is OS-dependent, you will need to make sure you download the chromedriver version that is OS-compatible.

Setting Up Maven Project To Run Our First Selenium Based Chrome Test

There are a few prerequisites that need to be taken care of before running your first Selenium test:

  1. Java (JDK/JRE)
  2. Eclipse or IntelliJ IDE
  3. Maven

To avoid downloading dependencies manually, it is good practice to create a Maven project, which lets you declare all dependencies in a pom.xml file.

Now since we have downloaded the ChromeDriver, we just need to add two dependencies in our pom file.

1. Selenium Dependency

2. TestNG Dependency
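The dependency snippets were stripped from the original post; a typical pom.xml fragment looks like the following (the version numbers are illustrative assumptions; check Maven Central for the latest releases):

```xml
<dependencies>
  <!-- 1. Selenium (Java bindings) -->
  <dependency>
    <groupId>org.seleniumhq.selenium</groupId>
    <artifactId>selenium-java</artifactId>
    <version>3.141.59</version>
  </dependency>
  <!-- 2. TestNG -->
  <dependency>
    <groupId>org.testng</groupId>
    <artifactId>testng</artifactId>
    <version>7.3.0</version>
    <scope>test</scope>
  </dependency>
</dependencies>
```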

For example, let’s try to launch a pCloudy login page via Google Search.
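The example code was lost in extraction; a sketch of such a TestNG test is below (the locators and the driver path are assumptions for illustration):

```java
import org.openqa.selenium.By;
import org.openqa.selenium.Keys;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.annotations.AfterTest;
import org.testng.annotations.BeforeTest;
import org.testng.annotations.Test;

public class ChromeDriverDemo {
    private WebDriver driver;

    @BeforeTest
    public void launchBrowser() {
        // Location of the ChromeDriver executable downloaded earlier
        System.setProperty("webdriver.chrome.driver", "/path/to/chromedriver");
        driver = new ChromeDriver();
        driver.manage().window().maximize();
    }

    @Test
    public void openPCloudyLoginPage() {
        driver.get("https://www.google.com");
        // Search for the pCloudy keyword and open the matching result
        driver.findElement(By.name("q")).sendKeys("pCloudy", Keys.ENTER);
        driver.findElement(By.partialLinkText("pCloudy")).click();
    }

    @AfterTest
    public void closeBrowser() {
        driver.quit();
    }
}
```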

Code Walkthrough

In the above example, we have demonstrated a TestNG test where the BeforeTest annotation is used to launch and maximize the browser before the actual test begins. To launch the browser, we have used the setProperty() method to specify the location of the ChromeDriver executable file that we downloaded according to the Chrome version installed on the system.

The next command is the primary command that launches the web browser (syntax explained above). In the main test method, we search Google for the pCloudy keyword and navigate to the pCloudy login page. Finally, the AfterTest annotation quits the Chrome browser once the test method execution completes.

Important Note: Instead of specifying the ChromeDriver path in every Selenium project using the setProperty() method, we can also save the ChromeDriver path in environment variables.

Use Of ChromeOptions Class

ChromeOptions class provides some advanced arguments for manipulating Selenium ChromeDriver properties. This class can also be used in conjunction with Desired Capabilities. Below are the arguments most commonly used with the ChromeOptions class:

  • start-maximized: Opens the Chrome browser in maximized mode
  • headless: Launches Chrome in headless mode, i.e. without a GUI
  • incognito: Launches the Chrome browser in incognito mode
  • version: Displays/prints the Chrome browser version
  • disable-extensions: Disables existing Chrome extensions
  • disable-popup-blocking: Disables pop-ups being displayed on Chrome

The General Syntax To Declare ChromeOptions Class
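The syntax snippet did not survive extraction; the usual declaration pattern is:

```java
// Configure options, then hand them to the ChromeDriver constructor
ChromeOptions options = new ChromeOptions();
options.addArguments("start-maximized");
WebDriver driver = new ChromeDriver(options);
```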

Now since we have understood the basic usage of ChromeOptions class, let’s try to understand this better with some of the above-defined operations:
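The original example code is missing here; a sketch matching the walkthrough below (incognito plus start-maximized, with a placeholder driver path) could look like this:

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;

public class ChromeOptionsDemo {
    public static void main(String[] args) {
        System.setProperty("webdriver.chrome.driver", "/path/to/chromedriver");

        // Run in incognito mode with a maximized browser window
        ChromeOptions options = new ChromeOptions();
        options.addArguments("incognito", "start-maximized");

        WebDriver driver = new ChromeDriver(options);
        driver.get("https://www.google.com");
        driver.quit();
    }
}
```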

Code Walkthrough

Here we have used the same test method as in the previous example. The only difference is the use of the ChromeOptions class. In this example, we run our test script in incognito mode with a maximized browser window. The operations we declared, i.e. incognito and start-maximized, are added as arguments to the ChromeOptions object, which is then passed to the ChromeDriver constructor.

Here is a list of a few other advanced methods provided by ChromeOptions Class to set up the Selenium ChromeDriver properties; let’s have a quick look at those as well:

1. To add a new extension

This method is to add an extension to the chrome browser while running your automation test. All extensions are stored in the system with the .crx extension.
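The snippet was lost in extraction; ChromeOptions exposes an addExtensions() method for this (the extension path is a placeholder):

```java
import java.io.File;
import org.openqa.selenium.chrome.ChromeOptions;

// Load a packed extension (.crx file) into the browser session
ChromeOptions options = new ChromeOptions();
options.addExtensions(new File("/path/to/extension.crx"));
```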

2. To add a new binary path

This method is used to specify the binary file path. The binary file path can be of chrome binary or any other binary being used in automated tests.
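The snippet was lost in extraction; the relevant ChromeOptions method is setBinary() (the binary path is a placeholder):

```java
import org.openqa.selenium.chrome.ChromeOptions;

// Point the driver at a specific Chrome binary
ChromeOptions options = new ChromeOptions();
options.setBinary("/path/to/google-chrome");
```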

3. To accept an untrusted certificate

This method allows the chrome browser to accept insecure website certificates.
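The snippet was lost in extraction; one way to do this is the setAcceptInsecureCerts() method inherited from MutableCapabilities:

```java
import org.openqa.selenium.chrome.ChromeOptions;

// Allow the browser to proceed past insecure/untrusted certificates
ChromeOptions options = new ChromeOptions();
options.setAcceptInsecureCerts(true);
```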

Manage Chrome Binary With WebDriverManager

As discussed above, we always have to download a compatible ChromeDriver version to run our automated scripts on the Chrome browser. Whenever Chrome updates to a newer version, downloading the matching ChromeDriver becomes mandatory, which gets cumbersome as Chrome versions keep updating.

To overcome this, there is another open-source project named “WebDriverManager” that automates the management of different browser drivers.

Importing WebDriverManager in our project avoids the explicit downloading of browser drivers and thus avoids the use of the setProperty() method to specify the browser driver path.

To import WebDriverManager in your maven-selenium project, you need to add its maven dependency in the pom.xml file:
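The dependency snippet is missing from the scraped post; it looks like the following (the version number is an illustrative assumption; check Maven Central for the latest):

```xml
<dependency>
  <groupId>io.github.bonigarcia</groupId>
  <artifactId>webdrivermanager</artifactId>
  <version>4.3.1</version>
</dependency>
```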

Now since we have added WebDriverManager maven dependency, let’s look at the general syntax that is being used to instantiate a browser using WebDriverManager in Selenium.
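The syntax snippet was lost in extraction; the usual pattern is:

```java
// Resolves, downloads (if needed), caches, and exports the driver path
WebDriverManager.chromedriver().setup();

// After setup(), ChromeDriver can be instantiated without setProperty()
WebDriver driver = new ChromeDriver();
```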

With the above syntax, WebDriverManager does the magic for you:

  • It checks the browser version installed in your machine (e.g. Chrome, Firefox).
  • It matches the version of the driver (e.g. ChromeDriver, GeckoDriver).
  • If an unknown version is found, it uses the latest version of the driver. It downloads the driver if it is not present on the WebDriverManager cache (~/.cache/selenium by default).
  • It exports the Selenium required WebDriver Java environment variables.

Let’s have a look at a practical example using WebDriverManager:
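The example code was stripped from the original post; a sketch of such a test (the URL is an assumption for illustration) is:

```java
import io.github.bonigarcia.wdm.WebDriverManager;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.annotations.AfterTest;
import org.testng.annotations.BeforeTest;
import org.testng.annotations.Test;

public class WebDriverManagerDemo {
    private WebDriver driver;

    @BeforeTest
    public void setUp() {
        // No setProperty() call needed: WebDriverManager resolves the driver
        WebDriverManager.chromedriver().setup();
        driver = new ChromeDriver();
    }

    @Test
    public void printPageTitle() {
        driver.get("https://www.google.com");
        System.out.println("Page title: " + driver.getTitle());
    }

    @AfterTest
    public void tearDown() {
        driver.quit();
    }
}
```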

Console Output


For more details on WebDriverManager, please have a look at its official GitHub repository.

Conclusion

Google Chrome is known to be one of the most popular browsers in the market, and hence the need to automate browser testing of your web app on Chrome becomes absolutely crucial. Thankfully, using ChromeDriver, we can drive the browser with ease to perform Selenium test automation.



Source link


Image Fragmentation Effect With CSS Masks and Custom Propert…


Geoff shared this idea of a checkerboard where the tiles disappear one-by-one to reveal an image. In it, an element has a background image, then a CSS Grid layout holds the “tiles” that go from a filled background color to transparent, revealing the image. A light touch of SCSS staggers the animation.

I have a similar idea, but with a different approach. Instead of revealing the image, let’s start with it fully revealed, then let it disappear one tile at a time, as if it’s floating away in tiny fragments.

Here’s a working demo of the result. No JavaScript handling, no SVG trickery. Only a single <img> and some SCSS magic.

Cool, right? Sure, but here’s the rub: you’re going to have to view this in Chrome, Edge, or Opera because those are the only browsers with support for @property at the moment, and that’s a key component of this idea. We won’t let that stop us, because this is a great opportunity to get our hands dirty with cool CSS features, like masks and animating linear gradients with the help of @property.

Masking things

Masking is sometimes hard to conceptualize and often gets confused with clipping. The bottom line: masks are images. When an image is applied as a mask to an element, any transparent parts of the image let us see right through the element, while any opaque parts make the element fully visible.

Masks work the same way as opacity, but on different portions of the same element. That’s different from clipping, which is a path where everything outside the path is simply hidden. The advantage of masking is that we can have as many mask layers as we want on the same element, similar to how we can chain multiple images on background-image.

And since masks are images, we get to use CSS gradients to make them. Let’s take an easy example to better understand the trick.

img {
  mask:
    linear-gradient(rgba(0,0,0,0.8) 0 0) left,  /* 1 */
    linear-gradient(rgba(0,0,0,0.5) 0 0) right; /* 2 */
  mask-size: 50% 100%;
  mask-repeat: no-repeat;
}

Here, we’re defining two mask layers on an image. They are both a solid color but the alpha transparency values are different. The above syntax may look strange but it’s a simplified way of writing linear-gradient(rgba(0,0,0,0.8), rgba(0,0,0,0.8)).

It’s worth noting that the color we use is irrelevant since the default mask-mode is alpha. The alpha value is the only relevant thing. Our gradient can be linear-gradient(rgba(X,Y,Z,0.8) 0 0) where X, Y and Z are random values.

Each mask layer is equal to 50% 100% (or half width and full height of the image). One mask covers the left and the other covers the right. At the end, we have two non-overlapping masks covering the whole area of the image and, as we discussed earlier, each one has a differently defined alpha transparency value.

We’re looking at two mask layers created with two linear gradients. The first gradient, on the left, has an alpha value of 0.8. The second gradient, on the right, has an alpha value of 0.5. The first gradient is more opaque, meaning more of the image shows through. The second gradient is more transparent, meaning more of the background shows through.

Animating linear gradients

What we want to do is apply an animation to the linear gradient alpha values of our mask to create a transparency animation. Later on, we’ll make these into asynchronous animations that will create the fragmentation effect.

Animating gradients is something we’ve been unable to do in CSS. That is, until we got limited support for @property. Jhey Tompkins did a deep dive into the awesome animating powers of @property, demonstrating how it can be used to transition gradients. Again, you’ll want to view this in Chrome or another Blink-powered browser:

In short, @property lets us create custom CSS properties where we’re able to define the syntax by specifying a type. Let’s create two properties, --c-0 and --c-1, that take a number with an initial value of 1.

@property --c-0 {
   syntax: "<number>";
   initial-value: 1;
   inherits: false;
}
@property --c-1 {
   syntax: "<number>";
   initial-value: 1;
   inherits: false;
}

Those properties are going to represent the alpha values in our CSS mask. And since they both default to fully opaque (i.e. 1), the entire image shows through the mask. Here’s how we can rewrite the mask using the custom properties:

/* Omitting the @property blocks above for brevity */

img {
  mask:
    linear-gradient(rgba(0,0,0,var(--c-0)) 0 0) left,  /* 1 */
    linear-gradient(rgba(0,0,0,var(--c-1)) 0 0) right; /* 2 */
  mask-size: 50% 100%;
  mask-repeat: no-repeat;
  transition: --c-0 0.5s, --c-1 0.3s 0.4s;
}

img:hover {
  --c-0:0;
  --c-1:0;
}

All we’re doing here is applying a different transition duration and delay for each custom property. Go ahead and hover the image. The first gradient of the mask will fade out to an alpha value of 0 to make the image totally see-through, followed by the second gradient.

More masking!

So far, we’ve only been working with two linear gradients on our mask and two custom properties. To create a tiling or fragmentation effect, we’ll need lots more tiles, and that means lots more gradients and a lot of custom properties!

SCSS makes this a fairly trivial task, so that’s what we’re turning to for writing styles from here on out. As we saw in the first example, we have a kind of matrix of tiles. We can think of those as rows and columns, so let’s define two SCSS variables, $x and $y, to represent them.
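The variable definitions themselves were lost in extraction; a minimal sketch (the values are illustrative assumptions, and $s is the duration scale used by the transition() function later on):

```scss
$x: 7;  // number of columns of tiles
$y: 5;  // number of rows of tiles
$s: 1;  // duration scale, in seconds, used by transition()
```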

Custom properties

We’re going to need @property definitions for each one. No one wants to write all those out by hand, though, so let’s allow SCSS do the heavy lifting for us by running our properties through a loop:

@for $i from 0 through ($x - 1) {
  @for $j from 0 through ($y - 1) {
    @property --c-#{$i}-#{$j} {
      syntax: "<number>";
      initial-value: 1;
      inherits: false;
    }
  }
}

Then we make all of them go to 0 on hover:

img:hover {
  @for $i from 0 through ($x - 1) {
    @for $j from 0 through ($y - 1) {
      --c-#{$i}-#{$j}: 0;
    }
  }
}

Gradients

We’re going to write a @mixin that generates them for us:

@mixin image() {
  $all_t: (); // Transition
  $all_m: (); // Mask
  @for $i from 0 through ($x - 1) {
    @for $j from 0 through ($y - 1) {
      $all_t: append($all_t, --c-#{$i}-#{$j} transition($i,$j), comma);
      $all_m: append($all_m, linear-gradient(rgba(0,0,0,var(--c-#{$i}-#{$j})) 0 0) calc(#{$i}*100%/(#{$x} - 1)) calc(#{$j}*100%/(#{$y} - 1)), comma);
    }
  }
  transition: $all_t;
  mask: $all_m;
}

All our mask layers are equally sized, so we only need one property for this, relying on the $x and $y variables and calc():

mask-size: calc(100%/#{$x}) calc(100%/#{$y})

You may have noticed this line as well:

$all_t: append($all_t, --c-#{$i}-#{$j} transition($i,$j), comma);

Within the same mixin, we’re also generating the transition property that contains all the previously defined custom properties.

Finally, we generate a different duration/delay for each property, thanks to the random() function in SCSS.

@function transition($i,$j) {
  @return $s*random()+s $s*random()+s;
}

Now all we have to do is to adjust the $x and $y variables to control the granularity of our fragmentation.

Playing with the animations

We can also change the random configuration to consider different kind of animations.

In the code above, I defined the transition() function like below:

// Uncomment one to use it
@function transition($i,$j) {
  // @return (($s*($i+$j))/($x+$y))+s (($s*($i+$j))/($x+$y))+s; /* diagonal */
  // @return (($s*$i)/$x)+s (($s*$j)/$y)+s; /* left to right */
  // @return (($s*$j)/$y)+s (($s*$i)/$x)+s; /* top to bottom */
  // @return  ($s*random())+s (($s*$j)/$y)+s; /* top to bottom random */
  @return  ($s*random())+s (($s*$i)/$y)+s; /* left to right random */
  // @return  ($s*random())+s (($s*($i+$j))/($x+$y))+s; /* diagonal random */
  // @return ($s*random())+s ($s*random())+s; /* full random*/
}

By adjusting the formula, we can get different kinds of animation. Simply uncomment the one you want to use. This list is non-exhaustive; we can create any combination by considering more formulas. (I’ll let you imagine what’s possible if we add advanced math functions, like sin(), sqrt(), etc.)

Playing with the gradients

We can still play around with our code by adjusting the gradient so that, instead of animating the alpha value, we animate the color stops. Our gradient will look like this:

linear-gradient(white var(--c-#{$i}-#{$j}),transparent 0)

Then we animate the variable from 100% to 0%. And, hey, we don’t have to stick with linear gradients. Why not radial?

Like the transition, we can define any kind of gradient we want — the combinations are infinite!

Playing with the overlap

Let’s introduce another variable to control the overlap between our gradient masks. This variable will set the mask-size like this:

calc(#{$o}*100%/#{$x}) calc(#{$o}*100%/#{$y})

There is no overlap if it’s equal to 1. If it’s bigger, then we do get an overlap. This allows us to make even more kinds of animations:

That’s it!

All we have to do is to find the perfect combination between variables and formulas to create astonishing and crazy image fragmentation effects.





Is it me or the website of Twitter is very bad? : web_design


Hey guys..

I was not a twitter user and regarding the webdev world I knew twitter because of Bootstrap..

But heck, I just made a Twitter account, and the entire process was crap regarding UI and even UX!

I found it so bad I was wondering whether it was my web browser that hadn’t applied the CSS styles (which I doubt – I’m using Vivaldi, which is Chrome-based)

Here for example there are so many things wrong even on the main page (the alignments, typos, etc…)


I can’t believe that it’s the real twitter and not just a student try-out that wants to ask for improvements..

But really the forms were even worse..




r/graphic_design - [OC] Practicing in Illustrator by designing some merch and title cards for my D&D podcast.



Title card for the podcast

Image description: a truncated purple d20 die grows out of a stalk, much like a mushroom, on a black field below white text that says “Polyvox: an Anthrogang Production”


Experimenting with mushroom serif-like designs

Image description: white text in Royal Signage font that says “No thoughts; only Spores.” The lettering has mushrooms growing out of it, and is placed on a stylized purple spore print (Psilocybe semilanceata) on a black field.


A design for a tee shirt.

Image description: a brown mannequin with green eyes hanging by green vines, centered on a purple circle on a black field between two lines of Royal Signage text that read “I survived the Gateway Grove; and all I got was this creepy puppet.”


A magic item in the campaign.

Image description: an amber ellipse surrounded by concentric silver-grey ellipses. Below the assemblage there is a circle with a runic inscription, and above there is a cutout for a loop of chain to go through; this is a pendant for a necklace. Suspended in the amber is a stylized fossil of a worker leafcutter ant.


