
I Took a COBOL Course and I Liked It


COBOL is in the news again. Millions of people are filing unemployment claims nearly all at once, and the systems to process them are failing. Why? They need to scale to unprecedented levels — they’re written in COBOL, and… we don’t have enough COBOL programmers.

Here’s a look at the increase in searches for “COBOL programmers”:

[Image: Number of COBOL searches]

Most COBOL programmers are retired. The pipeline of new COBOL programmers is nearly nonexistent. Many are coming out of retirement just to help.

This piqued my curiosity. I know nothing about COBOL, other than it’s old, still used a lot, and it was created by one of my heroes, Admiral Grace Hopper. I looked to see if Pluralsight had a course about it, and they do. You can take a COBOL course right now. It’s 100% free in April, and I recommend you check it out.

Why Did I Take a COBOL Course?

[Image: Punch tape]
So, after reading about this problem, I started thinking about COBOL again and got curious. Do I want to get a job as a COBOL programmer? Am I going to use this? No. As much as I love to roll up my sleeves and help in a crisis, I love my job and where I’m at. I don’t want to be a COBOL programmer. I’m just curious. I had to know how this decades-old language really works.

So I took this course and found out. Here’s what I learned.

How COBOL Programs Work

As I started to dig in, I immediately started hearing terms and acronyms I wasn't familiar with. This must be how a non-technical person feels when they're trying to understand new technology. It's an eye-opener. There's a big COBOL world out there, and I was unaware of it. I forged on.

COBOL programs are text (ok, that I can relate to) and they are divided into four main divisions:

  • Identification Division
  • Environment Division
  • Data Division
  • Procedure Division

As a seasoned developer, I can probably guess what most of these things are, but I kept going to see what I'd learn. My first impression was: this is a great way to organize applications. We do this in some form in many applications, but these hard-and-fast rules for organization? I like it.

[Image: Program divisions in COBOL]

Identification Division

This provides identifying information, such as the name of the program, who wrote it, and the date they compiled it. We see things like this in comments and package files these days.

Environment Division

This area tells what kind of system it’s being built on, what compiler is being used, and similar information. It’s not declarative but gives a programmer a general idea of the environment to build it in.

Data Division

This is for defining file fields. What type of data will come in? How is it defined? This can be internal fields such as accumulators, flags, and counters, or external fields such as parameters passed to your program.

Procedure Division

This is where the action happens. Statements (called sentences in COBOL), methods, computations: all the "meat" of the program lives here. This is the largest division of a COBOL program.

This kind of organization is great. It makes sense. We have this in nearly every language we develop with today. You punch in text that’s compiled to an executable run by the computer. That’s where the similarities end.
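To make the four divisions concrete, here is a minimal sketch of what a COBOL program can look like. The program name and data fields are made up for illustration, and the layout follows the traditional fixed-format columns:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. SALES-TOTALS.
       AUTHOR. JANE-DOE.

       ENVIRONMENT DIVISION.
       CONFIGURATION SECTION.

       DATA DIVISION.
       WORKING-STORAGE SECTION.
      * INTERNAL FIELDS SUCH AS ACCUMULATORS AND COUNTERS GO HERE
       01 WS-SALE-AMOUNT   PIC 9(7)V99 VALUE 125.50.
       01 WS-TOTAL-SALES   PIC 9(9)V99 VALUE ZERO.

       PROCEDURE DIVISION.
       MAIN-PARA.
           ADD WS-SALE-AMOUNT TO WS-TOTAL-SALES
           DISPLAY "TOTAL SALES: " WS-TOTAL-SALES
           STOP RUN.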

How Is COBOL Used?

[Image: Implied signs in COBOL]

COBOL reads files, performs actions on the data, and creates reports or stores the new data. It's suited for large-scale transaction processing. Think about something like calculating sales numbers for 5 million transactions. It's primarily used on mainframe computers for finance and administrative systems.

According to this survey, 25,757 companies use COBOL. More than half of the companies I’ve worked for in my career used it somewhere. It’s out there everywhere, quietly processing our transactions every day.

Why Is It Still Around?

[Image: Grace Hopper in the lab]

My guess is: because it works. Folks love to make jokes about COBOL, but this is a language that strives for reliability. It’s happy when you feed it large amounts of data. It’s used to process millions of records if needed, and “just works” day after day.

It’s primarily used in finance and government because organizations invested a lot of time and money to build COBOL systems, and these applications continue to chug away and do their job decades later. COBOL has a focus on performance and backward compatibility like no other.

If it isn’t broke, don’t fix it. The biggest weakness of COBOL seems to be the number of people who know it, not the language itself.

Similarities to Modern Development

[Image: A COBOL program]

As I mentioned, you write code as text in a .SOURCE file, which gets compiled into a .LOADLIB file. Those files are then run by .JCL files. There are compilers and linkers and other pieces that seem familiar.

There are a few other aspects of COBOL that will be familiar to modern developers:

Output for Debugging — COBOL has error codes and other output. You can pipe them to SYSOUT, or a file. You can make it verbose to see what’s happening for debugging purposes, like any other language.

Reusable code — COBOL has functions, which are crucial for abstraction, and other ways of creating reusable code. It's even object-oriented if that's your thing. You can build libraries to reuse.

Types are important — Type definition, conversions, and all those other headaches are present in COBOL. Types are important, but data formatting is even more important. You really have to get this right in COBOL.

Naming is important — You have a lot of freedom when naming things in COBOL, and how you do it is important. There seems to be an emphasis on the idea that “this program will grow large and be maintained for decades”, which is something you don’t think about as much when writing an Angular app.

The period is your semicolon — In COBOL, everything, and I mean everything, is terminated with a period. This is the idea of "sentences" in COBOL. The language strives to be as human-readable as possible.

COBOL has familiar control structures — Not surprisingly, you have many loops, if statements, and other control structures you recognize from whatever programming language you’re using today.

Jumping into COBOL, I found it easy to understand the basics of getting rolling, based on my experience with other languages. You can get a general idea of how it works; it's not completely foreign. Until you start working with the code.

Differences from Modern Development

Once you get started working with COBOL, you’ll notice some big differences between this and casual JavaScript coding.

You need a mainframe — To truly develop COBOL, you need a mainframe. I’ve found some simulators online, and GnuCOBOL is pretty cool, but the real deal lives on mainframes, so you’ll need access to one for true COBOL development.

Human readable — Everything is very human-readable, though the code is in all caps, so it’s yelling. Statements are called “sentences” that are punctuated by a period. The period is small, so it’s tough to find where you forgot to add it.

Rigid Syntax — COBOL earned its reputation for rigid syntax. There are spaces at the beginning and end of every line (like bookends), and every single space and character matters. You need an eagle eye for coding in this language.

Wild constraints — There are constraints in place in 2020 that shouldn’t exist, but COBOL is dedicated to backward compatibility. A line of COBOL code is always 80 characters (unless it isn’t), and reports are 132 characters wide. Why? Because of punch cards, and old school impact line printers. Yes, really.

Resource stinginess is built-in — You can really see how they designed the language around conserving resources. Memory, hard drive space, and CPU cycles were scarce, so use them wisely. It's easy for modern developers to forget how expensive and precious these things once were (and we should still treat them that way now).

You must be very explicit — There isn't much "loosey goosey" programming happening in COBOL; it won't allow it. You have to be intentional in everything you do. It's very unforgiving, and that's a good thing.

Jobs are a big focus of the language — It’s clear that COBOL is modeled around doing “jobs”. The programs are not real-time interactive programs like many applications; they’re designed to run at specified times, and the code is run from top to bottom. It’s meant to read in data, do things with that data, then spit it out.

What I Got out of This Course

After taking this course, I developed a respect for the language. COBOL gets a bad rap mostly because of its age and the Y2K bug (which wasn’t the fault of COBOL), but it’s still around for a reason: it was designed to be rock solid from the start. It’s not without its faults, but nobody can argue the durability of a system that runs for decades.

I feel comfortable enough with COBOL to have a general understanding of how it works. I wouldn’t hire me as a COBOL programmer, but I’m confident this course would put me on that path if I so desired. The author did a fantastic job of explaining the concepts, being funny/not boring, and explaining best practices and pitfalls very well.

If you’re a developer in 2020, you could learn a lot from this course. You can learn a few principles (that I made up myself) that seem to be prevalent in the world of COBOL:

  • Preserve your resources — Memory, disk space, and CPU cycles are not free. Conserve them as much as possible with what you build.
  • Be explicit in everything you do — Take the time to figure out what you need and then declare it. Think arrays and structs instead of lists and generics. Performance comes from knowing exactly what you need and when.
  • Write code as if it will live for decades — For COBOL programmers, it’s true. Think ahead and act as if your code will live on for years. How should you write it so it can be maintained further down the line?
  • Avoid breaking changes — Modern developers love reinventing the wheel. We also like introducing breaking changes and telling people to upgrade if they don’t like it. COBOL takes the approach to only break things when there is no other option. We need a little more of that in 2020.

You should take Getting Started with Mainframe COBOL. You’d be surprised what you’ll learn. If you do, let me know what you think of it! I’d love to chat about it.




Building a Desktop App With the RingCentral Embeddable and Electron



Recently we released RingCentral Phone for Linux (Community), which is built with RingCentral Embeddable and the Electron library. In this article, we will show you how to turn a web app into a desktop app with Electron, and also how to integrate RingCentral Embeddable into a desktop app.

RingCentral Embeddable is a web widget that we provide for developers to integrate the RingCentral service into their web apps. But can we also use RingCentral Embeddable to build an integration for a desktop app? The solution is the Electron library. Electron helps developers build cross-platform desktop apps with web technologies like HTML and JavaScript.

Prerequisites

  1. RingCentral Embeddable
  2. Electron.js
  3. NPM or Yarn (we assume you have Node.js > 8 installed)

Create Your First Electron Project

To create your first Electron project, go here.

Init project:
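The original commands aren't reproduced here, but initializing the project could look something like this (a sketch assuming Yarn; npm works just as well, and the folder name is a placeholder):

$ mkdir ringcentral-phone && cd ringcentral-phone
$ yarn init -y
$ yarn add electron --dev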

Add start script in ‘package.json’:
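For example, a minimal 'package.json' with the start script might look like this (the package name is a placeholder):

    {
      "name": "ringcentral-phone",
      "main": "main.js",
      "scripts": {
        "start": "electron ./main.js"
      }
    }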

Create blank files: main.js and app.html

We will have these three files:


Create your first BrowserWindow:
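A minimal 'main.js' for this step might look like the following sketch (it assumes a recent Electron version; window size and options are placeholders):

    // main.js - runs in Electron's main process
    const { app, BrowserWindow } = require('electron');

    function createWindow() {
      const win = new BrowserWindow({
        width: 300,
        height: 500,
        webPreferences: {
          // needed later so app.html can use the <webview> tag
          webviewTag: true,
        },
      });
      win.loadFile('app.html');
    }

    app.whenReady().then(createWindow);

    app.on('window-all-closed', () => {
      // keep the app running on macOS until the user quits explicitly
      if (process.platform !== 'darwin') {
        app.quit();
      }
    });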

Start your app:

$ yarn start

Now you have your first app running on the desktop.

Before we continue, we need to understand the main and renderer processes in Electron. The main.js file, which manages the desktop app, runs in the main process, and each BrowserWindow instance runs in its own renderer process. The main process is allowed to access native APIs, such as the file system. HTML files and their JavaScript run in a renderer process, so they can only access the web APIs available in the Chrome browser.


Load RingCentral Embeddable

RingCentral Embeddable is a web widget hosted on our GitHub Pages site or our CDN. You will load the Embeddable widget with the 'webview' component. You could also load it in a 'BrowserWindow' directly, but with the 'webview' component we can have a more customized UI.

In app.html:
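The original markup isn't reproduced here, but a sketch of 'app.html' could look like this; the widget URL, the CSP directives, and the partition name are assumptions you should adjust for your own app:

    <!DOCTYPE html>
    <html>
      <head>
        <!-- Content-Security-Policy: adjust the allowed sources for your app -->
        <meta http-equiv="Content-Security-Policy"
              content="default-src 'self' https:; script-src 'self'">
      </head>
      <body>
        <!-- partition gives the widget persistent storage -->
        <webview id="rc-widget"
                 src="https://ringcentral.github.io/ringcentral-embeddable/app.html"
                 partition="persist:rcstorage"
                 style="width: 300px; height: 500px;">
        </webview>
      </body>
    </html>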

We need to set the "Content-Security-Policy" so the page can load remote resources securely. We also need the partition attribute on the webview component; it gives us persistent data storage for the web page.

Use Preload to Run Your JS With RingCentral Embeddable

After loading the RingCentral Embeddable, we need to interact with it. The RingCentral Embeddable is designed for a web app, so how can we make it interact with our desktop app?

Electron provides a preload API. We can use this API to insert our customized JS into the web page when it loads, which lets us hook into the web page to interact with our main process.

Add the preload option into the webview component:
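For example (a sketch; depending on the Electron version, the preload value may need to be an absolute file:// path):

    <webview id="rc-widget"
             src="https://ringcentral.github.io/ringcentral-embeddable/app.html"
             partition="persist:rcstorage"
             preload="./preload.js">
    </webview>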

Create preload.js in the project root folder:
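A sketch of what 'preload.js' might contain; the message type checked here ('rc-call-ring-notify') and the IPC channel name are assumptions, so verify the actual message shapes against the Embeddable documentation linked below:

    // preload.js - injected into the widget's page before it loads
    const { ipcRenderer } = require('electron');

    // Relay messages posted by RingCentral Embeddable to the main process.
    window.addEventListener('message', (event) => {
      const data = event.data;
      if (data && data.type === 'rc-call-ring-notify') {
        // Forward the incoming-call event on a channel of our own choosing.
        ipcRenderer.send('rc-incoming-call', data.call);
      }
    });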

Preload.js relays messages between RingCentral Embeddable and the main process. For example, when RingCentral Embeddable gets an incoming call, we can send a message to the main process so that the main process can make a minimized window show up at the top.

For more information on the API and events from the RingCentral Embeddable go here.

Package Your App With Electron Builder

After we finish developing the Electron app, we need to package it so users can install it easily: a dmg for macOS, a deb for Debian and Ubuntu, an exe for Windows.

We can use electron-builder to package the app; it supports packaging the app for macOS, Windows, and Linux.

$ yarn add electron-builder --dev

Create ‘electron-builder.yml‘:
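A sketch of what the file might contain; the appId, productName, and publish settings are placeholders:

    # electron-builder.yml
    appId: com.example.ringcentral-phone   # placeholder id
    productName: RingCentral Phone Community
    files:
      - main.js
      - preload.js
      - app.html
      - package.json
    mac:
      target: dmg
    linux:
      target: deb
    win:
      target: nsis
    # publish:
    #   provider: github
    #   owner: your-github-user
    #   repo: your-repo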

Add scripts command into ‘package.json’:
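For example, adding a 'package-all' script alongside the existing start script (a sketch; you can also target individual platforms):

    {
      "scripts": {
        "start": "electron ./main.js",
        "package-all": "electron-builder --mac --win --linux"
      }
    }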

To package for all platforms:

$ yarn package-all

If you add GitHub information to the publish section of 'electron-builder.yml', it can help you create a release tag and upload the install files to GitHub releases. This allows users to download them from your GitHub repo's releases.

More Information

You can get the full source code here. We have also packaged this app for Linux users; you can go here to download and try it out.

Hopefully, this article was helpful. Please let us know what you think by leaving your questions and comments below!




State Transitions With Web Components


Web user interfaces have become much more complex than they were a couple of years ago. Complex SPAs that combine multiple vendor widgets with open-source widgets are taxing developers and taking a toll on their productivity. There is a need for a development paradigm that frees developers from this grinding exercise and makes developing web applications a fun experience. Enter Web Components. Web Components is a W3C specification that helps you develop web UI applications in a modular way.

In this article, I explore the use of web components for a non-trivial web UI application: a To-Do web application. Additionally, I'll use a state machine so the resulting application is more robust than it would be otherwise. In a previous article, I presented a state-machine-based web UI development approach using vanilla JavaScript. It was shown that the resulting application was very modular. The modular nature of the approach naturally leads us to use web components.

State Transitions

The approach proposed here suggests we first write a set of state transitions for our UI application. So, for the To-Do application, which has a screen mock-up like:

[Image: Example to-do app mock-up]

I assume that the following are the required state transitions.

Initial State | Pre-Event | Processor | Post-Event | Final State
unknownState | onload | processOnload() | onloadSuccess | readyForAdd
readyForAdd | addTodo | processAddTodo() | addTodoSuccessNoneSelected | readyForAddSelect
readyForAddSelect | addTodo | processAddTodo() | addTodoSuccessNoneSelected | readyForAddSelect
readyForAddSelect | changeTodo | processChangeTodo() | changeTodoSuccessSomeSelected | readyForAddSelectUnselectDelete
readyForAddSelect | changeTodo | processChangeTodo() | changeTodoSuccessAllSelected | readyForAddUnselectDelete
readyForAddUnselectDelete | addTodo | processAddTodo() | addTodoSuccessSomeSelected | readyForAddSelectUnselectDelete
readyForAddUnselectDelete | changeTodo | processChangeTodo() | changeTodoSuccessNoneSelected | readyForAddSelect
readyForAddUnselectDelete | changeTodo | processChangeTodo() | changeTodoSuccessSomeSelected | readyForAddSelectUnselectDelete
readyForAddUnselectDelete | deleteTodo | processDeleteTodo() | deleteTodoSuccessAllDeleted | readyForAdd
readyForAddSelectUnselectDelete | addTodo | processAddTodo() | addTodoSuccessSomeSelected | readyForAddUnselectDelete
readyForAddSelectUnselectDelete | changeTodo | processChangeTodo() | changeTodoSuccessAllSelected | readyForAddUnselectDelete
readyForAddSelectUnselectDelete | changeTodo | processChangeTodo() | changeTodoSuccessSomeSelected | readyForAddSelectUnselectDelete
readyForAddSelectUnselectDelete | changeTodo | processChangeTodo() | changeTodoSuccessNoneSelected | readyForAddSelect
readyForAddSelectUnselectDelete | changeTodo | processChangeTodo() | changeTodoSuccessSomeSelected | readyForAddSelectUnselectDelete
readyForAddSelectUnselectDelete | deleteTodo | processDeleteTodo() | deleteTodoSuccessNoneSelected | readyForAddSelect

Note that I have identified four application states: readyForAdd, readyForAddSelect, readyForAddUnselectDelete, and readyForAddSelectUnselectDelete. The state readyForAdd, for instance, implies only add events can be emitted from this state, while the readyForAddSelect state can only emit add and select events, etc.

The steps for the UI development include:

  1. Set up an HTML layout file for the UI application identifying the locations for the custom elements to be backed by the web components.
  2. Add script tags to the HTML file to reference the web component files.
  3. Configure the states and events identified above in the application-specific JavaScript.
  4. Write code in the processor() functions to communicate with the web components via the corresponding custom elements.
  5. Add the state machine controller code.

1. To-Do application Web UI Layout Template

The web UI template corresponding to the above mock-up that we will be using for our To-Do app is:
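The original template isn't reproduced here, but a sketch of the layout could look like the following; the ids and the empty data-request/data-response attributes are assumptions for illustration:

    <!-- UI layout for the To-Do app -->
    <body>
      <h1>todos</h1>
      <input-comp id="todoInput" data-request="" data-response=""></input-comp>
      <checkbox-group-comp id="todoList" data-request="" data-response=""></checkbox-group-comp>
      <button-comp id="deleteButton" data-request="" data-response=""></button-comp>

      <!-- web component files and the application script -->
      <script src="input-comp.js"></script>
      <script src="checkbox-group-comp.js"></script>
      <script src="button-comp.js"></script>
      <script src="todoApp.js"></script>
    </body>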

This template is not to be confused with the HTML Template feature of the Web Components specification. The above is just a UI layout for our application. Note that I have used three custom element tags: input-comp, checkbox-group-comp, and button-comp. The attributes used in these tags follow the APIs published by the corresponding web components.

For our demo purposes, I am using the data-request and data-response pattern, so we can send data (JSON) to the web component via the data-request attribute and get a JSON response from the data-response attribute. Also note the script tags for the JavaScript files: input-comp.js, checkbox-group-comp.js, and button-comp.js.

2. Web Components

The sources for the three web components are:

input-comp.js
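The original listing isn't shown here; a minimal sketch of such a component, following the data-request/data-response pattern described in this article, might look like this (the internal markup and response shape are assumptions):

    // input-comp.js
    class InputComp extends HTMLElement {
      static get observedAttributes() {
        return ['data-request'];
      }

      connectedCallback() {
        // No Shadow DOM, to keep the example in line with the article.
        this.innerHTML = '<input type="text" placeholder="What needs to be done?">';
      }

      attributeChangedCallback(name, oldValue, newValue) {
        if (name === 'data-request' && newValue) {
          const request = JSON.parse(newValue);
          // ...act on the request (e.g. show, hide, or clear the input)...
          this.setAttribute('data-response', JSON.stringify({ status: 'done' }));
        }
      }
    }

    customElements.define('input-comp', InputComp);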

Note that for brevity of discussion, I am not using the Shadow DOM feature of web components.

checkbox-group-comp.js

The CheckboxGroupComp class handles three actions — create, delete, and update. After the action is performed, it writes back the items count and selected items of the checkbox group to the data-response attribute.

button-comp.js

3. Events and States Configuration

The events and states identified in the table above can be configured using JavaScript const objects.

The states are configured in todoApp.js like:

The appStates object sets the visibility status of the various components. So, it acts as a “View” in the MVC pattern.

The events are configured in todoApp.js like:

Note that I have placed the nextState() function in the appEvents object instead of the appStates object, since only an event knows what the next state should be. Also note that the process() functions are used for the pre-events, and the nextState() functions are used for the post-events. This, again, follows directly from the state transitions table.

4. Processor Functions

It is interesting to note that the process() functions communicate with the custom element tags by posting their JSON data to the data-request attribute and collecting a JSON response from the data-response attribute. The processor functions read the JSON data stored in the data-response attributes using a utility object called appData.

The source for appData (defined as a const in todoApp.js):
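The original listing isn't shown; a sketch of such a utility might look like this (the method name is an assumption):

    // appData: reads a component's JSON response from its data-response attribute
    const appData = {
      getResponse(elementId) {
        const element = document.getElementById(elementId);
        return JSON.parse(element.getAttribute('data-response') || '{}');
      }
    };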

The above appData object acts as a “Model” for the MVC pattern.

5. Controller

The engine of our state machine consists of these simple controller functions (in todoApp.js):

The handleAppEvent() function listens for and receives all the HTML DOM events (pre-events), including those raised by the web components. Note that even if Shadow DOM is used, we can still receive the events at the custom element (see Eric Bidelman). The callback function, handlePostEvent(), handles all post-events.

Note that, for brevity of discussion, the CustomEvent created above is used merely as a data transfer object and routed to the appropriate process() function via the stateTransitionsManager() function. If these custom events were dispatched instead, then all the process() functions would need to be updated to listen for them.

How One Transition Works

When the user performs an action on the screen, the following steps are triggered:

  1. HTML DOM event (pre-event) is captured in the handleAppEvent() function.
  2. This event is wrapped in a custom event and sent to the stateTransitionsManager().
  3. The stateTransitionsManager() uses the appEvents configuration and calls the required processor function passing it a callback function.
  4. The processor function communicates with the required web components and determines and creates a custom event (post-event) and passes it to the callback function, handlePostEvent().
  5. The handlePostEvent() function uses the appEvents configuration and calls the nextState() function.
  6. The nextState() function uses the appStates configuration to set the visibility status of a web component.
  7. The screen is now ready to receive the next user action.
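A sketch of how these controller pieces might fit together, with the bodies inferred from the steps above; the exact shapes of appEvents and appStates, and the toAppEventName() helper, are assumptions:

    let currentState = 'unknownState';

    // Steps 1-2: capture the DOM pre-event and wrap it in a CustomEvent.
    function handleAppEvent(domEvent) {
      // Map the DOM event to an application event name (e.g. 'addTodo')
      // based on which component raised it (hypothetical helper).
      const eventName = toAppEventName(domEvent);
      const preEvent = new CustomEvent(eventName, { detail: { source: domEvent.target } });
      stateTransitionsManager(preEvent);
    }

    // Step 3: route the pre-event to its processor, passing the callback.
    function stateTransitionsManager(preEvent) {
      appEvents[preEvent.type].process(preEvent, handlePostEvent);
    }

    // Steps 5-6: the post-event determines the next state and its visibility rules.
    function handlePostEvent(postEvent) {
      currentState = appEvents[postEvent.type].nextState();
      appStates[currentState].show();
    }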

Demo

A demo of the application is available at TodoApp. As the user walks through each transition listed in the table above, they can also view the result of each step on the same page as a log message. 

Download

All the source for this article is available for download in GitHub.

Conclusions

A new approach to developing web UI applications using the state machine and MVC patterns has been proposed. The approach is demonstrated by developing the TodoMVC application. The use of Web Components is shown to further enhance the modular nature of the resulting application.

A uniform mechanism for communicating with the web components via the data-request and data-response custom element attributes is found to enable state transitions as per the design. The state transitions table is shown to serve as a requirements aid, a development aid, and a test case aid.

Related Works:

Readers interested in exploring the use of web components for the To-Do app can check out Polymer TodoMVC. Readers interested in using the Shadow DOM feature of the web components specification can check out this video: Web Components: It's about Time.




Top Selenium C# Automation Testing Frameworks For 2020


With the ever-increasing number of programming languages and frameworks, it's quite easy to get lost and confused in this huge sea of frameworks. Popular languages like C# provide us with a lot of frameworks, and it's quite essential to know which particular framework is best suited for your needs.

Choosing the best-suited Selenium C# framework can be a difficult task, as the decision has to be made based on the project requirements, in-house expertise, and deadlines. Otherwise, you might get lost in a wave of questions: should I look for a framework built for test-driven development, does it support parallel testing, and so on.

Everyone has different requirements and thus needs a different solution. In this article, we explore the top Selenium automation testing frameworks in C# to help you find the perfect match for your automated browser testing requirements.

Note — Visual Studio 2019 (Community Edition) is the IDE used for development, and any reference to the installation of test frameworks concerns VS 2019.

NUnit


NUnit is an open-source Selenium C# framework that is ported from JUnit. The latest version of NUnit is NUnit 3 which has a host of new features and supports a wide range of .NET platforms. This Selenium C# framework is widely preferred by C# developers for automated browser testing.

The NUnit framework is user-extensible, follows a parameterized (or annotation-based) syntax, and is primarily used for Test-Driven Development (TDD) with C#.

The supported platforms are .NET Framework 3.5+, .NET Standard 1.4+, and .NET Core.

How to Install the NUnit Framework

The NUnit test framework can be downloaded from NuGet.org, and at the time of writing this article, it had been downloaded more than 126 million times.

To install this Selenium C# framework, execute the following command on the Package Manager (PM) console of VS 2019.

PM> Install-Package NUnit -Version 3.12.0

You also have the option of installing the NUnit framework by using the GUI option of Package Manager (PM). You can refer to our detailed article on NUnit for Selenium automation testing for more information about the installation.
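With the package in place, a basic NUnit-based Selenium test might look like the following sketch (it also assumes the Selenium.WebDriver and ChromeDriver packages are installed; the URL and assertion are for illustration only):

    using NUnit.Framework;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Chrome;

    [TestFixture]
    public class SearchPageTests
    {
        private IWebDriver driver;

        // Runs before every test in this fixture.
        [SetUp]
        public void StartBrowser()
        {
            driver = new ChromeDriver();
        }

        [Test]
        public void PageTitle_Should_Contain_Google()
        {
            driver.Navigate().GoToUrl("https://www.google.com");
            Assert.That(driver.Title, Does.Contain("Google"));
        }

        // Runs after every test in this fixture.
        [TearDown]
        public void StopBrowser()
        {
            driver.Quit();
        }
    }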

What Makes NUnit a Popular Selenium C# Framework?

NUnit is a preferred Selenium C# framework that is used for automated browser testing. Below are some of the advantages of using the NUnit framework:

  • It is a robust and user-extensible test framework.
  • Attributes are an important part of the NUnit framework and they are instrumental in speeding up the execution of the test cases.
  • The framework is well-suited if you are planning to use Test-Driven Development (TDD) for the test activity.
  • NUnit is open-source and the project is witnessing active participation on GitHub.
  • Along with automated browser testing, the NUnit framework can also be used for unit testing and acceptance testing using the Selenium framework.
  • Support for parallel test execution on a remote Selenium grid reduces the overall test execution time and accelerates the process of automated browser testing.
  • Good reporting tools are available that can be used with the NUnit framework.

Areas Where NUnit Framework Can Do Better!

There are way too many attributes in the NUnit framework and this often seems confusing. Some of the areas where the NUnit framework could have been better are below:

  • The class that contains the tests must be placed under the [TestFixture] attribute. Having tests confined under one attribute does not look like a robust approach.
  • Instead of having the implementation of test cases/test suites under a particular class, there should have been intelligence built into the test framework so that it could locate the test methods.
  • All the tests execute in the same fixture/class whereas the creation of a new instance of the test class for every test is a much better approach.
  • The possibility of one test causing other tests to fail is more when using the NUnit framework as all the tests are executed in the same class.

Is NUnit the Best Selenium C# Framework for You?

You should consider the NUnit framework for tasks related to Selenium automation testing and automated browser testing as it is a test framework that has existed for a long time.

Though this Selenium C# framework does not provide much isolation of tests, you could use it since it supports parallel test execution on a local or remote Selenium Grid.

XUnit


xUnit.net is another popular test framework in C# that is used for Selenium automation testing. The 'x' in xUnit stands for the programming language for which the test framework is built, i.e. JUnit for Java, NUnit for C#, etc. It is a Selenium C# framework built by the creators of the NUnit framework.

Instead of planning for incremental changes in the NUnit framework, the creators decided to build a new test framework that is more robust and is built around the community that uses it. Below are some of the reasons why xUnit.net was built:

[Image: the reasons why xUnit.net was built]

The latest version of the xUnit framework is 2.4.1. xUnit is more robust and extensible when compared to the NUnit framework.

How to Install the XUnit Framework

For installing the xUnit framework and other dependent packages, you have to execute the 'Install-Package' command on the Package Manager console.
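The commands would look something like this (versions omitted; the package names come from the list below):

PM> Install-Package xunit
PM> Install-Package xunit.runner.visualstudio
PM> Install-Package Microsoft.NET.Test.Sdk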

You also have the option of installing the packages using the Package Manager GUI. To open the NuGet Package Manager, go to ‘Tools’ -> ‘NuGet Package Manager’ -> ‘Manage NuGet Packages for Solution’. Search for the following packages and install each of them:

  • xUnit
  • xUnit.runner.visualstudio
  • Microsoft.NET.Test.Sdk
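Once these packages are installed, a basic xUnit test might look like the following sketch (it also assumes Selenium.WebDriver and ChromeDriver; note how the constructor and IDisposable replace [SetUp]/[TearDown]):

    using System;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Chrome;
    using Xunit;

    public class SearchPageTests : IDisposable
    {
        private readonly IWebDriver driver;

        // xUnit creates a new instance of the test class for every test,
        // so the constructor acts as per-test setup.
        public SearchPageTests()
        {
            driver = new ChromeDriver();
        }

        [Fact]
        public void PageTitle_Should_Contain_Google()
        {
            driver.Navigate().GoToUrl("https://www.google.com");
            Assert.Contains("Google", driver.Title);
        }

        // Dispose acts as per-test teardown.
        public void Dispose()
        {
            driver.Quit();
        }
    }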

What Makes XUnit a Popular Selenium C# Framework?

The xUnit framework is built around the community. Hence, the majority of the shortcomings of the NUnit framework are not carried forward while designing the xUnit framework. Below are some of the reasons why this Selenium C# framework is gaining popularity:

  • The framework follows a unique style of testing. Tags like [Test] and [TestFixture], which were an integral part of the NUnit framework, are not included in the xUnit framework.
  • Intelligence is built into the framework, as test cases and test suites are not restricted to any particular attribute. The framework is intelligent enough to identify the test methods, irrespective of the location of the methods.
  • xUnit provides better test isolation as a new test class is instantiated for each test case. Once the test case execution is complete, the test class is discarded. This reduces the dependency between different test cases and also minimizes/nullifies the possibility of one test causing the other test case to fail!
  • It is more user-extensible when compared to the other popular C# frameworks used for Selenium automation testing.
  • [Setup] and [TearDown] annotations that normally included the initialization and de-initialization related implementation for the test-cases are not a part of the xUnit framework. This avoids code duplication and makes the code flow more understandable.
  • The framework encourages developers & testers to come up with well-designed tests as it makes use of a constructor of the test class for initialization and IDisposable interface for de-initialization.
  • This Selenium C# framework has a lesser number of annotations when compared to other C# frameworks. Also, more modern tags like [Fact] and [Theory] ease the process of test case creation, whether the test cases are parameterized or non-parameterized.
  • Assert.Throws is used instead of [ExpectedException], which is much better for handling generic asserts.
  • Parallel test execution using a Selenium Grid which is an essential part of automated browser testing can be achieved at the thread level in xUnit.
  • It can be used for data-driven testing.

Areas Where xUnit Framework Can Do Better!

The xUnit framework has been well-received by the developer & test community. As xUnit is a Selenium C# framework created using intuitive terminology, it is preferred by the folks looking for test frameworks that enable automated browser testing.

Documentation is the only area where the xUnit framework needs some improvement. It is good to see that it is improving over time.

Is xUnit the Best Selenium C# Framework for You?

You should choose the xUnit framework for your project if you are looking at Selenium C# frameworks that are less confusing (with fewer attributes), well-designed, and more user-extensible.

There would be a learning curve involved in porting the test code that makes use of the NUnit/MSTest framework to xUnit as it would require a detailed understanding of the framework. The developers of xUnit have a reputation for commitment & evangelism and this is evident from the design of the xUnit framework which keeps the community at the forefront.

If you are looking for a modern C# test framework for automated browser testing, you should give xUnit a spin!

Golem

Golem

Golem is an open-source, object-oriented C# test framework that is available on GitHub as ProtoTest.Golem. Golem was used as an internal tool at ProtoTest, and the good part is that the tool had already been used by several of ProtoTest's clients before it was made open-source.

As mentioned by the author, Brian Kitchener, Golem is an all-in-one test automation framework for anyone working in the .NET environment. The tests in Golem are written in Visual Studio. Initially, MbUnit was used for test case development and Gallio for execution, but version 2.2.1 of Golem internally makes use of the NUnit framework.

How to Install the Golem Framework

The latest version of Golem is 2.2.1. Golem can be installed by executing the package manager command ‘Install-Package <package-name>’ on the PM console.

PM> Install-Package Golem -Version 2.2.1 

Alternatively, you also have the option of installing this C# test framework using the Package Manager GUI option in VS 2019. The NuGet package for Golem can be downloaded from here.

What Makes Golem a Popular Selenium C# Framework?

The biggest upside of the Golem framework is that it was extensively used internally at ProtoTest before the code was open-sourced. Below are some of the positives of the Golem test framework:

  • The Golem framework is open-source, simple, well-designed, and uses object-oriented APIs.
  • It supports several test automation tools like Selenium WebDriver, Appium, UIAutomation from Microsoft, and can also be used to test REST services & validate HTTP traffic.
  • The framework is much more than a Selenium C# framework, as it supports different test automation tools.
  • Tests are written using the industry-standard 'page object' design pattern.
  • It simplifies the process of building clean, robust, reusable, and scalable automation.
  • It has a robust reporting and logging mechanism which can be very useful for isolation of issues.
  • It supports data-driven testing as well as parallel test execution using Selenium Grid.

Areas Where the Golem Framework Can Do Better!

As Golem supports multiple test automation tools, it can be used for test automation, including automated browser testing and Selenium automation testing. Though there are several advantages of using Golem for test automation, there are a number of shortcomings:

  • The last update to Golem was on 11/3/2016, due to which the framework has not been able to gain much traction.
  • To date, version 2.2.1 of Golem has been downloaded only 665 times. This means that Golem's latest version is not in use by many developers/enterprises.

Is Golem the Best Selenium C# Framework for You?

The biggest advantage of the Golem framework is that it supports several test automation tools, including Selenium WebDriver. Hence, it can be used for Selenium automation testing.

Though the framework was built keeping simplicity and reusability in mind, minimal framework updates can dampen the growth of Golem. Since there has not been much activity on Golem’s Support Group, test automation development using Golem can come to a standstill if you encounter issues when using the framework.

You should choose the Golem framework over other test automation tools only if the goal is to look for test automation frameworks that provide support for multiple automation tools.

Bumblebee


Bumblebee is a Selenium browser test automation framework that can be used for the standardized creation of page objects. It can also be used for dynamic web pages. Bumblebee is a .NET layer built on top of the Selenium browser automation framework. The latest version of Bumblebee is Bumblebee 2.0, i.e. 2.1.2.

Bumblebee was built so that each page can be broken down into multiple Blocks and Elements. Bumblebee provides classes to model your website into page objects that can be consumed by the automation code. If the page objects are well designed, writing the automation code in Bumblebee should be an effortless task.

How to Install Bumblebee Framework

To install Bumblebee on Visual Studio, create a new project of type ‘Class Library’. Like other Selenium C# frameworks, Bumblebee should also be installed using the Package Manager in VS 2019. Execute the following command on the Package Manager (PM) console:

 PM> Install-Package Bumblebee.Automation 

Alternatively, Bumblebee can be installed by downloading Bumblebee's NuGet package. Go to 'Tools' -> 'NuGet Package Manager' -> 'Manage NuGet Packages for Solution'. Search for 'Bumblebee.Automation' and click Install to install the Bumblebee framework.

What Makes Bumblebee a Popular Selenium C# Framework?

The best part about the Bumblebee framework is the design. Bumblebee standardizes the design of page objects and makes the automation scripting easier. Below are some of the core reasons why you should use Bumblebee for activities related to automated browser testing:

  • Like other page object models, Bumblebee divides testing into two parts: page objects that model the subject of the testing, and automation code that uses the page objects to tell the browser what to do.
  • Page objects can be developed at a quicker pace by modeling which parts of the page can be interacted with.
  • The automation is driven by IntelliSense.
  • There is an intense focus on usability as it makes use of standardized UI interfaces.
  • This Selenium C# framework is flexible, supports parallelization, and also provides test framework independence.
  • As each page is broken down into Blocks and Elements, writing test automation code is a much simpler task with Bumblebee.
  • Each browser session is instantiated with a driver environment, specifying how to create the driver for that particular session. Thus you can have multiple environments, such as a local environment (running on your local machine) and a grid environment (running on some remote selenium grid). Hence, Bumblebee can be used for automated browser testing on local as well as remote Selenium grid.
  • There is extensive documentation on this Selenium C# framework that can aid developers with automated browser testing.
  • Test cases designed using Bumblebee are extremely flexible and take some of the burden off of designing blocks for complicated sites.
  • There is a separate library for adding Kendo element support for the Bumblebee test framework.

Areas Where the Bumblebee Framework Can Do Better!

Bumblebee has the advantages of a well-designed test framework that makes excellent use of the Page Object Model (POM). This eases the job of test creation and test maintenance.

Though the Bumblebee framework is updated regularly, the documentation seems a bit outdated as it does not contain examples with the xUnit framework which is gaining traction for Selenium automation testing. Also, there are no code samples/snippets demonstrating usage of Bumblebee for parallel test execution, using Selenium Grid,  which is a critical aspect in automated browser testing. 

Is Bumblebee the Best Selenium C# Framework for You?

The Bumblebee framework has all the right things in place as far as automated browser testing is concerned. The learning curve for getting used to the Bumblebee framework will not be much as it is a .NET layer on top of the Selenium browser automation framework.

If your team is looking for a Selenium C# framework that is built on good design principles, follows Page Object Modelling (POM), supports parallel test execution, and is test framework independent; then you should give Bumblebee a try.

Atata


Atata framework is an open-source C#/.NET web UI test automation framework that is built on Selenium WebDriver. It uses the Page Object Pattern for development. This framework supports .NET Framework 4.0+ and .NET Core/Standard 2.0+. The latest version of Atata is 1.4.0. The project is hosted on GitHub under the Apache License 2.0.

This Selenium C# framework consists of the following concepts:

  • Components (controls and page objects)
  • Attributes of the control search
  • Settings attributes
  • Triggers
  • Verification attributes and methods

There are two ways in which Atata based project can be created in VS 2019:

  • Project templates — The supported templates are Atata NUnit Test Project (.NET Framework), Atata NUnit Test Project (.NET Core), Atata Components Library (.NET Framework), and Atata Components Library (.NET Standard).
  • Item templates — The supported item templates are Atata Page Object, Atata Base Page Object, Atata Control, Atata Trigger, Atata NUnit Test Fixture, and Atata NUnit Base Test Fixture. 

How to Install Atata Framework

Once a new Atata project is created, the Atata package has to be installed. Along with the base Atata package, dependent packages of Atata are automatically installed. For installing the Atata package, the following commands have to be executed on the Package Manager Console:
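The command set might look like this (a sketch based on the packages described below; install only the add-on packages you actually need):

PM> Install-Package Atata
PM> Install-Package Atata.Bootstrap
PM> Install-Package Atata.Configuration.Json
PM> Install-Package Atata.KendoUI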

The same packages can be downloaded from NuGet Gallery and installed using the Package Manager GUI in VS 2019. 

Atata.Bootstrap package is the C#/.NET package containing a set of Atata components for automated web testing/automated browser testing integration with the Bootstrap Framework. 

Atata.Configuration.Json is a C#/.NET package for Atata configuration through JSON files. 

Atata.KendoUI is a C#/.NET package containing a set of Atata components for automated web testing integration with the Kendo UI HTML Framework.

What Makes Atata a Popular Selenium C# Framework?

Atata is a relatively new Selenium C# framework but the overall design based on Page Object Model (POM) and flexibility to use with different .NET engines are the points that make Atata a powerful framework.

Below are some of the key factors that make Atata a framework to watch out for in 2020:

  • Atata is based on Selenium WebDriver and preserves all of its features. As the Selenium framework is widely used for automated browser testing/cross-browser testing, the learning curve involved in picking up Atata will not be steep.
  • As Atata is based on POM, the creation, and maintenance of test cases/test suites is not difficult.
  • Atata can be used with major .NET test frameworks i.e. NUnit, xUnit, SpecFlow, etc.
  • Atata works well with Continuous Integration (CI) systems like Jenkins, Azure, DevOps, TeamCity, etc.
  • It has built-in reporting, logging, and screenshot-capturing functionality.
  • It has more powerful assertion methods and triggers that are useful for component and data verification.
  • There is a powerful set of components (inputs, tables, lists, etc.) built into the Atata framework. Though these components might not be useful for Selenium automation testing, they can be used for data-driven testing.
  • There is a feature for multi-browser configurations via fixture arguments.
  • This Selenium C# framework is user-extensible. Atata.Bootstrap and Atata.KendoUI packages have a set of ready to use components.

The Atata framework has the right set of powerful features that are useful for automated browser testing. The framework is updated regularly with v1.4.1 released in Q4, 2019.

Areas Where the Atata Framework Can Do Better!

Though Atata is not a new test framework, the development of the framework picked up pace in 2019. We have to wait for more adoption before commenting on the improvements to the framework. On the whole, it looks like a promising automated browser testing framework as it can be used with popular C# frameworks.

The team behind the development of the Atata framework has to ensure that the community is updated about the development of the framework else it might lose traction!

Is Atata the Best Selenium C# Framework for You?

The biggest plus point of using the Atata framework is that it can be used with popular test frameworks like NUnit, xUnit, SpecFlow, etc. Since this Selenium C# framework can be used with SpecFlow, it is instrumental for Behavior-Driven Development (BDD).

It is easy to get started with the Atata framework and the examples section on the Atata site provides several examples demonstrating different test scenarios. Though the first version of Atata was released in 2016, the development gained pace in 2019 which makes it a Selenium C# framework to watch out for in 2020.

Gauge


Gauge is another popular test automation framework that is used to create readable and maintainable tests using C#. Gauge is open-source, and the framework was created by ThoughtWorks Inc., the developers/creators behind Selenium. The latest version of Gauge for C# is 0.10.6.

It also supports other programming languages including GoLang. Gauge is a preferred Selenium C# framework used for the development of BDD (Behavior Driven Development) and ATDD (Acceptance Test Driven Development). 

Spec files are created using the Markdown language (rather than Gherkin feature files). As tests are written in Markdown, it reduces the effort for test code creation and maintenance.
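To give a flavor of the format, a spec file and its C# step implementations might look like the following sketch; the spec text, step names, and class are made up, and the [Step] attribute usage follows the Gauge C# runner's documented pattern, so double-check it against the Gauge docs:

    # Search specification
    ## Search returns results
    * Navigate to "https://www.example.com"
    * Search for "selenium"

    using Gauge.CSharp.Lib.Attribute;

    public class SearchSteps
    {
        [Step("Navigate to <url>")]
        public void NavigateTo(string url)
        {
            // drive the browser with Selenium WebDriver here
        }

        [Step("Search for <term>")]
        public void SearchFor(string term)
        {
            // type the term into the search box and submit
        }
    }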

How to Install Gauge Framework

To install this Selenium C# framework, you have to visit the Installing Gauge page on the official website of Gauge. Once you are on the page, select the target OS (operating system), programming language, and IDE/editor.

In our case, we installed the Gauge framework on Windows 10 with the target language as C# and IDE as Visual Studio (Download link).


Once the installation of the Gauge framework is complete, you need to install the Gauge plugin for Visual Studio. As per the notice, the Gauge team will officially end support for the Gauge plugin for Visual Studio in October 2020.

What Makes Gauge a Popular Selenium C# Framework?

Gauge is used for automated browser testing for several reasons; a few of them are listed below:

  • Availability of several templates in the programming language of your choice helps you kick-start the test automation project
  • Command-line tools that make integration with CI/CD tools much easier
  • Availability of different language runners – C# runner, C# runner (.NET Core), Java runner, Ruby runner, JavaScript runner, Python runner, and GoLang runner (Link). This makes it usable across a wide range of programming languages
  • Flexibility to create your plugin using Gauge plugin API
  • Integration with cloud-based cross-browser testing tools like LambdaTest
  • Creation of scalable test cases/test suites through parallel test execution
  • Focus on reporting in different reporting formats (HTML, XML, Flash, etc.) that help in locating and isolating issues in test cases
  • Integration with build management tools like Maven, Gradle, etc.
  • Well suited for BDD and ATDD, with specs written in readable Markdown
  • Excellent documentation to help developers get started with Gauge framework

As the Gauge framework supports parallel test execution, it is widely preferred for automated browser testing on a remote Selenium Grid. 

Areas Where the Gauge Framework Can Do Better!

There are frequent updates to the Gauge framework which is very encouraging to the developer community that uses Gauge for automated browser testing.

Many C# developers use Visual Studio as the default IDE for development and testing. Deprecating the Visual Studio Gauge plugin in favor of the Gauge Visual Studio Code plugin could cause problems for developers who use VS 2019 (and not Visual Studio Code). Below is a snapshot of the announcement from the GitHub repository of Gauge:

[Image: Deprecation notice from the Gauge GitHub repository]

Is Gauge the Best Selenium C# Framework for You?

You should choose Gauge over other test automation frameworks if you are looking for a Selenium framework that can be used with different programming languages. If your team is well-versed with Gherkin-style readable specs, you can choose the Gauge framework, as it is used for BDD, the framework is updated regularly, and it has a growing community.

MSTest


MSTest, also referred to as the Visual Studio Unit Testing Framework, is the default test framework shipped along with Visual Studio. The latest version of MSTest is MSTest V2. The framework identifies tests via annotations/attributes under which the implementation is present. The primary responsibility of annotations in MSTest is to inform the underlying framework about how to interpret the source code.

MSTest V2 is open-source and has much more powerful features than its predecessor. Project migration from MSTest V1 to MSTest V2 does not require much effort as the entire porting process is seamless. MSTest V2 is hosted on GitHub and the repositories are available at Microsoft/testfx: MSTest V2 framework and adapter and Microsoft/testfx-docs: Docs for MSTest V2 test framework.

As MSTest V2 is a community-focused Selenium C# framework, it is gaining acceptance and popularity for tests related to automated browser testing.

How to Install the MSTest Framework

Libraries/packages related to this Selenium C# framework can be installed using the Package Manager Console in Visual Studio. In VS 2019, go to 'Tools' -> 'NuGet Package Manager' -> 'Package Manager Console' and execute the following on the PM Console:
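The commands would look something like this (versions omitted; check NuGet.org for the latest):

PM> Install-Package MSTest.TestFramework
PM> Install-Package MSTest.TestAdapter
PM> Install-Package Microsoft.NET.Test.Sdk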

Packages for the MSTest framework i.e. MSTest.TestAdapter, MSTest.TestFramework and Microsoft.NET.Test.Sdk can also be installed by using the NuGet Package Manager. It is accessed by going to ‘Tools’ -> ‘NuGet Package Manager’ -> ‘Manage NuGet Packages for Solution’.
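With those packages installed, a data-driven MSTest V2 Selenium test might look like the following sketch (it also assumes Selenium.WebDriver and ChromeDriver; the URLs and expected titles are for illustration only):

    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Chrome;

    [TestClass]
    public class SearchPageTests
    {
        // [DataRow] feeds different arguments into the same test method.
        [DataTestMethod]
        [DataRow("https://www.google.com", "Google")]
        [DataRow("https://www.bing.com", "Bing")]
        public void PageTitle_Should_Contain_Expected_Text(string url, string expected)
        {
            IWebDriver driver = new ChromeDriver();
            try
            {
                driver.Navigate().GoToUrl(url);
                StringAssert.Contains(driver.Title, expected);
            }
            finally
            {
                driver.Quit();
            }
        }
    }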

What Makes MSTest a Popular Selenium C# Framework?

MSTest V2 is widely accepted by the test community which is into Selenium automation testing. Below are some of the salient features of MSTest V2 which makes it a popular framework for automated browser testing:

  • Open-source and community-driven focus are the key factors that work in favor of MSTest V2.
  • V2 version of MSTest framework comes with cross-platform support i.e. developers can write tests for the .NET framework, .NET Core, and ASP.NET Core on varied platforms like Windows, Mac, and Linux.
  • Data-driven testing is possible with the MSTest framework. With a data-driven approach, developers can come up with methods that can be executed several times using different input arguments.
  • Using in-assembly parallelism features, multiple tests can be executed in parallel thereby reducing the overall execution time.
  • The framework is extensible as developers can come up with custom attributes and custom assets using MSTest.

As the MSTest framework comes pre-bundled with Visual Studio, many developers prefer using it for Selenium automation testing. The V2 version of MSTest is much more developer-friendly, which is also one of the reasons for the growing popularity of this Selenium C# framework.

Areas Where MSTest Framework Can Do Better!

MSTest V2 is comparable with other popular Selenium C# frameworks like xUnit and NUnit. However, xUnit has the upper edge, as it has fewer annotations, which leads to less confusion.

MSTest does not provide adequate test isolation, as a new test class is not instantiated for each test case. These can be considered strong points of other test frameworks like xUnit.net rather than shortcomings of MSTest.

Is MSTest the Best Selenium C# Framework for You?

MSTest V2 comes with an interesting set of features like parallel test execution using Selenium Grid, extensibility, community focus, data-driven testing, and much more.

The MSTest framework can also be used without Visual Studio; all you need is a command-line tool named MSTest.exe. Depending on the complexity of the project and the skill set of the team members, you have to choose between the MSTest, xUnit, and NUnit frameworks for the project, as each of these Selenium C# frameworks has its advantages and disadvantages.

SpecFlow


SpecFlow is another popular C# automation framework that is used for BDD and ATDD development. It also makes use of the Gherkin language for the creation of features and scenarios. It is open-source and the source-code of SpecFlow is available here.

SpecFlow is a part of the Cucumber family and supports .NET, Xamarin, and Mono frameworks. The latest version of SpecFlow is SpecFlow 3 (i.e. 3.0.225). It can be used with popular Selenium C# frameworks like NUnit, xUnit, and MSTest.

How to Install the SpecFlow Framework

Before installing the packages for SpecFlow, the SpecFlow integration for VS 2019 has to be installed. For installing the plug-in, download the SpecFlowForVisualStudio plug-in from the Visual Studio Marketplace and install it by double-clicking on the same. You could also install the plug-in from the Visual Studio IDE by searching for SpecFlow for Visual Studio 2019 in the Online extensions.

[Image: Manage Extensions in Visual Studio]

SpecFlow and other dependent packages can now be installed by executing the following commands on the Package Manager (PM) console:
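One possible set of commands looks like this (a sketch; versions omitted, and the adapter packages may vary with your project setup):

PM> Install-Package SpecFlow
PM> Install-Package SpecFlow.NUnit
PM> Install-Package NUnit
PM> Install-Package NUnit3TestAdapter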

As shown above, we are making use of the NUnit framework for the development of test cases; hence, it is installed along with the SpecFlow framework.
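To give a flavor of the workflow, a scenario and its step bindings might look like the following sketch (the feature text, step names, and Selenium calls are made up for illustration):

    Feature: Search
      Scenario: Searching returns results
        Given I am on the home page
        When I search for "selenium"
        Then I should see search results

    using TechTalk.SpecFlow;

    [Binding]
    public class SearchSteps
    {
        [Given(@"I am on the home page")]
        public void GivenIAmOnTheHomePage()
        {
            // navigate with Selenium WebDriver here
        }

        [When(@"I search for ""(.*)""")]
        public void WhenISearchFor(string term)
        {
            // type the term and submit the search form
        }

        [Then(@"I should see search results")]
        public void ThenIShouldSeeSearchResults()
        {
            // assert on the results with NUnit here
        }
    }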

What Makes Specflow a Popular Selenium C# Framework?

SpecFlow is used not for traditional test case development but for the development of BDD and ATDD. Below are some of the primary reasons for using the SpecFlow framework:

  • SpecFlow uses the Outside-In approach for the development of acceptance tests which is designed based on business behavior rather than technical specifications.
  • As Gherkin is a ubiquitous language, effective test cases can be developed by technical as well as non-technical personnel.
  • Test cases developed using SpecFlow and Gherkin are more modular and maintainable, as changes (in Gherkin) are confined to the scenario file, with corresponding changes required only in the step implementation.
  • Parallel test execution can be performed using SpecFlow by combining the parallelism capability of the NUnit framework with SpecFlow’s dependency injection.

SpecFlow 3 is more widely used compared to other Selenium C# frameworks that aid ATDD and BDD, as it is open source and can be used with other popular test automation frameworks.

Areas Where SpecFlow Framework Can do Better!

Parallel test execution becomes extremely critical in scenarios related to cross-browser testing. The documentation on parallel execution with NUnit and SpecFlow is not very comprehensive and requires an update.

Is SpecFlow the best Selenium C# Framework for you?

Your project should consider SpecFlow if the intention is to develop BDD and ATDD tests. Doing so will also provide an opportunity for others (non-technical folks) in the team to work on test scenarios from an end-user’s perspective.

Bottom Line

Choosing an ideal Selenium C# test framework should be based on the team’s expertise and prior experience with the framework. 

Many test teams are making use of BDD and ATDD as it improves collaboration between the team members and also brings an extra angle on automation testing. In such scenarios, you should choose the best-suited test automation framework for performing the job.

All the Selenium C# test frameworks support parallel test execution, but you might encounter issues getting started with it. Hence, look at the community around the test framework before making a final choice☺.

Do you agree with our picks? Let us know, which is your favorite C# test automation framework. Pick your favorite and tell us the reason why, in the comment section below!

Happy Testing!




Unit Testing Xamarin Forms View Model


In this tutorial, we are going to see how to unit test a view model in a Xamarin Forms application.

View Model in a Nutshell

The view model is the centerpiece of the Model-View-ViewModel pattern. When you're not using MVVM, your presentation logic lives in the view (code-behind), making it harder to test because the logic is coupled to UI elements. When using MVVM, on the other hand, your presentation logic moves into the view model; by decoupling logic from the view, you gain the ability to test the logic without being bothered by UI elements.

Getting Started

To walk you through this tutorial, I created a Xamarin Forms app called WorkoutTube.

WorkoutTube allows users to

  • View a list of workout videos from Youtube.
  • Launch the Youtube app with the selected video; if the Youtube app is not installed, it launches the browser version instead.

The project can be downloaded here.

Project Overview

The project is already implemented; we will focus on the unit tests only.

In order to run the project, edit the AppConfiguration.cs file under the WorkoutTube project and update the YoutubeApiKey property with your own key. You can request a Youtube API key here.

Tools

To make testing easier, the unit test project WorkoutTube.UnitTests uses the following packages:

  • Autofac: IoC container.
  • AutoMock: allows you to automatically create mock dependencies.
  • AutoFixture: a library for .NET designed to minimize the ‘Arrange’ phase of your unit tests.
  • FluentAssertions: a fluent API for asserting the results of unit tests.

Making the View Model Testable

The HomePageViewModel holds dependencies on other services; instead of relying on the concrete implementations of those services, it relies on their abstractions.

Concrete implementations will be injected through an IoC container.

Having the ability to inject dependencies is great because when testing we can supply mock objects instead of real ones.
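To make the idea concrete, here is a minimal sketch of what that might look like. IVideoService and IDialogService are mentioned later in the article; the constructor shape, the Video type, and the collection type are assumptions made for the example.

using System.Collections.ObjectModel;

public class HomePageViewModel
{
    // Abstractions, not concrete services, are injected so they can be mocked in tests.
    private readonly IVideoService _videoService;
    private readonly IDialogService _dialogService;

    public HomePageViewModel(IVideoService videoService, IDialogService dialogService)
    {
        _videoService = videoService;
        _dialogService = dialogService;
        Videos = new ObservableCollection<Video>();
    }

    public ObservableCollection<Video> Videos { get; }
    public Video SelectedVideo { get; set; }

    // OpenVideoCommand and Initialize() are defined further down in the real view model.
}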

Dependency Injection

The WorkoutTube project uses Autofac as its IoC container; dependencies are registered in App.xaml.cs.

Writing Unit Tests

For our tests, the HomePageViewModel will be the subject under test, and we will focus on two types of testing:

  • State: checking how the view model’s property values are affected by certain actions.
  • Interactions: checking that the methods of the view model’s dependent services are called properly.

Testing States

The HomePageViewModel exposes properties and commands to which the HomePage view can bind.

Let’s write a test that checks that when the HomePageViewModel is created, the Videos property is empty.

We can also write a test to check that when the HomePageViewModel is created, the SelectedVideo property is null, meaning no video is selected yet.

How about checking that the OpenVideoCommand is initialized properly?
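Here is a rough sketch of these three checks, using AutoMock and FluentAssertions as listed above. The property names come from the article; the test framework (NUnit here) and everything else are illustrative assumptions.

using Autofac.Extras.Moq;
using FluentAssertions;
using NUnit.Framework;   // NUnit is assumed; swap attributes if the project uses xUnit.

public class HomePageViewModelTests
{
    [Test]
    public void WhenCreated_VideosShouldBeEmpty()
    {
        using var mock = AutoMock.GetLoose();            // mocks are created for all dependencies
        var viewModel = mock.Create<HomePageViewModel>();

        viewModel.Videos.Should().BeEmpty();
    }

    [Test]
    public void WhenCreated_SelectedVideoShouldBeNull()
    {
        using var mock = AutoMock.GetLoose();
        var viewModel = mock.Create<HomePageViewModel>();

        viewModel.SelectedVideo.Should().BeNull();
    }

    [Test]
    public void WhenCreated_OpenVideoCommandShouldBeInitialized()
    {
        using var mock = AutoMock.GetLoose();
        var viewModel = mock.Create<HomePageViewModel>();

        viewModel.OpenVideoCommand.Should().NotBeNull();
    }
}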

The HomePageViewModel has a method called Initialize, which is called after the view model is created; in this method, the VideoService fetches videos from the Youtube API.

Let’s write a test to check the following scenario: after a successful initialization, the Videos property should be updated with the data retrieved from the service.
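A sketch of that test might look like the following. The IVideoService method name (GetVideosAsync), the Video model, and the awaitable Initialize signature are assumptions made for the example.

// Requires additional usings beyond the previous sketch:
// using AutoFixture; using Moq; using System.Linq; using System.Threading.Tasks;

[Test]
public async Task Initialize_WhenServiceSucceeds_ShouldPopulateVideos()
{
    using var mock = AutoMock.GetLoose();
    var fixture = new Fixture();                         // AutoFixture builds the test data
    var videos = fixture.CreateMany<Video>(3).ToList();

    mock.Mock<IVideoService>()                           // mock instead of the real Youtube API
        .Setup(s => s.GetVideosAsync())
        .ReturnsAsync(videos);

    var viewModel = mock.Create<HomePageViewModel>();
    await viewModel.Initialize();

    viewModel.Videos.Should().BeEquivalentTo(videos);
}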

Note: notice that we are configuring IVideoService as a mock object; that way, we’re not communicating with the Youtube API. Unit testing is not about calling real APIs; that’s the job of integration testing.

When testing, it is important to cover not only the happy path but also the sad path (when things go wrong). In HomePageViewModel, the code that loads the videos is wrapped in a try/catch block; when we catch an exception, we display the message using IDialogService.

Let’s write a test for the sad path.

Testing Interactions

The HomePageViewModel exposes a command called OpenVideoCommand, when executed, it launches the Youtube app through a service called YoutubeServiceLauncher.

Let’s write a test that checks that the method OpenAsync of YoutubeServiceLauncher has been called.
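A sketch of that interaction test follows. The launcher abstraction (IYoutubeServiceLauncher), the OpenAsync signature, and the SelectedVideo setup are assumptions beyond the names mentioned in the article.

[Test]
public void OpenVideoCommand_WhenExecuted_ShouldCallOpenAsync()
{
    using var mock = AutoMock.GetLoose();
    var viewModel = mock.Create<HomePageViewModel>();
    viewModel.SelectedVideo = new Fixture().Create<Video>();    // illustrative: a video must be selected

    viewModel.OpenVideoCommand.Execute(null);                   // ICommand exposes Execute(object)

    mock.Mock<IYoutubeServiceLauncher>()
        .Verify(l => l.OpenAsync(It.IsAny<string>()), Times.Once); // the launcher should be invoked once
}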

 

We can also write the test for the sad path.

Conclusion

When using MVVM, it is important to unit test your view models; testing will increase the overall quality of your application.

I hope you enjoyed this tutorial. The source code is available for download here; feel free to use it.



Source link

Showing console output of the various hooks and objects that are logged by Formik.
Strategy

Using Formik to Handle Forms in React


There is no doubt that web forms play an integral role in our web site or applications. By default, they provide a useful set of elements and features — from legends and fieldsets to native validation and states — but they only get us so far when we start to consider the peculiarities of using them. For example, how can we manipulate the state of a form? How about different forms of validation? Even hooking a form up to post submissions is a daunting effort at times.

Component-driven front-end libraries, like React, can ease the task of wiring web forms but can also get verbose and redundant. That’s why I want to introduce you to Formik, a small library that solves the three most annoying parts of writing forms in React:

  1. State manipulation
  2. Form validation (and error messages)
  3. Form submission

We’re going to build a form together in this post. We’ll start with a React component then integrate Formik while demonstrating the way it handles state, validation, and submissions.

Creating a form as a React component

Components live and breathe through their state and props. What HTML form elements have in common with React components is that they naturally keep some internal state. Their values are also automatically stored in their value attribute.

Allowing form elements to manage their own state in React makes them uncontrolled components. That’s just a fancy way of saying the DOM handles the state instead of React. And while that works, it is often easier to use controlled components, where React handles the state and serves as the single source of truth rather than the DOM.

The markup for a straightforward HTML form might look something like this:

<form>
  <div className="formRow">
    <label htmlFor="email">Email address</label>
    <input type="email" name="email" className="email" />
  </div>
  <div className="formRow">
    <label htmlFor="password">Password</label>
    <input type="password" name="password" className="password" />
  </div>
  <button type="submit">Submit</button>
</form>

We can convert that into a controlled React component like so:

function HTMLForm() {
  const [email, setEmail] = React.useState("");
  const [password, setPassword] = React.useState("");


  return (
    <form>
      <div className="formRow">
        <label htmlFor="email">Email address</label>
        <input
          type="email"
          name="email"
          className="email"
          value={email}
          onChange={e => setEmail(e.target.value)}
        />
      </div>
      <div className="formRow">
        <label htmlFor="password">Password</label>
        <input
          type="password"
          name="password"
          className="password"
          value={password}
          onChange={e => setPassword(e.target.value)}
        />
      </div>
      <button type="submit">Submit</button>
    </form>
  );
}

This is a bit verbose but it comes with some benefits:

  1. We get a single source of truth for form values in the state.
  2. We can validate the form when and how we want.
  3. We get performance perks by loading what we need and when we need it.

OK, so why Formik again?

As it is with anything JavaScript, there’s already a bevy of form management libraries out there, like React Hook Form and Redux Form, that we can use. But there are several things that make Formik stand out from the pack:

  1. It’s declarative: Formik eliminates redundancy through abstraction and taking responsibility for state, validation and submissions.
  2. It offers an escape hatch: Abstraction is good, but forms are peculiar to certain patterns. Formik abstracts for you but also lets you take control should you need to.
  3. It co-locates form states: Formik keeps everything that has to do with your form within your form components.
  4. It’s adaptable: Formik doesn’t enforce any rules on you. You can use as little or as much of Formik as you need.
  5. Easy to use: Formik just works.

Sound good? Let’s implement Formik into our form component.

Going Formik

We will be building a basic login form to get our beaks wet with the fundamentals. We’ll be touching on three different ways to work with Formik:

  1. Using the useFormik hook
  2. Using Formik with React context
  3. Using withFormik as a higher-order component

I’ve created a demo with the packages we need, Formik and Yup.

Method 1: Using the useFormik hook

As it is right now, our form does nothing tangible. To start using Formik, we need to import the useFormik hook. When we use the hook, it returns all of the Formik functions and variables that help us manage the form. If we were to log the returned values to the console, we get this:

Showing console output of the various hooks and objects that are logged by Formik.

We’ll call useFormik and pass it initialValues to start. Then, an onSubmit handler fires when a form submission happens. Here’s how that looks:

// This is a React component
function BaseFormik() {
  const formik = useFormik({
    initialValues: {
      email: "",
      password: ""
    },
    onSubmit(values) {
      // This will run when the form is submitted
    }
  });
  
 // If you're curious, you can run this Effect
 //  useEffect(() => {
 //   console.log({formik});
 // }, [])


  return (
    // Your actual form
  )
}

Then we’ll bind Formik to our form elements:

// This is a React component
function BaseFormik() {
  const formik = useFormik({
    initialValues: {
      email: "",
      password: ""
    },
    onSubmit(values) {
      // This will run when the form is submitted
    }
  });
  
 // If you're curious, you can run this Effect
 //  useEffect(() => {
 //   console.log({formik});
 // }, [])


  return (
  // We bind "onSubmit" to "formik.handleSubmit"
  <form className="baseForm" onSubmit={formik.handleSubmit} noValidate>
    <input
      type="email"
      name="email"
      id="email"
      className="email formField"
      value={formik.values.email} // We also bind our email value
      onChange={formik.handleChange} // And, we bind our "onChange" event.
    />
  </form>
  )
}

This is how the binding works:

  1. It handles form submission with onSubmit={formik.handleSubmit}.
  2. It handles the state of inputs with value={formik.values.email} and onChange={formik.handleChange}.

If you take a closer look, we didn’t have to set up our state, nor handle the onChange or onSubmit events as we’d typically do with React. The complete change to our form goes:
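As a sketch, the fully bound form reads like the earlier snippet with the password field wired up the same way (labels omitted for brevity):

return (
  <form className="baseForm" onSubmit={formik.handleSubmit} noValidate>
    <input
      type="email"
      name="email"
      id="email"
      className="email formField"
      value={formik.values.email}
      onChange={formik.handleChange}
    />
    <input
      type="password"
      name="password"
      id="password"
      className="password formField"
      value={formik.values.password}
      onChange={formik.handleChange}
    />
    <button type="submit">Submit</button>
  </form>
);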

However, as you might have noticed, our form contains some redundancy. We had to drill into formik and manually bind each form input’s value and onChange event. That means we can destructure the returned value and immediately bind the necessary props to a dependent field, like this:

// This is a React component
function BaseFormik() {
  const {getFieldProps, handleSubmit} = useFormik({
    initialValues: {
      email: "",
      password: ""
    },
    onSubmit(values) {
      // This will run when the form is submitted
    }
  });
  
 // If you're curious, you can run this Effect
 //  useEffect(() => {
 //   console.log({formik});
 // }, [])


  return (
  <form className="baseForm" onSubmit={handleSubmit} noValidate>
    <input
      type="email"
      id="email"
      className="email formField"
      {...getFieldProps("email")} // We pass the name of the dependent field
    />
  </form>
  )
}

Let’s take things even further with the included <Formik/>  component.

Method 2: Using Formik with React context

The <Formik/> component exposes various other components that add more abstraction and sensible defaults. For example, components like <Form/>, <Field/>, and <ErrorMessage/> are ready to go right out of the box.

Keep in mind, you don’t have to use these components when working with <Formik/> but they do require <Formik/> (or withFormik) when using them.

Using <Formik/> requires an overhaul because it uses the render props pattern as opposed to hooks with useFormik. The render props pattern isn’t something new in React. It is a pattern that enables code re-usability between components — something hooks solve better. Nevertheless, <Formik/> has a bagful of custom components that make working with forms much easier.

import { Formik } from "formik";


function FormikRenderProps() {
  const initialValues = {
    email: "",
    password: ""
  };
  function onSubmit(values) {
    // Do stuff here...
    alert(JSON.stringify(values, null, 2));
  }
  return (
      <Formik {...{ initialValues, onSubmit }}>
        {({ getFieldProps, handleSubmit }) => (
            <form className="baseForm" onSubmit={handleSubmit} noValidate>
              <input
                type="email"
                id="email"
                className="email formField"
                {...getFieldProps("email")}
              />
            </form>
        )}
      </Formik>
  );
}

Notice that initialValues and onSubmit have been completely detached from useFormik. This means we are able to pass the props that <Formik/> needs, specifically initialValues and onSubmit.

<Formik/> returns a value that’s been de-structured into getFieldProps and handleSubmit. Everything else basically remains the same as the first method using useFormik.

Here’s a refresher on React render props if you’re feeling a little rusty.

We haven’t actually put any <Formik/> components to use just yet. I’ve done this intentionally to demonstrate Formik’s adaptability. We certainly do want to use those components for our form fields, so let’s rewrite the component so it uses the <Form/> component.

import { Formik, Field, Form } from "formik";


function FormikRenderProps() {
  const initialValues = {
    email: "",
    password: ""
  };
  function onSubmit(values) {
    // Do stuff here...
    alert(JSON.stringify(values, null, 2));
  }
  return (
      <Formik {...{ initialValues, onSubmit }}>
        {() => (
            <Form className="baseForm" noValidate>
              <Field
                type="email"
                id="email"
                className="email formField"
                name="email"
              />
            </Form>
        )}
      </Formik>
  );
}

We replaced <form/> with <Form/> and removed the onSubmit handler since Formik handles that for us. Remember, it takes on all the responsibilities for handling forms.

We also replaced <input/> with <Field/> and removed the bindings. Again, Formik handles that.

There’s also no need to bother with the returned value from <Formik/> anymore. You guessed it, Formik handles that as well.

Formik handles everything for us. We can now focus more on the business logic of our forms rather than things that can essentially be abstracted.

We’re pretty much set to go, and guess what? We haven’t been concerned with state management or form submission!

“What about validation?” you may ask. We haven’t touched on that because it’s a whole new level on its own. Let’s touch on that before jumping to the last method.

Form validation with Formik

If you’ve ever worked with forms (and I bet you have), then you’re aware that validation isn’t something to neglect.

We want to take control of when and how to validate so new opportunities open up to create better user experiences. Gmail, for example, will not let you input a password unless the email address input is validated and authenticated. We could also do something where we validate on the spot and display messaging without additional interactions or page refreshes.

Here are three ways that Formik is able to handle validation:

  1. At the form level
  2. At the field level
  3. With manual triggers

Validation at the form level means validating the form as a whole. Since we have immediate access to form values, we can validate the entire form at once in either of two ways: with a validate function or with a validationSchema.

Both validate and validationSchema are functions that return an errors object with key/value pairings that mirror those of initialValues. We can pass either of them to useFormik, <Formik/>, or withFormik.

While validate is used for custom validations, validationSchema is used with a third-party library like Yup. 

Here’s an example using validate:

// Pass the `onSubmit` function that gets called when the form is submitted.
const formik = useFormik({
  initialValues: {
    email: "",
    password: ""
  },
  // We've added a validate function
  validate() {
    const errors = {};
    // Add the touched to avoid the validator validating all fields at once
    if (formik.touched.email && !formik.values.email) {
      errors.email = "Required";
    } else if (
      !/^[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}$/i.test(formik.values.email)
    ) {
      errors.email = "Invalid email address";
    }
    if (formik.touched.password && !formik.values.password) {
      errors.password = "Required";
    } else if (formik.values.password.length <= 8) {
      errors.password = "Must be more than 8 characters";
    }
    return errors;
  },
  onSubmit(values) {
    // Do stuff here...
  }
});
// ...

And here we go with an example using validationSchema instead:

const formik = useFormik({
  initialValues: {
    email: "",
    password: ""
  },
  // We used Yup here.
  validationSchema: Yup.object().shape({
    email: Yup.string()
      .email("Invalid email address")
      .required("Required"),
    password: Yup.string()
      .min(8, "Must be more than 8 characters")
      .required("Required")
  }),
  onSubmit(values) {
    // Do stuff here...
  }
});

Validating at the field level or using manual triggers is fairly simple to understand, although you’ll likely use form-level validation most of the time. It’s also worth checking out the docs to see other use cases.

Method 3: Using withFormik as a higher-order component

withFormik is a higher-order component and can be used that way if that’s your thing. Write the form, then expose it through Formik.
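Here is a minimal sketch of that approach, reusing the same initial values and submit handler as before; the component names (InnerLoginForm, LoginForm) are made up for the example.

import { withFormik } from "formik";

// The plain form component receives Formik's props (values, handlers) from the HOC.
function InnerLoginForm({ values, handleChange, handleSubmit }) {
  return (
    <form className="baseForm" onSubmit={handleSubmit} noValidate>
      <input
        type="email"
        name="email"
        id="email"
        className="email formField"
        value={values.email}
        onChange={handleChange}
      />
    </form>
  );
}

// withFormik wraps the form and takes over state, validation, and submission.
const LoginForm = withFormik({
  mapPropsToValues: () => ({ email: "", password: "" }),
  handleSubmit(values) {
    // Do stuff here...
    alert(JSON.stringify(values, null, 2));
  }
})(InnerLoginForm);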

A couple of practical examples

So far, we’ve become acquainted with Formik, covered the benefits of using it for creating forms in React, and covered a few methods to implement it as a React component while demonstrating various ways we can use it for validation. What we haven’t done is looked at examples of those key concepts.

So, let’s look at a couple of practical applications: displaying error messages and generating a username based on what’s entered in the email input.

Displaying error messages

We’ve built our form and validated it. And we’ve caught some errors that can be found in our errors object. But it’s no use if we aren’t actually displaying those errors.

Formik makes this a pretty trivial task. All we need to do is check the errors object returned by any of the methods we’ve looked at — <Formik/>, useFormik or withFormik — and display them:

<label className="formFieldLabel" htmlFor="email">
  Email address
  <span className="errorMessage">
    {touched["email"] && errors["email"]}
  </span>
</label>
<div className="formFieldWrapInner">
  <input
    type="email"
    id="email"
    className="email formField"
    {...getFieldProps("email")}
  />
</div>

If there’s an error during validation, {touched["email"] && errors["email"]} will display it to the user.

We could do the same with <ErrorMessage/>. With this, we only need to tell it the name of the dependent field to watch:

<ErrorMessage name="email">
  {errMsg => <span className="errorMessage">{errMsg}</span>}
</ErrorMessage>

Generating a username from an email address

Imagine a form that automatically generates a username for your users based on their email address. In other words, whatever the user types into the email input gets pulled out, stripped of @ and everything after it, leaving us with a username built from what’s left.

For example: an email address like jane@example.com produces @jane.

Formik exposes helpers that can “intercept” its functionality and let us perform some effects. In the case of auto-generating a username, one way is through Formik’s setValues:

onSubmit(values) {
  // We added a `username` value for the user which is everything before @ in their email address.
  setValues({
    ...values,
    username: `@${values.email.split("@")[0]}`
  });
}

Type in an email address and password, then submit the form to see your new username!

Wrapping up

Wow, we covered a lot of ground in a short amount of space. While this is merely the tip of the iceberg as far as covering all the needs of a form and what Formik is capable of doing, I hope this gives you a new tool to reach for the next time you find yourself tackling forms in a React application.

If you’re ready to take Formik to the next level, I’d suggest looking through their resources as a starting point. There are so many goodies in there and it’s a good archive of what Formik can do as well as more tutorials that get into deeper use cases.

Good luck with your forms!



Source link

String Interpolation result
Strategy

Data Binding in Angular – DZone Web Dev


In this post, we’ll focus on four important types of data binding in Angular applications. This will be a quick and easy demonstration of those types.

Before we get started, it will be helpful to talk about what data binding is and what its types are.

Data binding is a way to communicate between the application UI and the data that comes from a component (the part you coded, your business logic). In Angular, binding keeps the data and the view in sync as either one changes.

Types of data binding include: String Interpolation, Event binding, Property binding, and Two-Way Binding.

String Interpolation is the easiest way to output a string to your view. We use the {{ }} double curly brace syntax for this binding type, and we can write TypeScript expressions between those curly braces. We can type anything, as long as the expression’s output is a String. It can be a String variable or a method with a return type of String. Other types, like Integer, can also be represented with String Interpolation; they are simply interpreted as Strings in the end.

Unfortunately, we are not able to write multiline expressions between those double curly braces (e.g. if-else conditions). This can be considered a disadvantage of interpolation.

We define our variables userName, additionalMsg, selectedCondition, and selectedTempLevel in our component.ts file, as shown below, and we can track their changes and read their values through binding in our HTML template.
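For illustration, the declarations in app.component.ts might read like this (the initial values here are placeholders, not taken from the article):

export class AppComponent {
  userName = 'user';
  additionalMsg = '';
  selectedCondition = '';
  selectedTempLevel = 'Low';
}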

String Interpolation result

The second one is property binding. With property binding, we can bind native HTML attributes to a property, and Angular lets us change them dynamically. We send data from our component to the view. It is an example of one-way data binding, and we use square brackets “[]” in our code to express it.

Property binding example

Parentheses signal event binding. That type is also one-way binding, but in contrast to property binding, we send data from the view to the component (e.g. by clicking a button). We can bind to essentially all properties and events, and we often call methods in response to event bindings.

additionalMsg only changes its value in response to the button’s click event.

Two-way binding is a little bit different, but it can be defined as the combination of two other binding types, event and property binding. This type of binding allows for the continuous synchronization of data.

Let’s see how this works in a basic application.

Before we start to code, for a nice presentation of our pages, we can install Bootstrap in our project with the command below. After installation, we can see the Bootstrap library under node_modules.
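A typical install command looks like this (assuming npm is the package manager):

npm install bootstrap --save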

We need one more step to enable Bootstrap in our project: we have to add the Bootstrap CSS file to the angular.json file, as shown below. Then we can use it.
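The entry goes into the styles array under the project’s build options in angular.json, roughly like this (the path may differ depending on the Bootstrap version installed):

"styles": [
  "node_modules/bootstrap/dist/css/bootstrap.min.css",
  "src/styles.css"
]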

As mentioned, String Interpolation is used to output our results as a string in our paragraph element.

The button is an HTML element with attributes such as disabled. We used property binding to bind this attribute to a property named allowClick, which an input element’s change event also updates.

Event binding can be observed in our input element’s change event and also in our button’s click event. When we click our button, a method is triggered. Also, when we start to type something in our input element, it decides whether to enable or disable our button.

We can see two-way binding on our comboBox element. It is able to get a selected value and output it in our paragraph. If we initialize a value in our component, we are able to see the dynamic change of two-way binding. 

app.component.html

app.component.ts
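As a rough, combined sketch of what app.component.ts and its template might look like (the template is inlined for brevity; the markup, method names, and initial values are illustrative assumptions, and ngModel requires importing FormsModule in the app module):

import { Component } from '@angular/core';

@Component({
  selector: 'app-root',
  template: `
    <!-- String Interpolation: outputs component values as strings -->
    <p>{{ userName }}</p>
    <p>{{ additionalMsg }}</p>

    <!-- Event binding: the input's change event decides whether the button is enabled -->
    <input type="text" class="form-control" (change)="onInputChange($event)" />

    <!-- Property binding: the native disabled attribute is bound to allowClick -->
    <button class="btn btn-primary" [disabled]="!allowClick" (click)="onButtonClick()">
      Click me
    </button>

    <!-- Two-way binding (requires FormsModule for ngModel) -->
    <select class="form-control" [(ngModel)]="selectedTempLevel">
      <option *ngFor="let level of tempLevels" [value]="level">{{ level }}</option>
    </select>
    <p>Selected temperature level: {{ selectedTempLevel }}</p>
  `
})
export class AppComponent {
  userName = 'user';
  additionalMsg = '';
  allowClick = false;
  selectedTempLevel = 'Low';
  tempLevels = ['Low', 'Medium', 'High'];

  onInputChange(event: Event) {
    // Enable the button only when the input contains some text.
    this.allowClick = (event.target as HTMLInputElement).value.length > 0;
  }

  onButtonClick() {
    // additionalMsg only changes in response to the button's click event.
    this.additionalMsg = 'Button was clicked!';
  }
}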

Two-way data binding example



Source link

Post image
Strategy

Website Design & audio tools implementation advice needed pl…


Hello everyone! Thank you in advance for taking the time to read this and any help that you can offer. I’m a musician and have been trying to figure out how to make a website along these lines: Tabletopaudio.com

I understand the front page’s audio gallery portion and the playlist player embedded at the top of the main page. On the top right they have a soundpad tool. Their soundpad tool perplexes me. I do not understand how to implement anything similar. They have preset soundpads. What I like most is the ability to make your own custom soundpads from the sounds of their preset soundpads. If you scroll to the bottom of the soundpad drop-down menu, you can save custom batches called soundpads, of up to 40 sounds, on one page. Each can be triggered or looped and has a volume slider. There is also a drop-down selection to choose from multiple time intervals of recurrence while looping. You cannot load your own sounds and must choose from the choices they give you, but that’s the way I’d like to implement it as well. You can also select a bunch of sounds, set the volume and intervals of recurrence, and then save that as a “scene.” You name the “scene” and it creates a bar at the top of your soundpad with a button that instantly loads that “scene.” This is awesome and key. It lets you create atmospheres from pieces and then save the settings for later recall, making it easy to use and versatile. You can create as many scenes as can fit on the top bar of the page and still have access to the pieces/sounds/soundpad on the same page. You can create as many soundpads as you like and adjust them instantly, on the fly. You can also open multiple browser windows with as many soundpads and scenes as you want running concurrently. It is in beta, and the biggest drawbacks I see so far are that the GUI is not good, there’s a 40-sound limitation, and I think it saves the settings locally in your browser somewhere, because if you clear your browser data or use another computer, you have to start all over again and lose all your saved scenes, soundpads, etc. It’s also a beta, so it’s finicky when it wants to be.

My experience with web design is from many years ago, so I’m pretty rusty, but I do know and understand basic HTML. I have generally been using WordPress for my websites. As I said, it’s been a long time since I’ve studied new web development, and so much has changed! I used to make sites sometimes with Dreamweaver but mostly straight HTML many, many moons ago. It’s been a long time. My interest has sparked again, and I thought I’d combine some of my favorite interests like audio production, web dev, and role-playing games. I would love to do something like this: Syrinscape.com, but I know that would require expertise I’m not capable of and probably a team of people. The first site seems much more within my reach. I would like to be able to make a grid like the colored pads on this hardware music production tool:

Post image

8X8 Grid of Buttons

I’d like the GUI to be an 8x8 grid on the website, though. These “buttons” should trigger or loop, based on how you choose to set them, like Tabletop Audio. A key function I’d love is for users to be able to create an account on the site, save their “soundpads,” “scenes,” and settings, and be able to recall them anytime they log in! I know this is asking a lot, but I can’t find any direction as to how to do any of this. If I have to switch to something else like Joomla, learn some specific type of coding to understand how to do this, or however… I will. The problem is I’m stumped and can’t find any info. I’d appreciate being pointed in the right direction, any help at all. Ideally it’d be a WordPress plugin, but I looked everywhere and can’t find anything even remotely close to it. I understand I’ll probably have to custom-make it and am ready for the challenge. I need direction so I don’t waste a lot of time spinning my wheels learning unnecessary things to do specifically this. Thank you again in advance for taking the time to read this. If you can help, thank you all that much more.



Source link

Is this a card or is it something else?
Strategy

Is this a card or is it something else?



Quick question: is this what's called a card, or is there anything else that's better suited?
This is an old design, and I'm thinking of making some improvements because it's pretty boring at the moment.

https://preview.redd.it/tjef561vfev41.png?width=2336&format=png&auto=webp&s=38cc55706d380e772c66aba0aa5acd4fe92363bb

submitted by /u/kaizokupuffball



Source link