
Poem Generator Web Application With Keras, React, and Flask


Introduction

Natural Language Processing (NLP) is an exciting branch of machine learning and artificial intelligence, with applications in speech recognition, language translation, human-computer interaction, sentiment analysis, and more. One of its interesting areas is text generation, and of particular interest to me is poem generation.

In this article, I describe a poem generator web application that I built using Deep Learning with Keras, Flask, and React. The core algorithm comes from a TensorFlow example notebook. The data it needs is an existing set of poems. For my application, the data are in three text files containing:

  1. Poems of Erica Jong.
  2. Poems of Lavanya Nukavarapu.
  3. Poems of Erica Jong and Lavanya Nukavarapu together.

Keras

The TensorFlow example notebook has the model-building and training code, as well as the prediction code. I took the model-building and training code into my own notebooks and executed it on Google Colab to generate a model for each of the three datasets.

The neural network code is as follows:
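A minimal sketch consistent with the description that follows; total_words (the tokenizer's vocabulary size), max_sequence_len (the length of the longest n-gram sequence), the 100-dimension embedding, and the xs/ys training arrays are assumptions based on the TensorFlow example:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Bidirectional, LSTM, Dense

model = Sequential()
# total_words: vocabulary size from the tokenizer
# max_sequence_len: length of the longest n-gram sequence
model.add(Embedding(total_words, 100, input_length=max_sequence_len - 1))
model.add(Bidirectional(LSTM(150)))
model.add(Dense(total_words, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])
# xs: padded n-gram sequences; ys: one-hot encoded next-word labels
history = model.fit(xs, ys, epochs=150, verbose=1)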

The network starts with a Sequential model, which is used when, as in our case, each layer has exactly one input tensor and one output tensor. The first layer is the Embedding layer, which turns ‘positive integers (indexes) into dense vectors of fixed size.’ The second layer is a Bidirectional wrapper around an LSTM layer with 150 units; LSTMs are a core component of Recurrent Neural Networks. Next, we have a Dense layer as the output layer, which applies the softmax activation function to the propagating data. The model is compiled using the categorical_crossentropy function to compute the loss between labels and predictions, and the ‘adam’ optimizer. Finally, it is trained for 150 epochs by calling the fit method.

To this base code, I added two callbacks (sketched in code after the list):

  1. ModelCheckpoint for saving the model only if its accuracy in the current epoch is higher than that in any previous epoch. So, by the end of training, we have the model with the highest accuracy.
  2. ReduceLROnPlateau for monitoring the loss and reducing the learning rate by a factor of 0.2 if learning stagnates, that is, if no improvement is seen for one epoch.
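A sketch of the two callbacks wired into training; the checkpoint file name and the minimum learning rate are assumptions:

from tensorflow.keras.callbacks import ModelCheckpoint, ReduceLROnPlateau

# Save the model only when accuracy improves on the best epoch so far
checkpoint = ModelCheckpoint('model.h5', monitor='accuracy',
                             save_best_only=True, mode='max', verbose=1)
# Multiply the learning rate by 0.2 when loss has not improved for 1 epoch
reduce_lr = ReduceLROnPlateau(monitor='loss', factor=0.2, patience=1,
                              min_lr=0.00001, verbose=1)

history = model.fit(xs, ys, epochs=150,
                    callbacks=[checkpoint, reduce_lr])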

Python

The prediction part of the TensorFlow example runs at request time as Flask code in my application. I encapsulated it in a class called PoemGenerator. This class has the following key methods:

__init__

The constructor takes as arguments a string seed_text, a list of strings called data, which is nothing but the cleaned poem corpus, and a model. These argument values are copied into instance variables of the same name. The instance variable max_sequence_len is set to the maximum length of the n-gram sequences that are generated from each line after converting the text to sequences of numbers and left-padding them with zeros.
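A minimal sketch of the constructor as described; the tokenizer setup and the n-gram bookkeeping are assumptions modeled on the TensorFlow example:

from tensorflow.keras.preprocessing.text import Tokenizer

class PoemGenerator:
    def __init__(self, seed_text, data, model):
        self.seed_text = seed_text
        self.data = data      # cleaned poem corpus as a list of lines
        self.model = model
        self.tokenizer = Tokenizer()
        self.tokenizer.fit_on_texts(data)
        # Build n-gram sequences from every line and record the longest one
        input_sequences = []
        for line in data:
            token_list = self.tokenizer.texts_to_sequences([line])[0]
            for i in range(1, len(token_list)):
                input_sequences.append(token_list[:i + 1])
        self.max_sequence_len = max(len(seq) for seq in input_sequences)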

generate_poem

This method has the main functionality of poem generation. The seed text is converted to a numeric sequence, left-padded with zeros, and passed to the model to predict the next word. If the predicted word and its index are present in the tokenizer, which is an instance variable, the word is accepted and appended to the seed text. The seed text with the appended word then becomes the new seed text. It is passed to the model to predict the next word, and the process continues 100 times, resulting in a string output.
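Continuing the class above, a sketch of the generation loop; the 100-word count comes from the description, and the rest follows the TensorFlow example:

import numpy as np
from tensorflow.keras.preprocessing.sequence import pad_sequences

class PoemGenerator:
    # ... __init__ as sketched above ...

    def generate_poem(self, next_words=100):
        seed_text = self.seed_text
        for _ in range(next_words):
            token_list = self.tokenizer.texts_to_sequences([seed_text])[0]
            token_list = pad_sequences([token_list],
                                       maxlen=self.max_sequence_len - 1,
                                       padding='pre')
            predicted = np.argmax(self.model.predict(token_list), axis=-1)[0]
            # Accept the word only if the index maps back to a known word
            for word, index in self.tokenizer.word_index.items():
                if index == predicted:
                    seed_text += ' ' + word
                    break
        return seed_text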

strToPoem

This method takes the generated string from the previous method and gives it the shape of a poem. It first removes unnecessary tokens, such as a word consisting of just a backquote or a backslash. Then it removes adjacent duplicate words. In the third step, it takes a random number between 5 and 8, slices that many words out of the string, and stores them as the first string element in a list. Effectively, this is the first line of the generated poem. This process of slicing random lengths (between 5 and 8) of words from the string is repeated until all the words in the generated string are consumed. The poem is now transformed from a string into a list of strings.

Next, there are two clean-up steps:

  1. If the last line has fewer than 5 words, it is dropped. This step is repeated until the last line has 5 words or more.
  2. If the last word of the last line has fewer than 4 characters, that word is dropped.

Finally, the poem is returned as a list of strings.

The code of the strToPoem method is given below:
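What follows is a minimal sketch of the steps above rather than the original listing; the junk-token list is an assumption:

import random

class PoemGenerator:
    # ... continued from the sketches above ...

    def strToPoem(self, generated):
        # 1. Drop junk tokens such as a lone backquote or backslash
        words = [w for w in generated.split() if w not in ('`', '\\')]
        # 2. Remove adjacent duplicate words
        words = [w for i, w in enumerate(words)
                 if i == 0 or w != words[i - 1]]
        # 3. Slice random 5-8 word chunks off the string to form lines
        lines = []
        while words:
            n = random.randint(5, 8)
            lines.append(' '.join(words[:n]))
            words = words[n:]
        # 4. Drop trailing lines that have fewer than 5 words
        while lines and len(lines[-1].split()) < 5:
            lines.pop()
        # 5. Drop the last word of the last line if it is under 4 characters
        if lines:
            last = lines[-1].split()
            if len(last[-1]) < 4:
                lines[-1] = ' '.join(last[:-1])
        return lines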

React

In the UI, the user has to:

  • Enter a set of words in a text field as seed text;
  • Select a poet, and
  • Click a button (‘Generate Poem’).

[Screenshot: the MH Poem Generator UI]

I encapsulated the text field, select drop-down, and button as one React component called PoemComponent. The code is in the file Poem.js and is loaded as a Babel script; Babel compiles the JSX into browser-compatible JavaScript.

Flask serves public assets from the directory static, so Poem.js is placed in that folder. Since this is a simple screen, I did not use utilities like create-react-app or npm or Node runtime.

PoemComponent’s key functions and functionalities are given below.

The constructor sets the state with two variables: poem_header and poem, both arrays. The render function has:

  1. An h5 label.
  2. An input text field with ID ‘seed_text’ and a placeholder text ‘Enter seed text: 3 to 5 words.’
  3. A select element with ID ‘poet‘, the first option as ‘-- Please choose a poet --‘, and the names ‘Erica Jong,‘ ‘Lavanya Nukavarapu,’ and ‘Erica+Lavanya‘ as the subsequent options.
  4. A button with the text ‘Generate Poem.’

The button’s onClick event is bound to the component and invokes the function getPoem.

getPoem

This function collects the seed_text and the poet’s name by calling document.getElementById and uses them to build a URL for the ‘/getpoem‘ endpoint on the Flask application. It invokes fetch with this URL. After the response is received, the function updates the state by setting the values of poem_header and poem. This causes the poem_header and poem values to be rendered in the divs with the IDs ‘generated_poem_header‘ and ‘generated_poem.’

Finally, the last two lines in Poem.js render PoemComponent at the ‘poem_container‘ div in index.html.

Given below are important snippets of PoemComponent code:
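A condensed sketch of the component as described; the option values, the query-string shape, and the way the poem lines are rendered are assumptions:

class PoemComponent extends React.Component {
  constructor(props) {
    super(props);
    this.state = { poem_header: [], poem: [] };
    this.getPoem = this.getPoem.bind(this);
  }

  getPoem() {
    const seedText = document.getElementById('seed_text').value;
    const poet = document.getElementById('poet').value;
    // Hypothetical query shape; the real endpoint may encode things differently
    fetch(`/getpoem?seed_text=${encodeURIComponent(seedText)}&poet=${poet}`)
      .then((response) => response.json())
      .then((data) =>
        this.setState({ poem_header: data.poem_header, poem: data.poem })
      );
  }

  render() {
    return (
      <div>
        <h5>MH Poem Generator</h5>
        <input id="seed_text" type="text"
               placeholder="Enter seed text: 3 to 5 words" />
        <select id="poet">
          <option value="">-- Please choose a poet --</option>
          <option value="erica">Erica Jong</option>
          <option value="lavanya">Lavanya Nukavarapu</option>
          <option value="both">Erica+Lavanya</option>
        </select>
        <button onClick={this.getPoem}>Generate Poem</button>
        <div id="generated_poem_header">{this.state.poem_header}</div>
        <div id="generated_poem">
          {this.state.poem.map((line, i) => <div key={i}>{line}</div>)}
        </div>
      </div>
    );
  }
}

ReactDOM.render(<PoemComponent />, document.getElementById('poem_container'));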

Flask

app.py

This file has the entire backend runtime code. The root endpoint (‘/‘) is the index method, which simply serves index.html from the template folder. At startup, the three text files containing the poetry datasets are read into lists, and all words are converted to lowercase. The appropriate data list is one of the arguments passed to the constructor of PoemGenerator.
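A sketch of the startup code and the root endpoint; the corpus and model file names (and the PoemGenerator import) are assumptions:

from flask import Flask, render_template, request, jsonify
from tensorflow.keras.models import load_model

app = Flask(__name__)

# Assumed file names: one corpus and model per poet, plus the combined set
POETS = {
    'erica': ('erica.txt', 'erica.h5'),
    'lavanya': ('lavanya.txt', 'lavanya.h5'),
    'both': ('erica_lavanya.txt', 'erica_lavanya.h5'),
}

data, models = {}, {}
for poet, (text_file, model_file) in POETS.items():
    with open(text_file) as f:
        data[poet] = [line.lower() for line in f.read().splitlines() if line]
    models[poet] = load_model(model_file)

@app.route('/')
def index():
    return render_template('index.html')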

generatePoem

This function is invoked at the endpoint ‘/getpoem.’ From the GET request parameters, it grabs the user-entered seed_text and the poet’s name. It uses the seed_text and the matching data list and model (based on the poet’s name) to instantiate a PoemGenerator object. On this object, it calls the generate_poem method to generate the poem, which is stored in the list poem. It also calls the makeHeader method to create the metadata of the poem, which is stored in the list poem_header. Both lists are returned as JSON to the client browser.
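A sketch of the endpoint, building on the startup code above; makeHeader's signature is an assumption:

@app.route('/getpoem')
def generatePoem():
    seed_text = request.args.get('seed_text', '')
    poet = request.args.get('poet', 'both')

    generator = PoemGenerator(seed_text, data[poet], models[poet])
    # generate_poem yields a flat string; strToPoem shapes it into lines
    poem = generator.strToPoem(generator.generate_poem())
    poem_header = generator.makeHeader()  # poem metadata, e.g., title and poet

    return jsonify(poem_header=poem_header, poem=poem)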

Repository and Deployment

The code of this application is available in my GitHub repository mh-poem-generator.

I deployed the application on a cloud Ubuntu 18.04 server. Since TensorFlow 2.2.0 is required, I installed conda and used its version of gunicorn to run the app as a systemd service. The application is colocated with other Flask and Ruby on Rails applications and served via Nginx.

The systemd configuration is given below:

/etc/systemd/system/pg.service
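The original unit file is specific to the server; a sketch under assumed paths, user, and port looks like this:

[Unit]
Description=MH Poem Generator (gunicorn under conda)
After=network.target

[Service]
User=ubuntu
WorkingDirectory=/home/ubuntu/mh-poem-generator
ExecStart=/home/ubuntu/miniconda3/envs/pg/bin/gunicorn --workers 2 --bind 127.0.0.1:8000 app:app
Restart=always

[Install]
WantedBy=multi-user.target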

The Nginx configuration is as follows: 

/etc/nginx/sites-enabled/mh_sites
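Only the shape of the relevant block can be sketched here; the upstream port must match the gunicorn bind address assumed above:

location /pg {
    proxy_pass http://127.0.0.1:8000;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}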

You can access the application at https://mahboob.xyz/pg.

Conclusion

As of now, the generated poems have the shape of poems but don’t make much sense as actual poems. Sometimes a few lines come out well, with good figurative expressions, but that’s all. To improve the poem quality, I will have to add more layers to the neural network, fine-tune the parameters, and enrich the poem lines into better sentences, the way MontyLingua does.





Getting Started With Kafka and Rust (Part 1)


This is a two-part series to help you get started with Rust and Kafka. We will be using the rust-rdkafka crate, which itself is based on librdkafka (a C library).

In this post, we will cover the Kafka Producer API.

Initial Setup

Make sure you install a Kafka broker — a local setup should suffice. Of course, you will need to have Rust installed as well — version 1.45 or above.

Before you begin, clone the GitHub repo:

Check the Cargo.toml file:

...
[dependencies]
rdkafka = { version = "0.25", features = ["cmake-build"] }
...

Note on the cmake-build feature

rust-rdkafka provides a couple of ways to resolve the librdkafka dependency. I chose static linking, wherein librdkafka is compiled as part of the build. You could instead opt for dynamic linking against a locally installed version.

For more, please refer to this link.

Ok, let’s start off with the basics.

Simple Producer

Here is a simple producer based on BaseProducer:
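The full listing lives in the repo; a minimal sketch of it (broker address and key/value formats are assumptions, the topic name rust is used later in the post):

use rdkafka::config::ClientConfig;
use rdkafka::producer::{BaseProducer, BaseRecord};
use std::thread;
use std::time::Duration;

fn main() {
    let producer: BaseProducer = ClientConfig::new()
        .set("bootstrap.servers", "localhost:9092")
        .create()
        .expect("invalid producer config");

    for i in 1..100 {
        println!("sending message");
        producer
            .send(
                BaseRecord::to("rust")
                    .key(&format!("key-{}", i))
                    .payload(&format!("value-{}", i)),
            )
            .map_err(|(err, _record)| err)
            .expect("failed to enqueue message");
        // Slow the tight loop down so the results are easy to follow
        thread::sleep(Duration::from_secs(3));
    }
}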

The send method starts producing messages. It’s called in a tight loop with a thread::sleep in between (not something you would do in production) to make the results easier to follow. The key, the value (payload), and the destination Kafka topic are represented in the form of a BaseRecord.

You can check the entire code in the file src/1_producer_simple.rs.

To Test if the Producer Is Working …

Run the program:

  • simply rename the file src/1_producer_simple.rs to main.rs
  • execute cargo run

You should see this output:

sending message
sending message
sending message
...

What’s going on? To figure it out — connect to your Kafka topic (I have used rust as the name of the Kafka topic in the above example) using the Kafka CLI consumer (or any other consumer client, e.g., kafkacat). You should see the messages flowing in.

For example:
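Using the console consumer that ships with Kafka, and assuming a local broker, that could be:

kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic rust --from-beginning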

Producer Callback

We are flying blind right now! Unless we explicitly create a consumer to look at our messages, we have no clue whether they are being sent to Kafka. Let’s fix that by implementing a ProducerContext (trait) to hook into the produce event — it’s like a callback.

Start by creating a struct and an empty implementation for the ClientContext trait (this is mandatory).
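That could look like this (the struct name matches the snippets later in the post):

use rdkafka::client::ClientContext;

struct ProduceCallbackLogger;

impl ClientContext for ProduceCallbackLogger {}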

Now comes the main part where we implement the delivery function in the ProducerContext trait.
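A sketch of that implementation; the log format mirrors the output shown later, and error handling is kept minimal:

use rdkafka::message::{DeliveryResult, Message};
use rdkafka::producer::ProducerContext;

impl ProducerContext for ProduceCallbackLogger {
    type DeliveryOpaque = ();

    fn delivery(
        &self,
        delivery_result: &DeliveryResult<'_>,
        _delivery_opaque: Self::DeliveryOpaque,
    ) {
        match delivery_result {
            Ok(msg) => {
                let key: &str = msg.key_view().unwrap().unwrap();
                println!(
                    "produced message with key {} in offset {} of partition {}",
                    key,
                    msg.offset(),
                    msg.partition()
                );
            }
            Err((err, _msg)) => {
                println!("failed to produce message: {}", err);
            }
        }
    }
}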

We match against the DeliveryResult (which is a Result after all) to account for success (Ok) and failure (Err) scenarios. All we do is simply log the message in both cases, since this is just an example. You could do pretty much anything you wanted to here (don’t go crazy though!)

We’ve ignored DeliveryOpaque, which is an associated type of the ProducerContext trait.

We need to make sure that we plug in our ProducerContext implementation. We do this by using the create_with_context method (instead of create) and by providing the correct type parameter for the BaseProducer as well.

let producer: BaseProducer<ProduceCallbackLogger> = ClientConfig::new().set(....)
...
.create_with_context(ProduceCallbackLogger {})
...

How Does the Callback Get Called?

Ok, we have the implementation, but we need a way to trigger it! One of the ways is to call flush on the producer. So, we could write our producer as such:

  • add producer.flush(Duration::from_secs(3));, and
  • comment the sleep (just for now)
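Sketched, the loop body then becomes (flush comes from the rdkafka::producer::Producer trait):

producer
    .send(
        BaseRecord::to("rust")
            .key(&format!("key-{}", i))
            .payload(&format!("value-{}", i)),
    )
    .map_err(|(err, _record)| err)
    .expect("failed to enqueue message");

// Blocks until in-flight messages are delivered, which in turn fires
// the delivery callback on our ProducerContext
producer.flush(Duration::from_secs(3));
//thread::sleep(Duration::from_secs(3));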

Hold On, We Can Do Better!

The send method is non-blocking (by default), but by calling flush after each send, we have now converted it into a synchronous invocation – not recommended from a performance perspective.

We can improve the situation by using a ThreadedProducer. It takes care of invoking the poll method in a background thread to ensure that the delivery callback notifications are delivered. Doing this is very simple — just change the type from BaseProducer to ThreadedProducer!

// before: BaseProducer<ProduceCallbackLogger>
// after:  ThreadedProducer<ProduceCallbackLogger>

Also, we don’t need the call to flush anymore.

...
//producer.flush(Duration::from_secs(3));
//println!("flushed message");
thread::sleep(Duration::from_secs(3));
...

The code is available in src/2_threaded_producer.rs

Run the Program Again

  • Rename the file src/2_threaded_producer.rs to main.rs and
  • execute cargo run

Output:

sending message
sending message
produced message with key key-1 in offset 6 of partition 2
produced message with key key-2 in offset 3 of partition 0
sending message
produced message with key key-3 in offset 7 of partition 

As expected, you should be able to see the producer event callback, denoting that the messages were indeed sent to the Kafka topic. Of course, you can connect to the topic directly and double-check, just like before:

To try a failure scenario, try using an incorrect topic name and notice how the Err variant of the delivery implementation gets invoked.

Sending JSON Messages

So far, we have only been sending strings as keys and values. JSON is a commonly used message format, so let’s see how to use it.

Assume we want to send User info which will be represented using this struct:
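The struct could be as simple as this (the exact fields are an assumption):

use serde::{Deserialize, Serialize};

#[derive(Debug, Serialize, Deserialize)]
struct User {
    id: i32,
    email: String,
}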


We can then use the serde_json library to serialize this as JSON. All we need is to use the custom derives from serde: Deserialize and Serialize.

Change the producer loop:

  • Create a User instance
  • Serialize it to a JSON string using to_string_pretty
  • Include that in the payload

You can also use to_vec (instead of to_string()) to convert it into a Vec of bytes (Vec<u8>).
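Putting it together, and reusing the ThreadedProducer from the previous section, the loop could look like this (the key format and topic are as before; the email values are made up):

for i in 1..100 {
    let user = User {
        id: i,
        email: format!("user-{}@example.com", i),
    };
    // Serialize the struct into a (pretty-printed) JSON string payload
    let user_json =
        serde_json::to_string_pretty(&user).expect("failed to serialize");

    producer
        .send(
            BaseRecord::to("rust")
                .key(&format!("user-{}", i))
                .payload(&user_json),
        )
        .map_err(|(err, _record)| err)
        .expect("failed to enqueue message");
    thread::sleep(Duration::from_secs(3));
}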

To Run the Program…

  • Rename the file src/3_JSON_payload.rs to main.rs, and
  • execute cargo run

Consume from the topic:

You should see messages with a String key (e.g. user-34) and JSON value:

Is There a Better Way?

Yes! If you are used to the declarative serialization/de-serialization approach in the Kafka Java client (and probably others as well), you may not like this “explicit” approach. Just to put things in perspective, this is how you’d do it in Java:

Notice that you simply configure the producer to use KafkaJsonSchemaSerializer, and the User class is serialized to JSON.

rust-rdkafka provides something similar with the ToBytes trait. Here is what it looks like:
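Its definition is essentially:

pub trait ToBytes {
    /// Converts the provided data to bytes.
    fn to_bytes(&self) -> &[u8];
}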

Self-explanatory, right? There are existing implementations for String, Vec<u8>, etc., so you can use these types as a key or value without any additional work – this is exactly what we just did. But the problem is that the way we did it was “explicit,” i.e., we converted the User struct into a JSON String and passed it on.

What if we could implement ToBytes for User?
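A naive attempt would serialize inside to_bytes, sketched here with serde_json::to_vec:

use rdkafka::message::ToBytes;

impl ToBytes for User {
    fn to_bytes(&self) -> &[u8] {
        let b = serde_json::to_vec(self).expect("failed to serialize");
        b.as_slice() // the compiler rejects this: `b` is dropped at the end of the function
    }
}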

You will see a compiler error:

cannot return value referencing local variable `b`
returns a value referencing data owned by the current function

For additional background, please refer to this GitHub issue: https://github.com/fede1024/rust-rdkafka/issues/128. I would be happy to see another example that works with ToBytes – please drop in a note if you have inputs on this!

TL;DR is that it’s best to stick to the “explicit” way of doing things unless you have a ToBytes implementation that “does not involve an allocation and cannot fail”.

Wrap Up

That’s it for the first part! Part 2 will cover topics around Kafka consumers.





Too many links in my header? (or, how to manage a situation with many links in the header)


Hi everyone,

I am encountering a web design / UX / UI problem on my website regarding the menu.

Here is how it looks:

[Screenshot of the current header menu]

As you can see, the menu already includes a good number of items:

  • Home page

  • Blog (with the categories appearing on hover)

  • Guides (links to famous articles which people actually really need)

  • Formation (= Training in French, which is a product I’m selling)

  • Contact

  • Connexion = Log In — which is replaced by the user’s first name with a fontawesome icon when he’s logged in

  • Search icon

I am willing to widen the scope of things I am offering, which would mean adding at least two links.

As you can see, the space between the logo and the menu is going to fill up very quickly. In terms of responsiveness, it also means I would have to display a burger menu even when the window is just slightly smaller, which is not something I like (people aren’t always using full-screen windows when browsing on their computer).

Consequently, I am a bit at a loss regarding what I should do:

  • Should I move the menu to a lower line (but that would push more page content below the fold)?

  • I don’t really want to group menu items together, as each one is very different, and I hardly see what kind of grouping would make sense.

So I came here to ask for your ideas and points of view on the situation 🙂

Thanks a lot in advance,

Adrien





C#: Pitfalls in String Pool


As software developers, we always want our software to work properly. We’ll do everything to improve its quality. To find the best solution, we are ready to use parallelization or any of various other optimization techniques. One of these optimization techniques is so-called string interning. It allows you to reduce memory usage, and it also makes string comparison faster. However, everything is good in moderation. Interning at every turn is not worth it. Further on, I’ll show you how not to slip up by creating a hidden bottleneck in the form of the String.Intern method in your application.

In case you’ve forgotten, let me remind you that string is a reference type in C#. Therefore, the string variable itself is just a reference that lies on the stack and stores an address. The address points to an instance of the String class located on the heap.

There are several ways to calculate how many bytes a string object takes on the heap: the version by Jon Skeet and the version by Timur Guev (the latter article is in Russian). I used the second option, which estimates a string at roughly 2 * length + 26 bytes on a 64-bit system. Even if this formula is not 100% exact, we can still estimate the size of string objects. For example, about 4.7 million strings (each 100 characters long) are enough to take up 1 GB of RAM. Let’s say there’s a large number of duplicates among the strings in a program. So, it’s worth using the interning functionality built into the framework. Now, why don’t we briefly recap what string interning is?

String Interning

The idea of string interning is to store only one instance of the String type in memory for identical strings. When running an app, the virtual machine creates an internal hash table, called the interning table (sometimes it is called String Pool). This table stores references to each unique string literal declared in the program. In addition, using the two methods described below, we can get and add references to string objects to this table by ourselves. If an application contains numerous strings (which are often identical), it makes no sense to create a new instance of the String class every time. Instead, you can simply refer to an instance of the String type that has already been created on the heap. To get a reference to it, access the interning table. The virtual machine itself interns all string literals in the code (to find more about interning tricks, check this article). We may choose one of two methods: String.Intern and String.IsInterned.

The first one takes a string as input. If there’s an identical string in the interning table, it returns a reference to an object of the String type that already exists on the heap. If there’s no such string in the table, the reference to this string object is added to the interning table. Then, it is returned from the method. The IsInterned method also accepts a string as input and returns a reference from the interning table to an existing object. If there’s no such object, null is returned (everyone knows about the non-intuitive return value of this method).
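A small illustration of the two methods; the behavior shown in the comments holds because the "pool" literal below is itself interned by the virtual machine:

using System;

class InterningDemo
{
    static void Main()
    {
        // Built at run time, so this instance is not automatically interned
        string runtime = new string(new[] { 'p', 'o', 'o', 'l' });

        Console.WriteLine(Object.ReferenceEquals(runtime, "pool")); // False
        Console.WriteLine(String.IsInterned(runtime) != null);      // True

        // Returns the pooled instance; the literal and the result are the same object
        string interned = String.Intern(runtime);
        Console.WriteLine(Object.ReferenceEquals(interned, "pool")); // True
    }
}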

Using interning, we reduce the number of new string objects by working with existing ones through references obtained via the Intern method. Thus, we do not create a large number of new objects. So, we save memory and improve program performance. After all, many string objects, references to which quickly disappear from the stack, can lead to frequent garbage collection. It will negatively affect the overall program performance. Interned strings won’t disappear up to the end of the process, even if the references to these objects are no longer in the program. This thing is worth paying attention to. Using interning to reduce memory consumption can produce the opposite effect.

Interning strings can boost performance when comparing these very strings. Let’s take a look at the implementation of the String.Equals method:
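From the .NET Framework reference source, it is approximately:

public bool Equals(String value)
{
    if (value == null)
        return false;

    // Equal interned strings short-circuit here, without a character comparison
    if (Object.ReferenceEquals(this, value))
        return true;

    if (this.Length != value.Length)
        return false;

    return EqualsHelper(this, value);
}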

Before calling the EqualsHelper method, where a character-by-character comparison of strings is performed, the Object.ReferenceEquals method checks for the equality of references. If the strings are interned, the Object.ReferenceEquals method returns true when the strings are equal (without comparing the strings themselves character by character). Of course, if the references are not equal, then the EqualsHelper method will be called, and the subsequent character-by-character comparison will occur. After all, the Equals method does not know that we are working with interned strings. Also, if the ReferenceEquals method returns false for two strings we know to be interned, the compared strings are different.

If you are sure that the input strings are interned at a specific place in the program, then you can compare them using the Object.ReferenceEquals method. However, it’s not the greatest approach. There is always a chance that the code will change in the future, or that it will be reused in another part of the program, so non-interned strings can get into it. In that case, when comparing two identical non-interned strings via the ReferenceEquals method, we will wrongly assume that they are not identical.

Interning strings for later comparison seems justified only if you plan to compare interned strings quite often. Remember that interning an entire set of strings also takes some time. Therefore, you shouldn’t perform it to compare several instances of strings once.

Well, we revised what string interning is. Now, let’s move on to the problem I’ve faced.

Briefly on How it Started

In our bug tracker, there was a task created long ago. It required some research on how parallelizing the C++ code analysis can save analysis time. It would be great if the PVS-Studio analyzer worked in parallel on several machines when analyzing a single project. I chose IncrediBuild as the software that allows such parallelization. IncrediBuild allows you to run different processes in parallel on machines located on the same network. For example, you can parallelize source files compiling on different company machines (or in a cloud). Thus, we save time on the building process. Game developers often use this software.

Well, I started working on this task. First, I selected a project and analyzed it with PVS-Studio on my machine. Then, I ran the analysis using IncrediBuild, parallelizing the analyzer processes across the company’s machines. At the end, I summed up the results of such parallelization. With positive results in hand, we would be able to offer our clients such solutions to speed up the analysis.

I chose the Unreal Tournament project. We managed to persuade the programmers to install IncrediBuild on their machines. As a result, we had a combined cluster with about 145 cores.

I analyzed the Unreal Tournament project using the compilation monitoring system in PVS-Studio. So, I worked as follows: I ran the CLMonitor.exe program in monitor mode and performed a full build of Unreal Tournament in Visual Studio. Then, after the build, I ran CLMonitor.exe again, but in analysis launch mode. Depending on the value specified in the PVS-Studio settings for the ThreadCount parameter, CLMonitor.exe runs the corresponding number of PVS-Studio.exe child processes at the same time. These processes analyze the individual C++ source files: one PVS-Studio.exe child process analyzes one source file and passes the results back to CLMonitor.exe.

Everything is easy: in the PVS-Studio settings, I set the ThreadCount parameter equal to the number of available cores (145). I ran the analysis, getting ready for 145 PVS-Studio.exe processes executing in parallel on remote machines. IncrediBuild has Build Monitor, a user-friendly parallelization monitoring system. Using it, you can observe the processes running on remote machines; that is what I did during the analysis.

It seemed that nothing could be easier. Relax and watch the analysis process. Then simply record its duration with IncrediBuild and without. However, in practice, it turned out to be a little bit complicated…

The Problem, Its Location, and Solution

During the analysis, I could switch to other tasks. I could also just meditate, looking at PVS-Studio.exe running in the Build Monitor window. As the analysis with IncrediBuild ended, I compared its duration with the results of the run without IncrediBuild. The difference was significant. However, the overall result could have been better. It was 182 minutes on one machine with 8 threads and 50 minutes using IncrediBuild with 145 threads. The number of threads increased by 18 times, while the analysis time decreased by only 3.5 times. Finally, I glanced at the result in the Build Monitor window. Scrolling through the report, I noticed something weird on the chart.

I noticed that PVS-Studio.exe executed and completed successfully. But then for some reason, the process paused before starting the next one. It happened again and again. Pause after pause. These downtimes led to a noticeable delay and did their bit to prolong the analysis time. At first, I blamed IncrediBuild. Probably it performs some kind of internal synchronization and slows down the launch.

I shared the results with my senior colleague. He didn’t jump to conclusions. He suggested looking at what’s going on inside our CLMonitor.exe app right when downtime appears on the chart. I ran the analysis again. Then, I noticed the first obvious “failure” on the chart. I connected to the CLMonitor.exe process via the Visual Studio debugger and paused it. Opening the Threads window, my colleague and I saw about 145 suspended threads. Reviewing the places in the code where the execution had paused, we saw code lines with similar content.

What do these lines have in common? Each of them uses the String.Intern method. And it seems justified, because these are the places where CLMonitor.exe handles data from PVS-Studio.exe processes. Data is written to objects of the ErrorInfo type, which encapsulates information about a potential error found by the analyzer. Also, we intern quite reasonable things, namely paths to source files. One source file may contain many errors, so it doesn’t make sense for ErrorInfo objects to contain different string objects with the same content. It’s fair enough to just refer to a single object on the heap.

Without a second thought, I realized that string interning had been applied at the wrong moment. So, here’s the situation we observed in the debugger. For some reason, 145 threads were hanging on the execution of the String.Intern method. Meanwhile, the custom task scheduler LimitedConcurrencyLevelTaskScheduler inside CLMonitor.exe couldn’t start a new thread that would later start a new PVS-Studio.exe process (which IncrediBuild would then run on a remote machine). After all, from the scheduler’s point of view, the thread had not yet completed its execution: it was transforming the data received from PVS-Studio.exe into ErrorInfo, followed by string interning. The completion of the PVS-Studio.exe process doesn’t mean anything to the thread. The remote machines were idle, the threads were still active, and the limit of 145 threads did not allow the scheduler to start a new one.

A larger value for the ThreadCount parameter would not solve the problem. It would only increase the queue of threads hanging on the execution of the String.Intern method.

We did not want to remove interning at all. It would increase the amount of RAM consumed by CLMonitor.exe. Eventually, we found a fairly simple and elegant solution. We decided to move interning from the thread that runs PVS-Studio.exe to a slightly later place of code execution (in the thread that directly generates the error report).

As my colleague said, we managed to make a very precise edit of just two lines. Thus, we solved the problem of idle remote machines. So, we ran the analysis again. There were no significant time intervals between PVS-Studio.exe launches. The analysis time decreased from 50 minutes to 26, that is, almost by half. Now, let’s take a look at the overall result that we got using IncrediBuild and 145 available cores. The total analysis time decreased by 7 times. That’s far better than 3.5 times.

String.Intern – Why Is It So Slow? The CoreCLR Code Review

It’s worth noting that once we saw the threads hanging at the places where we call the String.Intern method, we almost instantly suspected that under the hood this method has a critical section with some kind of lock. Since each thread can write to the interning table, there must be some synchronization mechanism inside the String.Intern method that prevents several threads from overwriting each other’s data. To confirm our assumptions, we decided to look at the implementation of the String.Intern method in the reference source. We noticed that inside our interning method there was a call to the Thread.GetDomain().GetOrInternString(str) method. Well, take a look at its implementation:

Now, it’s getting more interesting. This method is imported from some other module. Which one? Since the CLR VM itself does the string interning, my colleague guided me directly to the .NET runtime repository. After downloading the repository, we went to the CoreCLR solution. We opened it and searched the entire solution. There we found the GetOrInternString method with the appropriate signature:

So, we saw a call to the GetInternedString method. In the body of this method, we noticed the following code:

The execution thread gets into the else branch only if the method that searches for a reference to the String object (the GetValue method) in the interning table returns false. Let’s move on to the code in the else branch. Here we are interested in the line where an object of the CrstHolder type named gch is created. Now, we turn to the CrstHolder constructor and see the following code:

We notice the call to the AcquireLock method. It’s getting better. Here’s the code of the AcquireLock method:

In fact, that’s the entry point to the critical section – the call to the Enter method. After I’d read the comment “Acquire the lock,” I had no doubts that this method deals with locking. I didn’t see much point in diving further into the CoreCLR code. So, we were right. When a new entry is added to the interning table, the thread enters the critical section, forcing all other threads to wait for the lock to be released. Just before the m_StringToEntryHashTable->InsertValue method is called, the object of the CrstHolder type is created, and with it the critical section is entered.

The lock disappears immediately after we exit the else branch, when the destructor, which calls the ReleaseLock method, runs for the gch object:

When there are few threads, the downtime can be small. But when their number increases, for example to 145 (as happened with IncrediBuild), each thread that tries to add a new entry to the interning table temporarily blocks the other 144 threads that are also trying to add new entries. The results of these locks are what we observed in the Build Monitor window.

Conclusion

I hope that this case will help you apply string interning more carefully and thoughtfully, especially in multithreaded code. After all, the locks taken while adding new records to the interning table may become a bottleneck, as in our case. It’s great that we were able to find out the truth and solve the detected problem. That made the analyzer work faster.

Thank you for reading.





Can’t access objects pushed into an array?



Hey guys, I'm having trouble accessing these objects after I've pushed them into an array. How can I get access to the 1st province object? homes[0][0].province gives me undefined. Any help would be much appreciated!

https://preview.redd.it/5w7zwy3of8s61.png?width=1076&format=png&auto=webp&s=feb50c5dfdd2f5461980d9692d0896ecda8d9bb9

submitted by /u/podkolzin





How to keep consistent spacing of elements within boxes?


I am learning to use Affinity Publisher, and I need to align text within these boxes. The problem I have is that I don’t know how to always keep the text box the same distance from the top and left sides of the box.

Any ideas or tips? In the past, I would manually draw out a small line and lay that out between elements to keep them consistent. But, I feel there must be an easier way.

I also don’t know how to properly ask this question, is there a term for what I’m looking for?

[Screenshot of the boxes with inconsistently spaced text]

The text here is not the same distance from the box it is contained in. How do I make the spacing consistent?





The Most Popular Angular UI Libraries To Try in 2021


Introduction

Angular is one of the most popular JavaScript web frameworks. Angular’s approach to organizing the programmer’s work involves hiding the execution of various service operations in the depths of the framework, giving the developer convenient tools built on top of internal mechanisms. Angular, like React, encourages the use of components and splitting the application interface into small reusable chunks.

We’ve made a list of libraries for Angular that you may find useful in your next or current project. Most of them are designed for Angular 2+; however, some of them are suitable for older versions of the framework. One advantage here is that you can extract individual components from Angular libraries and use them in a project without installing the entire library.

Clarity

Clarity is an open-source design system created by VMware that has 6.2K stars on GitHub. It is a combination of UX design guidelines, an HTML/CSS framework, and Angular components. Clarity provides developers with a rich set of high-performance data-bound components. A huge number of interactive elements can be implemented by using this library. Among them, there are accordion, date picker, login, signpost, timeline, toggle, and many others.

Visual components of Clarity library (source: https://clarity.design/)

Login component of Clarity library (source: https://clarity.design/)

Timeline component of Clarity library (source: https://clarity.design/)

Material

Material is an official Angular component library that implements Google’s Material Design concepts. This library has 21.2K stars on GitHub. These UI components can be thought of as code examples, written according to the guidelines of the Angular development team. Among the interactive elements that can be implemented by using this library, there are autocomplete, form field, progress spinner, slider, stepper, tabs, and others.

Visual components of Material library (source: https://material.angular.io/)

NGX Bootstrap

The NGX Bootstrap library has about 5.3K stars on GitHub. Here you can find basic components that implement the capabilities of the Bootstrap template written specifically for Angular. It is suitable for developing desktop and mobile applications and is designed with extensibility and adaptability in mind. One of the features of this library is a variety of element forms. Among the element forms that could be added to your application, accordion with custom HTML, various forms of carousels, pager pagination, and different ratings deserve special mention.

Custom HTML component of NGX Bootstrap library (source: https://valor-software.com/ngx-bootstrap/)

Basic carousel component of NGX Bootstrap library (source: https://valor-software.com/ngx-bootstrap/)

Prime NG

Prime NG is a library that includes an extensive set of more than 70 UI components. At the same time, different types of styling are available here, for example, Material Design and Flat Design. Prime NG has approximately 6.6K stars on GitHub and is used by companies such as eBay, Fox, and many others. All this suggests that this library is worth the attention of those who are looking for a suitable set of components for their project. The library also includes the following features: different forms of fields, various buttons, menu forms, messages, toasts, a timeline, and many others.

Forms of Field component of Prime NG library (source: https://www.primefaces.org/primeng/)

Forms of Button component of Prime NG library (source: https://www.primefaces.org/primeng/)

Forms of Menu component of Prime NG library (source: https://www.primefaces.org/primeng/)

The Message, Toast, and Timeline components of Prime NG library (source: https://www.primefaces.org/primeng/)

NG Bootstrap

NG Bootstrap, a popular library that includes Bootstrap 4 style components for Angular, has around 7.7K stars on GitHub. It serves as a replacement for the angular-UI Bootstrap project, which is no longer supported. NG Bootstrap has a high level of test coverage and no third-party JS dependencies. The features that deserve to be highlighted are datepicker with various options, different progress bars, basic table stylings, different toasts, and others.

Datepicker component of NG Bootstrap library (source: https://ng-bootstrap.github.io/)

Progress bar component of NG Bootstrap library (source: https://ng-bootstrap.github.io/)

Table component of NG Bootstrap library (source: https://ng-bootstrap.github.io/)

Forms of Toast component of NG Bootstrap library (source: https://ng-bootstrap.github.io/)

Teradata Covalent UI Platform

Teradata Covalent UI Platform has over 2.2K stars on GitHub. This library helps make code more readable with the help of style guides and design patterns. Thanks to the platform’s ready-made configuration, developers can concentrate on the app’s functionality rather than on customization, making the development process faster.

The Atomic Design Principles involve modular design and unite smaller components into bigger ones. This platform successfully follows these principles and, as an example, unites buttons into forms. Among other interesting features, there are user profiles, breadcrumbs, steppers, a text editor, and others.

User profile component of Teradata Covalent UI Platform library (source: https://teradata.github.io/covalent/v3/#/)

Stepper component of Teradata Covalent UI Platform library (source: https://teradata.github.io/covalent/v3/#/)

Text editor component of Teradata Covalent UI Platform library (source: https://teradata.github.io/covalent/v3/#/)

Nebular

Nebular has 6.9K stars on GitHub. It is a customizable component library that makes the application development process much simpler. Nebular has six visual themes and a big number of different customizable components. Also, it is worth mentioning that it has security modules that offer authentication and security layers for APIs. Among its components, there are steppers, spinners, chats, registration forms, and others.

Stepper component of Nebular library (source: https://akveo.github.io/nebular/)

Spinner component of Nebular library (source: https://akveo.github.io/nebular/)

Chat component of Nebular library (source: https://akveo.github.io/nebular/)

Registration form component of Nebular library (source: https://akveo.github.io/nebular/)

Onsen UI

The Onsen UI library is a popular solution for developing hybrid mobile apps for Android and iOS using JavaScript. This library has 8.3K stars on GitHub; it provides Angular bindings and allows you to use different visual styles.

Among other Onsen UI features, there are action sheets, alert dialogs, various buttons, popovers, and many others.

Action sheet component of Onsen UI library (source: https://onsen.io/angular2/)

Alert dialog component of Onsen UI library (source: https://onsen.io/angular2/)

Forms of Button component of Onsen UI library (source: https://onsen.io/angular2/)

Popover component of Onsen UI library (source: https://onsen.io/angular2/)

NG-Zorro

Components from the NG-Zorro library are fully typed in TypeScript. The goal of this project is to provide developers with high-end components for creating Ant Design-style user interfaces. This interesting library was created by Chinese developers and has about 7.4K stars on GitHub.

Its features include menu bars, page headers, sliders, avatars, and many others.

Forms of Page Header component of NG-Zorro library (source: https://ng.ant.design/docs/introduce/en)

Forms of Slider component of NG-Zorro library (source: https://ng.ant.design/docs/introduce/en)

Forms of Avatar component of NG-Zorro library (source: https://ng.ant.design/docs/introduce/en)

Vaadin

Visual elements from the Vaadin library are designed to bridge the gap between Angular components and Polymer elements. This library supports Material Design and contains components suitable for mobile and desktop development. It should be noted that its components are stored in separate repositories.

Other notable features are split layouts, buttons, app layouts, upload Forms, and many others.

App Layout component of Vaadin library (source: https://vaadin.com/)

NG Semantic-UI

The NG Semantic-UI library includes 27 components and has about 1K stars on GitHub. It is based on the popular Semantic-UI front-end solution, presented as components for Angular applications.

It includes such tools as cards, loaders, accordions, menus, and many others.

Card component of NG Semantic-UI library (source: https://ng-semantic.herokuapp.com/#/)

Forms of Menu component of NG Semantic-UI library (source: https://ng-semantic.herokuapp.com/#/)

NG2 Charts

NG2 Charts is a library with 1.9K stars on GitHub. It gives the developer Angular directives for creating six types of charts, with properties based on chart.js. This library can be used to render large datasets and display lists.

It supports line charts, bar charts, doughnut charts, radar charts, pie charts, polar area charts, bubble charts, scatter charts, and others.

Line Chart of NG2 Charts library (source: https://valor-software.com/ng2-charts/#/LineChart)

Bar Chart of NG2 Charts library (source: https://valor-software.com/ng2-charts/#/LineChart)

Doughnut Chart of NG2 Charts library (source: https://valor-software.com/ng2-charts/#/LineChart)

Radar Chart of NG2 Charts library (source: https://valor-software.com/ng2-charts/#/LineChart)

Pie Chart of NG2 Charts library (source: https://valor-software.com/ng2-charts/#/LineChart)

Polar Area Chart of NG2 Charts library (source: https://valor-software.com/ng2-charts/#/LineChart)

Bubble Chart of NG2 Charts library (source: https://valor-software.com/ng2-charts/#/LineChart)

Scatter Chart of NG2 Charts library (source: https://valor-software.com/ng2-charts/#/LineChart)

Conclusion

Despite the fact that Angular usage has declined according to the latest State of JS survey, many professionals still prefer Angular development thanks to its advantages over other frameworks. So, it is very important for future applications to follow design trends and be on the same wavelength as users. With the help of libraries that let you implement visual elements fitting your application, you can create an outstanding app that your users will love.





Client wants opposite logos for regular and b&w versions?


I’m doing an internship for a nonprofit where I’m building their brand. I made a basic graphic to illustrate the context.

I presented my client (a team of 5 people) with two logo concepts, Logo A and Logo B; they’re exactly the same, except A is outlined while B is filled. They want B1 for their regular logo but prefer A2 for the b&w version.

If I agree to their request, will it be weird/incorrect in the long run? If so, how can I convince them otherwise? I’ve yet to come across a logo where a regular filled version has an outlined b&w version.

[Graphic comparing the outlined and filled logo versions]





Looking to identify this logo



Sorry for the blurry picture. I'm looking to identify this "The Rocket" patch. It looks like it's from the '60s or '70s; I've googled for hours and haven't found anything.

https://preview.redd.it/yw0byw3lk6s61.png?width=1450&format=png&auto=webp&s=0d9efdfd6c564b0dc936f27edc8aa432726a257b

submitted by /u/yodableu





Headless Form Submission With the WordPress REST API


If you’re building a WordPress site, you need a good reason not to choose a WordPress form plugin. They are convenient and offer plenty of customizations that would take a ton of effort to build from scratch. They render the HTML, validate the data, store the submissions, and provide integration with third-party services.

But suppose we plan to use WordPress as a headless CMS. In this case, we will be mainly interacting with the REST API (or GraphQL). The front-end part becomes our responsibility entirely, and we can’t rely anymore on form plugins to do the heavy lifting in that area. Now we’re in the driver’s seat when it comes to the front end.

Forms were a solved problem, but now we have to decide what to do about them. We have a couple of options:

  • Do we use our own custom API if we have such a thing? If not, and we don’t want to create one, we can go with a service. There are many good static form providers, and new ones are popping up constantly.
  • Can we keep using the WordPress plugin we already use and leverage its validation, storage, and integration?

The most popular free form plugin, Contact Form 7, has a submission REST API endpoint, and so does the well-known paid plugin, Gravity Forms, among others.

From a technical standpoint, there’s no real difference between submitting the form‘s data to an endpoint provided by a service or a WordPress plugin. So, we have to decide based on different criteria. Price is an obvious one; after that is the availability of the WordPress installation and its REST API. Submitting to an endpoint presupposes that it is always available publicly. That’s already clear when it comes to services because we pay for them to be available. Some setups might limit WordPress access to only editing and build processes. Another thing to consider is where you want to store the data, particularly in a way that adheres to GPDR regulations.

When it comes to features beyond the submission, WordPress form plugins are hard to match. They have their ecosystem, add-ons capable of generating reports, PDFs, readily available integration with newsletters, and payment services. Few services offer this much in a single package.

Even if we use WordPress in the “traditional” way with the front end based on a WordPress theme, using a form plugin’s REST API might make sense in many cases. For example, if we are developing a theme using a utility-first CSS framework, styling the rendered form with fixed markup structured with a BEM-like class convention leaves a sour taste in any developer’s mouth.

The purpose of this article is to present the two WordPress form plugins submission endpoints and show a way to recreate the typical form-related behaviors we got used to getting out of the box. When submitting a form, in general, we have to deal with two main problems. One is the submission of the data itself, and the other is providing meaningful feedback to the user.

So, let’s start there.

The endpoints

Submitting data is the more straightforward part. Both endpoints expect a POST request, and the dynamic part of the URL is the form ID.

Contact Form 7 REST API is available immediately when the plugin is activated, and it looks like this:

https://your-site.tld/wp-json/contact-form-7/v1/contact-forms/<FORM_ID>/feedback

If we’re working with Gravity Forms, the endpoint takes this shape:

https://your-site.tld/wp-json/gf/v2/forms/<FORM_ID>/submissions

The Gravity Forms REST API is disabled by default. To enable it, we have to go to the plugin’s settings, then to the REST API page, and check the “Enable access to the API” option. There is no need to create an API key, as the form submission endpoint does not require it.

The body of the request

Our example form has five fields with the following rules:

  • a required text field
  • a required email field
  • a required date field that accepts dates before October 4, 1957
  • an optional textarea
  • a required checkbox

For Contact Form 7, the request body’s keys have to be defined with the form-tags syntax:

{
  "somebodys-name": "Marian Kenney",
  "any-email": "[email protected]",
  "before-space-age": "1922-03-11",
  "optional-message": "",
  "fake-terms": "1"
}

Gravity Forms expects the keys in a different format. We have to use an auto-generated, incremental field ID with the input_ prefix. The ID is visible when you are editing the field.

{
  "input_1": "Marian Kenney",
  "input_2": "[email protected]",
  "input_3": "1922-03-11",
  "input_4": "",
  "input_5_1": "1"
}

Submitting the data

We can save ourselves a lot of work if we use the expected keys for the inputs’ name attributes. Otherwise, we have to map the input names to the keys.

Putting everything together, we get an HTML structure like this for Contact Form 7:

<form action="https://your-site.tld/wp-json/contact-form-7/v1/contact-forms/<FORM_ID>/feedback" method="post">
  <label for="somebodys-name">Somebody's name</label>
  <input id="somebodys-name" type="text" name="somebodys-name">
  <!-- Other input elements -->
  <button type="submit">Submit</button>
</form>

In the case of Gravity Forms, we only need to switch the action and the name attributes:

<form action="https://your-site.tld/wp-json/gf/v2/forms/<FORM_ID>/submissions" method="post">
  <label for="input_1">Somebody's name</label>
  <input id="input_1" type="text" name="input_1">
  <!-- Other input elements -->
  <button type="submit">Submit</button>
</form>

Since all the required information is available in the HTML, we are ready to send the request. One way to do this is to use the FormData in combination with the fetch:

const formSubmissionHandler = (event) => {
  event.preventDefault();

  const formElement = event.target,
    { action, method } = formElement,
    body = new FormData(formElement);

  fetch(action, {
    method,
    body
  })
    .then((response) => response.json())
    .then((response) => {
      // Determine if the submission is not valid
      if (isFormSubmissionError(response)) {
        // Handle the case when there are validation errors
      }
      // Handle the happy path
    })
    .catch((error) => {
      // Handle the case when there's a problem with the request
    });
};

const formElement = document.querySelector("form");

formElement.addEventListener("submit", formSubmissionHandler);
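The isFormSubmissionError helper referenced above isn’t shown here; a hypothetical sketch, working against the raw plugin responses, could be:

// Hypothetical helper: each plugin signals success differently
const isFormSubmissionError = (response) =>
  // Contact Form 7 reports a status string, Gravity Forms a boolean
  ('status' in response && response.status !== 'mail_sent') ||
  ('is_valid' in response && !response.is_valid);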

We can send the submission with little effort, but the user experience is subpar, to say the least. We owe to users as much guidance as possible to submit the form successfully. At the very least, that means we need to:

  • show a global error or success message,
  • add inline field validation error messages and possible directions, and
  • draw attention to parts that require attention with special classes.

Field validation

On top of using built-in HTML form validation, we can use JavaScript for additional client-side validation and/or take advantage of server-side validation.

When it comes to server-side validation, both Contact Form 7 and Gravity Forms offer that out of the box and return the validation error messages as part of the response. This is convenient as we can control the validation rules from the WordPress admin.

For more complex validation rules, like conditional field validation, it might make sense to rely only on the server-side because keeping the front-end JavaScript validation in sync with the plugins setting can become a maintenance issue.

If we solely go with the server-side validation, the task becomes about parsing the response, extracting the relevant data, and DOM manipulation like inserting elements and toggle class-names.

Response messages

The response when there is a validation error for Contact Form 7 looks like this:

{
  "into": "#",
  "status": "validation_failed",
  "message": "One or more fields have an error. Please check and try again.",
  "posted_data_hash": "",
  "invalid_fields": [
    {
      "into": "span.wpcf7-form-control-wrap.somebodys-name",
      "message": "The field is required.",
      "idref": null,
      "error_id": "-ve-somebodys-name"
    },
    {
      "into": "span.wpcf7-form-control-wrap.any-email",
      "message": "The field is required.",
      "idref": null,
      "error_id": "-ve-any-email"
    },
    {
      "into": "span.wpcf7-form-control-wrap.before-space-age",
      "message": "The field is required.",
      "idref": null,
      "error_id": "-ve-before-space-age"
    },
    {
      "into": "span.wpcf7-form-control-wrap.fake-terms",
      "message": "You must accept the terms and conditions before sending your message.",
      "idref": null,
      "error_id": "-ve-fake-terms"
    }
  ]
}

On successful submission, the response looks like this:

{
  "into": "#",
  "status": "mail_sent",
  "message": "Thank you for your message. It has been sent.",
  "posted_data_hash": "d52f9f9de995287195409fe6dcde0c50"
}

Compared to this, Gravity Forms’ validation error response is more compact:

{
  "is_valid": false,
  "validation_messages": {
    "1": "This field is required.",
    "2": "This field is required.",
    "3": "This field is required.",
    "5": "This field is required."
  },
  "page_number": 1,
  "source_page_number": 1
}

But the response on a successful submission is bigger:

{
  "is_valid": true,
  "page_number": 0,
  "source_page_number": 1,
  "confirmation_message": "<div id='gform_confirmation_wrapper_1' class='gform_confirmation_wrapper '><div id='gform_confirmation_message_1' class='gform_confirmation_message_1 gform_confirmation_message'>Thanks for contacting us! We will get in touch with you shortly.</div></div>",
  "confirmation_type": "message"
}

While both contain the information we need, they don‘t follow a common convention, and both have their quirks. For example, the confirmation message in Gravity Forms contains HTML, and the validation message keys don’t have the input_ prefix — the prefix that’s required when we send the request. On the other side, validation errors in Contact Form 7 contain information that is relevant only to their front-end implementation. The field keys are not immediately usable; they have to be extracted.

In a situation like this, instead of working with the response we get, it’s better to come up with a desired, ideal format. Once we have that, we can find ways to transform the original response to what we see fit. If we combine the best of the two scenarios and remove the irrelevant parts for our use case, then we end up with something like this:

{
  "isSuccess": false,
  "message": "One or more fields have an error. Please check and try again.",
  "validationError": {
    "somebodys-name": "This field is required.",
    "any-email": "This field is required.",
    "input_3": "This field is required.",
    "input_5": "This field is required."
  }
}

And on successful submission, we would set isSuccess to true and return an empty validation error object:

{
  "isSuccess": true,
  "message": "Thanks for contacting us! We will get in touch with you shortly.",
  "validationError": {}
}

Now it’s a matter of transforming what we got into what we need. The code to normalize the Contact Form 7 response is this:

const normalizeContactForm7Response = (response) => {
  // The other possible statuses are different kind of errors
  const isSuccess = response.status === 'mail_sent';
  // A message is provided for all statuses
  const message = response.message;
  const validationError = isSuccess
    ? {}
    : // We transform an array of objects into an object
    Object.fromEntries(
      response.invalid_fields.map((error) => {
        // Extracts the part after "cf7-form-control-wrap"
        const key = /cf7[-a-z]*.(.*)/.exec(error.into)[1];

        return [key, error.message];
      })
    );

  return {
    isSuccess,
    message,
    validationError,
  };
};

The code to normalize the Gravity Forms response winds up being this:

const normalizeGravityFormsResponse = (response) => {
  // Provided already as a boolean in the response
  const isSuccess = response.is_valid;
  const message = isSuccess
    ? // Comes wrapped in a HTML and we likely don't need that
      stripHtml(response.confirmation_message)
    : // No general error message, so we set a fallback
      'There was a problem with your submission.';
  const validationError = isSuccess
    ? {}
    : // We replace the keys with the prefixed version;
      // this way the request and response matches
      Object.fromEntries(
        Object.entries(
            response.validation_messages
        ).map(([key, value]) => [`input_${key}`, value])
      );

  return {
    isSuccess,
    message,
    validationError,
  };
};
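The stripHtml helper used above isn’t shown; in the browser, it could be as small as this sketch:

// Parses the HTML string and returns only its text content
const stripHtml = (html) =>
  new DOMParser().parseFromString(html, 'text/html').body.textContent.trim();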

We are still missing a way to display the validation errors, success messages, and toggling classes. However, we have a neat way of accessing the data we need, and we removed all of the inconsistencies in the responses with a light abstraction. When put together, it’s ready to be dropped into an existing codebase, or we can continue building on top of it.

There are many ways to tackle the remaining part. What makes sense will depend on the project. For situations where we mainly have to react to state changes, a declarative and reactive library can help a lot. Alpine.js was covered here on CSS-Tricks, and it’s a perfect fit both for demonstrations and for production sites. Almost without any modification, we can reuse the code from the previous example. We only need to add the proper directives in the right places.
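A sketch of what that could look like for the Contact Form 7 variant; the component shape, state names, and field wiring are assumptions:

<form
  x-data="contactForm()"
  x-on:submit.prevent="submit"
  action="https://your-site.tld/wp-json/contact-form-7/v1/contact-forms/<FORM_ID>/feedback"
  method="post"
>
  <p x-show="message" x-text="message"></p>
  <label for="somebodys-name">Somebody's name</label>
  <input id="somebodys-name" type="text" name="somebodys-name">
  <span x-text="validationError['somebodys-name']"></span>
  <!-- Other fields follow the same pattern -->
  <button type="submit">Submit</button>
</form>

<script>
  const contactForm = () => ({
    message: '',
    validationError: {},
    submit(event) {
      const { action, method } = event.target;
      fetch(action, { method, body: new FormData(event.target) })
        .then((response) => response.json())
        .then((response) => {
          // Reuse the normalizer from above; state changes re-render the form
          const { message, validationError } =
            normalizeContactForm7Response(response);
          this.message = message;
          this.validationError = validationError;
        });
    },
  });
</script>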

Wrapping up

Matching the front-end experience that WordPress form plugins provide can be done with relative ease for straightforward, no-fuss forms — and in a way that is reusable from project to project. We can even accomplish it in a way that allows us to switch the plugin without affecting the front end.

Sure, it takes time and effort to make a multi-page form, previews of the uploaded images, or other advanced features that we’d normally get baked right into a plugin, but the more unique the requirements we have to meet, the more it makes sense to use the submission endpoint as we don’t have to work against the given front-end implementation that tries to solve many problems, but never the particular one we want.

Using WordPress as a headless CMS to access the REST API of a form plugin to hit the submissions endpoints will surely become a more widely used practice. It’s something worth exploring and to keep in mind. In the future, I would not be surprised to see WordPress form plugins designed primarily to work in a headless context like this. I can imagine a plugin where front-end rendering is an add-on feature that’s not an integral part of its core. What consequences that would have, and if it could have commercial success, remains to be explored but is a fascinating space to watch evolve.


