
How to Implement Beautiful Charts in Flutter

In this article, we are going to learn how to implement beautiful charts in Flutter. Charts are a great visual way to represent any type of data, be it statistical, financial, or usage data. We use charts in a lot of our Flutter app templates, and we are going to describe how we implemented them.

When it comes to data visualization or representation in any field, the first thing that comes to mind is charts. Charts are an effective and efficient mechanism for displaying statistical data that not only make the data easier to read but also easier to compare and contrast. They display data in an informative way that makes it easy for readers to comprehend it as a whole.

Now that we know the importance of charts in the statistical world, we should also know that charts are very useful to display complex data in mobile applications as well. Here, we are going to learn how to add charts to a Flutter mobile app.

The Flutter mobile application development framework is booming in the current context, and it will surely take center stage as a state-of-the-art mobile app development technology in the near future. In this tutorial, we are going to make use of the charts_flutter package to add different charts to our Flutter app UI. The idea is to install the library and use the different chart widgets offered by it. We will also learn how to define the series list values that are used by the chart widgets to visually represent the data.

So, let’s get started!

Setting Up the Flutter Charts Project

First, we need to create a new Flutter project. For that, make sure that the Flutter SDK and other Flutter development requirements are properly installed. If everything is properly set up, then in order to create a project, we can simply run the following command in the desired local directory:
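The standard command is shown below; the project name flutter_charts is just an example, so use whatever name fits your project:

```shell
# Create a new Flutter project (project name is an example)
flutter create flutter_charts
cd flutter_charts
```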

After the project has been set up, we can navigate inside the project directory and execute the following command in the terminal to run the project in either an available emulator or an actual device:
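From inside the project directory, this is simply:

```shell
# Launches the app on a connected device or a running emulator
flutter run
```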

Creating a Home Page UI

Now, we are going to create a home page Scaffold. For that, we need to create a file called home_page.dart. Inside the file, we can use the code from the following code snippet:
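A minimal sketch of what home_page.dart could contain (the title string is a placeholder):

```dart
// home_page.dart -- a minimal sketch of the home page Scaffold
import 'package:flutter/material.dart';

class HomePage extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: Text('Flutter Charts'),
      ),
      body: Center(
        child: Text('Home Page'),
      ),
    );
  }
}
```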

Here, we have returned a simple Scaffold widget with an AppBar and a body from the stateless widget class called HomePage.

Now, we need to import this stateless widget class in our main.dart file and assign it to the home option of the MaterialApp widget as shown in the code snippet below:
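A sketch of the wiring in main.dart might look like this:

```dart
// main.dart -- wiring HomePage into the MaterialApp (sketch)
import 'package:flutter/material.dart';
import 'home_page.dart';

void main() => runApp(MyApp());

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Flutter Charts',
      // Assign our HomePage widget to the home option
      home: HomePage(),
    );
  }
}
```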

Now if we re-run the app, we will get the following result in our emulator screen:

Emulator Results

Adding the charts_flutter Plugin

Since we are going to add charts to our app, we are going to use the package called charts_flutter. This package is a Material Design data visualization library written natively in Dart. It provides a wide range of charts for data visualization, which are lightweight and easy to configure as well. Now, in order to install this library into our Flutter project, we need to add the charts_flutter: ^0.8.1 line to our pubspec.yaml file as directed in the code snippet below:
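The dependencies section of pubspec.yaml would look roughly like this; run flutter pub get afterwards to fetch the package:

```yaml
dependencies:
  flutter:
    sdk: flutter
  charts_flutter: ^0.8.1
```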

After successful installation, we are ready to use the chart widgets offered by this package.

Bar Charts in Flutter

In this section, we are going to learn how to add a bar chart to our Flutter app. Firstly, we are going to create a model file that defines the attributes of the data to be shown in the bar chart. Here, we are going to name the file bitcoin_price_series.dart. Inside, we are going to define a class called BitcoinPriceSeries that takes in three parameters: month, price, and color. The overall code implementation is shown in the code snippet below:
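A sketch of the model class is shown below; the field types (a String month, an int price, and a color converted to the charts library's own Color type) are reasonable assumptions based on the description above:

```dart
// bitcoin_price_series.dart -- data model for one bar of the chart (sketch)
import 'dart:ui';
import 'package:charts_flutter/flutter.dart' as charts;

class BitcoinPriceSeries {
  final String month;
  final int price;
  final charts.Color color;

  // Accept a regular Flutter Color and convert it to the charts library type
  BitcoinPriceSeries(this.month, this.price, Color color)
      : color = charts.ColorUtil.fromDartColor(color);
}
```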

Now, we need to create a new file called bitcoin_price_chart.dart to define the chart structure. Here, we are going to implement a stateless widget called BitcoinPriceChart that returns the Bar chart with series value. The Series configuration offered by the charts library helps us define each series of data. The exact implementation of how to configure the series value is provided in the code snippet below:

Now that we have the list data that holds the bar chart series data, we can apply it to the UI template. We are going to return a Container widget with a Card widget as a child, which holds the BarChart widget taking the list and an animate boolean value as parameters. The overall implementation is provided in the code snippet below:
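Putting the series configuration and the chart widget together, a sketch of bitcoin_price_chart.dart might look like this (sizing values are illustrative):

```dart
// bitcoin_price_chart.dart -- series configuration plus the BarChart UI (sketch)
import 'package:flutter/material.dart';
import 'package:charts_flutter/flutter.dart' as charts;
import 'bitcoin_price_series.dart';

class BitcoinPriceChart extends StatelessWidget {
  final List<BitcoinPriceSeries> data;

  BitcoinPriceChart(this.data);

  @override
  Widget build(BuildContext context) {
    // Each Series describes how to read domain, measure, and color
    // values out of a BitcoinPriceSeries instance
    List<charts.Series<BitcoinPriceSeries, String>> series = [
      charts.Series(
        id: 'Bitcoin Price',
        data: data,
        domainFn: (BitcoinPriceSeries s, _) => s.month,
        measureFn: (BitcoinPriceSeries s, _) => s.price,
        colorFn: (BitcoinPriceSeries s, _) => s.color,
      )
    ];

    return Container(
      height: 300,
      padding: EdgeInsets.all(20),
      child: Card(
        child: charts.BarChart(series, animate: true),
      ),
    );
  }
}
```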

Now, we are going to add the Bar chart to our home page. For that, we need to import the BitcoinPriceChart stateless class widget into the home_page.dart file as shown in the code snippet below:

import 'package:flutter/material.dart';
import 'package:chartpost/bitcoin_price_series.dart';
import 'package:chartpost/bitcoin_price_chart.dart';
import 'package:charts_flutter/flutter.dart' as charts;

After import, we can define the list that stores the data based on BitcoinPriceSeries model as shown in the code snippet below:

Lastly, we need to add the BitcoinPriceChart widget to the body of the Scaffold in the home page by passing the required list data, as shown in the code snippet below:
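A sketch of the updated home page is shown below; the sample months, prices, and colors are made up purely for illustration:

```dart
// home_page.dart (sketch) -- sample data values are illustrative only
class HomePage extends StatelessWidget {
  final List<BitcoinPriceSeries> data = [
    BitcoinPriceSeries('Jan', 33000, Colors.blue),
    BitcoinPriceSeries('Feb', 45000, Colors.green),
    BitcoinPriceSeries('Mar', 58000, Colors.red),
  ];

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(title: Text('Flutter Charts')),
      // Pass the series list to the chart widget
      body: BitcoinPriceChart(data),
    );
  }
}
```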

Hence, we will get the bar chart on our home screen as demonstrated in the emulator screenshot below:

Bar Chart on Homescreen

Hence, we have successfully added a bar chart to our Flutter app.

Pie Charts in Flutter

Now that we know how to configure the data for the bar chart, we can easily add a pie chart as well, using the exact same series list data. The bar chart data series and pie chart data series share the same data format. Hence, we can simply add a pie chart using the PieChart widget offered by the charts library, supplying the series list and animate boolean parameters as shown in the code snippet below:
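Assuming the same series list variable used for the bar chart above, the pie chart portion is a one-widget change (the sizing is illustrative):

```dart
// Added below the BarChart, inside the same Column (sketch)
Container(
  height: 300,
  padding: EdgeInsets.all(20),
  child: Card(
    child: charts.PieChart(series, animate: true),
  ),
)
```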

Note that this PieChart widget was added directly to the bitcoin_price_chart.dart file, just below the BarChart widget inside the Column widget.

Pie Chart Widget

Hence, we have successfully added a Pie chart to our Flutter app as well.


The main goal of this tutorial was to demonstrate how to implement various charts in Flutter. The availability of the charts_flutter library made it easy to add charts to our Flutter app. The series-based data configuration, along with lightweight widgets for a wide variety of charts, makes this library very powerful and efficient. Here, we learned how to add a bar chart and a pie chart. We also learned how to configure the series-based data to feed to the chart widgets.

Now the challenge is to add other types of charts to the Flutter application. The concept for adding the chart widget is the same but the data configuration might be a bit different.


Spreadsheet Cover

How to Handle Spreadsheet Uploads for Your Web App

Handle Spreadsheet Import, Mapping, and Validation for Your Web App

When it comes to data, spreadsheets are incredibly useful and versatile. If your web app deals with any type of data, from sales pipelines to profit and loss statements, you’ve likely dealt with importing CSV files.

One of the first issues you run into with CSV uploads is the formatting of the data. For example:

  • What if the columns are named differently than what you want?
    e.g., a column called Name instead of FullName
  • What if the data is formatted differently?
    e.g., a date formatted MM-DD-YYYY instead of YYYY-MM-DD
  • What if some of the data is invalid?
    e.g., an invalid phone number 123-456-789


Because of these possibilities, you will realize you need some type of column mapping and validation abilities, so you only import valid data that your web app can understand.

For this example, I will be using the new gluestick library, which comes with two parts:

  • gluestick-elements: a set of React components that make it easy to build an intuitive import + validation experience for users.
  • gluestick-api: a Dockerized Python API that handles parsing, validating, and mapping the imported CSV data. It also allows you to send the data directly to a cloud service like AWS S3.

If you’d like, feel free to read the gluestick docs or join their Slack for more information. To give you some reference, the final result is shown below, and an interactive demo is available on CodeSandbox.

Final Mapping Flow

Final Mapping Flow

Without further ado, let’s jump in!

The Backend

Before we begin setting up the front end, let’s get the gluestick-api running on our local machine. Before doing this, make sure you have Python and Docker installed and running.

For this example, we’ll use the gluestick CLI to get started, but if you’d like to do it manually you can follow the docs.

Install the CLI

Let’s start by installing the CLI, which is available on PyPI.

$ pip install gluestick

Install the Docker Image

Now we can pull the latest version of the gluestick-api and create the default config. I recommend doing this in a unique directory.

$ mkdir mygluestick-project
$ cd mygluestick-project
$ gluestick install
Created default gluestick-api configuration.
Pulling the gluestick-api Docker image...
Using default tag: latest
latest: Pulling from hotglue/gluestick-api
Digest: sha256:6d1a0fdbd884e252a5e6f7abf8f227366b7a1be4fd2ddae4cbd37fe4f217bbcf
Status: Image is up to date for hotglue/gluestick-api:latest
Latest gluestick-api Docker image pulled.

From here, you can now configure a target for your data such as AWS S3, but we’ll skip that part for now.

Start the API

Now let’s run the API. By default, it starts on port 5000 but you can change the port using the --port=$PORT option:

$ gluestick run
Starting gluestick-api...
[2021-04-07 20:30:22 +0000] [1] [INFO] Starting gunicorn 20.1.0
[2021-04-07 20:30:22 +0000] [1] [INFO] Listening at: (1)
[2021-04-07 20:30:22 +0000] [1] [INFO] Using worker: sync
[2021-04-07 20:30:22 +0000] [9] [INFO] Booting worker with pid: 9

That’s it! Now we can move forward to configuring the frontend.

The Frontend

Now we can configure the gluestick-elements library in our React app.

If you’d like to follow along, the code for this example is available on CodeSandbox.

Install the Package

Let’s install the package via npm:

npm install --save gluestick-elements

Add the Element

Now we can add the React element to our project! Below is a simple example with the GlueStick component:
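The snippet below is only a rough sketch: the prop names (apiUrl, schema) and their shapes are assumptions, so check the gluestick docs for the real component API. The apiUrl points at the locally running gluestick-api from the previous section.

```jsx
import React from "react";
import { GlueStick } from "gluestick-elements";

// Rough sketch: prop names and schema shape are assumptions, not the
// documented gluestick-elements API -- consult the gluestick docs.
export default function ImportPage() {
  return (
    <GlueStick
      // the gluestick-api started earlier on port 5000
      apiUrl="http://localhost:5000"
      // the target schema the uploaded CSV should be mapped to (illustrative)
      schema={[
        { key: "FullName", label: "Full Name", type: "string" },
        { key: "Phone", label: "Phone Number", type: "phone" },
      ]}
    />
  );
}
```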

Test the Element

Now that everything is running we can test out the whole flow! If you need some testing data, you can download a sample Leads.csv (link downloads Leads.csv file).

Final mapping flow

gluestick will do the following:

  1. Parse the input CSV file, and determine the available columns.
  2. Pick the nearest matching column names as a suggested mapping and run any validation.
  3. Show any invalid rows to the user and tell them what percentage of data has valid information.
  4. Preview the final data for the user.
  5. Send the data to its final destination (S3, Google Cloud Storage, etc.).


That’s all there is to it! The next step is customizing the schema of the file you want, and configuring any target you want.

If you’re interested in gluestick, I recommend taking a look at the docs.

I am more than happy to answer any questions below. Thanks for reading!



ReactJS Vs. AngularJS – DZone Web Dev

The use of JavaScript tools is growing these days, and it can be overwhelming to pick the right technology for the job. Let us discuss two emerging development technologies in use today: ReactJS vs. AngularJS.


ReactJS is an open-source JavaScript library that offers a complete package of lean architecture and component-based workflows for front-end development. It was built by Facebook to deliver high rendering performance. The open-source nature of ReactJS is a key benefit that has drawn a huge, energetic, and highly active community around it.

Now you may be wondering: since Facebook created React, is Facebook itself built on it? Facebook’s codebase includes more than 20,000 components. React is used in building the Facebook web page, and the web versions of Instagram and WhatsApp are built entirely in React.

The industries in which React is most used are media, entertainment, retail, financial technology, and artificial intelligence.

The most common question these days about ReactJS is: why should we learn ReactJS in 2020? The reason is that ReactJS is renowned among developers throughout the world. It can boost your productivity and offers better code stability and SEO friendliness. There is a HUGE community around it, and some remarkable libraries and tools make building React applications significantly more straightforward and quicker. It has several benefits that satisfy real needs: it is declarative, SEO-friendly, and used by large organizations. And more besides…

Why Choose ReactJS?

After detailed research, we have listed down the top three reasons to pick ReactJS for development; let’s examine them:

1. Exceptional Productivity

ReactJS creates its own virtual Document Object Model (DOM), where your components actually live. It manages all the changes to be made in the DOM and updates the DOM tree accordingly. This makes it a flexible way to achieve good performance, since it discards excessive DOM operations and consistently applies updates efficiently.

2. Search Engine Optimization Friendly

Search engines find it difficult to crawl JavaScript-heavy applications, even after the improvements in this area. This is one of the big issues that comes with JavaScript frameworks. However, ReactJS has largely solved it.

3. Components in ReactJS

PolymerJS and the Shadow Document Object Model have created a lot of buzz; they are commonly used to create customizable, standalone components that you can easily import into your project.

Disadvantages of Using ReactJS

1. Poor Documentation

This is a drawback typical of constantly updating technologies: React advances and accelerates so quickly that there is no time to write proper documentation. To overcome this, developers write instructions on their own as new releases and tools come into use.

2. Rapid Progression

This is both an advantage and a disadvantage. On the downside, since the environment is constantly and rapidly evolving, some developers are not comfortable relearning how to do things on a regular basis. It can be hard for them to adopt all of these changes along with all of the constant updates.

3. JSX Syntax

JSX is a JavaScript syntax extension that lets the developer use HTML quoting and HTML tag syntax for rendering subcomponents. It promotes the construction of machine-readable code and makes it possible to compose components in a single compile-time verified file.


Angular is an open-source front-end framework based on TypeScript, and it evolved from AngularJS, the JavaScript-based web framework. With Angular, you can create applications that use the Model View Controller (MVC) architectural pattern, i.e., the data model, presentation information, and control information of the application are separated.

People at Google created it. The common question among developers in 2021 is whether AngularJS is future-proof. The answer is yes; essentially, no technology lasts forever regardless, but people should be prepared, and the code should be maintained and developed further. It has its benefits, like being easy to test, simple to extend, open-source, simple to customize, and having a straightforward architecture, among others. In general, AngularJS is commonly used in video streaming applications, user-generated content websites, review websites, and so forth.

Why Choose AngularJS?

After detailed research, we have listed down the top three reasons to pick AngularJS for development; let’s examine them:

1. Adaptability

The use of filters and directives makes AngularJS more adaptable for web application development. Filters are designed as standalone functions, separate from your application, that handle data transformations.

2. Testing

AngularJS is built around Dependency Injection (DI), and every one of your controllers relies upon the DI. AngularJS unit testing is done by injecting mock data into the controller.

3. UI

AngularJS uses HTML to define a web application’s UI, because HTML is a declarative language and less brittle to work with. The overall approach revolves around the attributes you use in your HTML, where these attributes define which controller will be used for which part. This streamlines your web development process: you simply declare what you need, and AngularJS takes care of all of the dependencies.

Disadvantages of Using AngularJS

1. Poor for SEO

Although the Angular team does its utmost to make Angular SEO-friendly, lots of developers complain about poor visibility to search crawlers. This is explained by the fact that single-page applications frequently change content and meta tags using JavaScript.

2. Trouble With Backward Compatibility

Developers can’t switch directly from AngularJS to Angular. There is a whole section in the Angular documentation that identifies all the possible approaches to managing the migration.

3. Difficulty With MVC

As a developer, if you are following a conventional approach and are unaware of Model-View-Controller design patterns, then Angular will eat up a lot of your time.

ReactJS vs AngularJS: When to use them?

When you have developers experienced in past versions of C#, Java, and Angular, then picking the Angular framework is the best decision. If the application you need to develop has low to moderate complexity, requires large-scale features, and demands high efficiency, Angular is the right choice.

Meanwhile, in our view, AngularJS is the right choice if your team is proficient in CSS, HTML, and JavaScript. If the application you want to create is a highly customized, specific solution and requires various components with different, changeable states, then between ReactJS vs. AngularJS, ReactJS is best suited for such a development project.

The Angular framework is more capable, well supported in terms of backers, and generally offers excellent support and tool sets for front-end development. Consistent updates and backing from Google suggest that the framework isn’t going anywhere, and Google is striving to ensure the existing community migrates from AngularJS to Angular 2+.

Meanwhile, React is sufficiently mature, and there are countless contributions from the community. It offers developers a lightweight approach that lets them start quickly without learning much extra. Currently, developers pick React even though Angular’s performance is comparable.

In terms of development speed and productivity, AngularJS gives a superior development experience thanks to its CLI, which enables faster workspace setup and application scaffolding, building components and services with one-line commands, clean coding features, and TypeScript’s built-in safeguards.

Then again, due to the use of third-party libraries, the speed and productivity of React suffer. Consequently, ReactJS developers need to choose the right architecture along with the tools as well.

ReactJS Vs. AngularJS: Final Comparison

AngularJS utilizes the Real Document Object Model. In this case, even if only one section of the tree has been changed or modified, the data structure of the whole tree is refreshed. Using the Real Document Object Model, Angular then finds which parts need to be changed to apply the update.

ReactJS uses a Virtual Document Object Model which permits engineers to track and refresh changes without impacting various parts of the tree.

While the Virtual Document Object Model is considered faster than Real Document Object Model manipulation, the current implementation of change detection in AngularJS makes the two technologies’ approaches practically identical in speed and performance.

Author Bio

Sidharth Jain

Sidharth Jain, Proud Founder of Graffersid, Web and Mobile App Development Company based in India. Graffersid has a team of designers and dedicated remote developers. Trusted by startups in YC, Harvard, Google Incubation, BluChilli. He understands how to solve problems using technology and contributes his knowledge to the leading blogging sites.


How to get JavaScript code of a website?


I tried using the source from the inspect tools, but it gives bundled-up JavaScript code (using webpack or something) which is not understandable. So is there any way I can get the non-bundled code, or de-bundle the bundled JS?


submitted by /u/bhushanw-tf



Poem Generator Web Application With Keras, React, and Flask


Natural Language Processing (NLP) is an exciting branch of machine learning and artificial intelligence, as it is applied in speech recognition, language translation, human-computer interaction, sentiment analysis, etc. One of the interesting areas is text generation, and of particular interest to me, is poem generation.

In this article, I describe a poem generator web application, which I built using Deep Learning with Keras, Flask, and React. The core algorithm is from TensorFlow available in their notebook. The data it needs is an existing set of poems. For my application, the data are in three text files with:

  1. Poems of Erica Jong.
  2. Poems of Lavanya Nukavarapu.
  3. Poems of Erica Jong and Lavanya Nukavarapu together.


The TensorFlow example notebook has the model building and training code, as well as the prediction code. I took the model building and training code into my notebooks and executed it on Google Colab to generate the models for each of the three datasets.

The neural network code is as follows:
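The sketch below is based on the TensorFlow notebook and the description that follows; the variable names total_words, max_sequence_len, xs, and ys are assumed to come from the earlier tokenization step, and the embedding size of 100 is illustrative:

```python
# Sketch of the model-building code from the TensorFlow text-generation
# notebook; total_words, max_sequence_len, xs, ys come from tokenization.
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Bidirectional, LSTM, Dense

model = Sequential()
# Turn word indexes into dense vectors of fixed size
model.add(Embedding(total_words, 100, input_length=max_sequence_len - 1))
# Bidirectional wrapper over an LSTM with 150 units
model.add(Bidirectional(LSTM(150)))
# Output layer: softmax over the vocabulary
model.add(Dense(total_words, activation='softmax'))

model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])
history = model.fit(xs, ys, epochs=150, verbose=1)
```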

The network starts with a Sequential model, which is used when, as in our case, each layer has exactly one input tensor and one output tensor. The first layer is the Embedding layer, which turns ‘positive integers (indexes) into dense vectors of fixed size.’ The second layer is a Bidirectional layer that is a wrapper over an LSTM layer with 150 units; LSTMs are components of recurrent neural networks. Next, we have a Dense layer as the output layer, which applies the softmax activation function on the propagating data. The model is compiled using the categorical_crossentropy function to compute the loss between labels and predictions, and the ‘adam‘ optimizer. Finally, it is trained for 150 epochs by calling the fit method.

To this base code, I added two callbacks:

  1. ModelCheckpoint for saving the model only if its accuracy in the current epoch is higher than that in the previous epoch. So, by the end of the propagation, we have the model with the highest accuracy.
  2. ReduceLROnPlateau for monitoring the loss and reducing the learning rate by a factor of 0.2 if learning stagnates, that is, if no improvement is seen for 1 epoch.


The prediction part of the TensorFlow example is run-time Flask code in my application. I encapsulated the code in a class called PoemGenerator. This class has the following key methods:


The constructor takes as arguments, a string for the seed_data_text, a list of strings called data, which is nothing but the cleaned poem corpus, and a model. These argument values are copied into instance variables of the same name. The instance variable max_sequence_len is set to the maximum length of the n_gram sequences that are generated from each line after converting their text to sequences of numbers and left-padded with zeros.


This method has the main functionality of poem generation. The seed_text is converted to a numeric sequence, left padded with zeros, and passed to the model to predict the next word. If the predicted word and its index are present in the tokenizer, which is an instance variable, the word is accepted and appended to the seed text. Now the seed text with the appended word becomes the new seed text. It is passed to the model to predict the next word, and the process continues 100 times, resulting in a string output.


This method takes the generated string from the previous method and gives it the shape of a poem. It first removes unnecessary stuff like a word having just a backquote or a backslash. Then it removes adjacent duplicate words. In the third step, it takes a random number between 5 and 8 and slices those many words out of the string, and stores them as the first string element in a list. Effectively, this is the first line of the generated poem. This process of slicing random lengths (between 5 and 8) of words from the string is iterated until all the words in the generated string are removed. The poem is now transformed from a string to a list of strings.

Next, there are two clean up steps:

  1. If the last line has fewer than 5 words, it is dropped. This task is repeated until we have the last line that has 5 words or more.
  2. If the last word of the last line has fewer than 4 characters, then that word is dropped.

Finally, the poem is returned as a list of strings.

The code of strToPoem method is given below:
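Reconstructed from the description above, a sketch of the logic might look like the following. The original is an instance method on PoemGenerator; this standalone function is for illustration only:

```python
import random

def str_to_poem(text):
    """Shape a generated string into a poem (a sketch of the strToPoem logic)."""
    # 1. Remove noise tokens that are just a backquote or a backslash
    words = [w for w in text.split() if w not in ('`', '\\')]
    # 2. Remove adjacent duplicate words
    deduped = [w for i, w in enumerate(words) if i == 0 or w != words[i - 1]]
    # 3. Slice random lengths (between 5 and 8 words) off the string,
    #    each slice becoming one line of the poem
    lines = []
    while deduped:
        n = random.randint(5, 8)
        lines.append(' '.join(deduped[:n]))
        deduped = deduped[n:]
    # 4. Drop trailing lines that have fewer than 5 words
    while lines and len(lines[-1].split()) < 5:
        lines.pop()
    # 5. Drop the last word of the last line if it has fewer than 4 characters
    if lines:
        last = lines[-1].split()
        if len(last[-1]) < 4:
            lines[-1] = ' '.join(last[:-1])
    return lines
```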


In the UI, the user has to:

  • Enter a set of words in a text field as seed text;
  • Select a poet, and;
  • Click a button (‘Generate Poem’).

MH Poem Generator

I encapsulated the text field, select drop-down, and button as one React component called PoemComponent. The code is in the file Poem.js and is sourced as a Babel script; Babel compiles it into browser-compatible JavaScript.

Flask serves public assets from the directory static, so Poem.js is placed in that folder. Since this is a simple screen, I did not use utilities like create-react-app or npm or Node runtime.

PoemComponent’s key functions and functionalities are given below.

The constructor sets the state with two variables: poem_header and poem, both arrays. The render function has:

  1. An h5 label.
  2. An input text field with ID ‘seed_text’ and a placeholder text ‘Enter seed text: 3 to 5 words.’
  3. A select element with ID ‘poet‘, the first option as ‘--  Please chose a poet  --‘ and the names ‘Erica Jong,‘ ‘Lavanya Nukavarapu,’ and ‘Erica+Lavanya‘ as the subsequent options.
  4. A button with the text ‘Generate Poem.’

The button’s onClick event is bound to the component and invokes the function getPoem.


This function collects the seed_text and poet’s name by calling document.getElementById and uses them to build a URL. It invokes fetch with this URL targeting the endpoint ‘/getpoem‘ on the Flask application. After the response is received, the function updates the state by setting the values of poem_header and poem. This causes the poem_header and poem values to be updated in the divs with the IDs ‘generated_poem_header‘ and ‘generated_poem.’

Finally, the last two lines in Poem.js render PoemComponent at the ‘poem_container‘ div in index.html.

Given below are important snippets of PoemComponent code:
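Based on the descriptions above, a sketch of the component might look like this (styling and exact response handling are assumptions):

```jsx
// PoemComponent sketch -- element IDs match the description above;
// response field names are assumed to be poem_header and poem.
class PoemComponent extends React.Component {
  constructor(props) {
    super(props);
    this.state = { poem_header: [], poem: [] };
    this.getPoem = this.getPoem.bind(this);
  }

  getPoem() {
    const seedText = document.getElementById("seed_text").value;
    const poet = document.getElementById("poet").value;
    // '/getpoem' is served by the Flask backend
    fetch(`/getpoem?seed_text=${encodeURIComponent(seedText)}&poet=${poet}`)
      .then((res) => res.json())
      .then((data) =>
        this.setState({ poem_header: data.poem_header, poem: data.poem })
      );
  }

  render() {
    return (
      <div>
        <h5>MH Poem Generator</h5>
        <input id="seed_text" placeholder="Enter seed text: 3 to 5 words" />
        <select id="poet">
          <option>-- Please chose a poet --</option>
          <option>Erica Jong</option>
          <option>Lavanya Nukavarapu</option>
          <option>Erica+Lavanya</option>
        </select>
        <button onClick={this.getPoem}>Generate Poem</button>
        <div id="generated_poem_header">{this.state.poem_header.join("\n")}</div>
        <div id="generated_poem">{this.state.poem.join("\n")}</div>
      </div>
    );
  }
}

// Render PoemComponent at the 'poem_container' div in index.html
ReactDOM.render(<PoemComponent />, document.getElementById("poem_container"));
```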



The root endpoint (‘/‘) is the index method that just serves index.html from the template folder.
This file has the entire backend runtime code. At startup, the three text files containing the poetry datasets are read into lists, and all words are converted to lower case. This data list is one of the arguments passed to the constructor of PoemGenerator.


This function is invoked at the endpoint location ‘/getpoem.’ From the GET request parameters, it grabs the user-entered seed_text and poet name. It uses the seed_text, the correct data list, and model (based on poet name) to instantiate a PoemGenerator object. On this object, it calls the generate_poem method to generate the poem and stores it in the list ‘poem’. It also calls the makeHeader method to create the metadata of the poem which is stored in the list poem_header. Both these lists are returned as JSON to the client browser.
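A sketch of the Flask side is shown below; data_for and model_for are hypothetical helpers standing in for however the startup code keys the corpora and models by poet name:

```python
# Sketch of the Flask endpoints described above; PoemGenerator is the class
# from earlier, and data_for/model_for are hypothetical lookup helpers.
from flask import Flask, render_template, request, jsonify

app = Flask(__name__)

@app.route('/')
def index():
    # The root endpoint just serves index.html from the template folder
    return render_template('index.html')

@app.route('/getpoem')
def getpoem():
    # Grab the user-entered seed text and poet name from the GET parameters
    seed_text = request.args.get('seed_text')
    poet = request.args.get('poet')
    generator = PoemGenerator(seed_text, data_for(poet), model_for(poet))
    poem = generator.generate_poem()
    poem_header = generator.makeHeader()
    # Both lists are returned as JSON to the client browser
    return jsonify({'poem_header': poem_header, 'poem': poem})
```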

Repository and Deployment

The code of this application is available in my GitHub repository mh-poem-generator.

I deployed the application on a cloud Ubuntu-18.04 server. Since TensorFlow 2.2.0 is required, I installed conda and used its version of gunicorn to run it as a systemd service. The application is collocated with other Flask and Ruby on Rails applications and served via Nginx.

The systemd configuration is given below:


The Nginx configuration is as follows: 


You can access the application at https://mahboob.xyz/pg


As of now, the generated poems have the shape of poems but don’t make much sense as actual poems. Sometimes a few lines come out well with good figurative expressions, but that’s all. To improve the poem quality, I will have to add additional layers to the neural network, fine-tune the parameters and enrich the poem lines to better sentences, like how MontyLingua does.


Getting Started With Kafka and Rust (Part 1)


This is a two-part series to help you get started with Rust and Kafka. We will be using the rust-rdkafka crate which itself is based on librdkafka (C library).

In this post, we will cover the Kafka Producer API.

Initial Setup

Make sure you install a Kafka broker; a local setup should suffice. Of course, you will need to have Rust installed as well (version 1.45 or above).

Before you begin, clone the GitHub repo:

Check the Cargo.toml file:

rdkafka = { version = "0.25", features = ["cmake-build"] }

Note on the cmake-build feature

rust-rdkafka provides a couple of ways to resolve the librdkafka dependency. I chose static linking, wherein librdkafka was compiled. You could opt for dynamic linking to refer to a locally installed version though.

For more, please refer to this link

Ok, let’s start off with the basics.

Simple Producer

Here is a simple producer based on BaseProducer:

The send method starts producing messages. It is called in a tight loop with a thread::sleep in between (not something you would do in production) to make it easier to track and follow the results. The key, the value (payload), and the destination Kafka topic are represented in the form of a BaseRecord.
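A sketch of such a producer is shown below; the broker address, topic name, and loop bounds are assumptions:

```rust
// Sketch of a simple BaseProducer; broker address and topic are assumptions.
use std::thread;
use std::time::Duration;

use rdkafka::config::ClientConfig;
use rdkafka::producer::{BaseProducer, BaseRecord};

fn main() {
    let producer: BaseProducer = ClientConfig::new()
        .set("bootstrap.servers", "localhost:9092")
        .create()
        .expect("invalid producer config");

    for i in 1..100 {
        println!("sending message");
        // Key, payload, and destination topic packed into a BaseRecord
        if let Err((e, _record)) = producer.send(
            BaseRecord::to("rust")
                .key(&format!("key-{}", i))
                .payload(&format!("value-{}", i)),
        ) {
            eprintln!("failed to enqueue message: {:?}", e);
        }
        // poll drives delivery; the sleep is only to make output easy to follow
        producer.poll(Duration::from_millis(100));
        thread::sleep(Duration::from_secs(3));
    }
}
```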

You can check the entire code in the file src/1_producer_simple.rs

To Test if the Producer Is Working …

Run the program:

  • simply rename the file src/1_producer_simple.rs to main.rs
  • execute cargo run

You should see this output:

sending message
sending message
sending message

What’s going on? To figure it out, connect to your Kafka topic (I have used rust as the name of the Kafka topic in the above example) using the Kafka CLI consumer (or any other consumer client, e.g. kafkacat). You should see the messages flowing in.

For example:

Producer Callback

We are flying blind right now! Unless we explicitly create a consumer to look at our messages, we have no clue whether they are being sent to Kafka. Let’s fix that by implementing the ProducerContext trait to hook into the produce event; it’s like a callback.

Start by creating a struct and an empty implementation for the ClientContext trait (this is mandatory).

Now comes the main part where we implement the delivery function in the ProducerContext trait.

We match against the DeliveryResult (which is a Result after all) to account for success (Ok) and failure (Err) scenarios. All we do is simply log the message in both cases, since this is just an example. You could do pretty much anything you wanted to here (don’t go crazy though!)

We've ignored DeliveryOpaque, which is an associated type of the ProducerContext trait.

We need to make sure that we plug in our ProducerContext implementation. We do this by using the create_with_context method (instead of create) and by specifying the context type on the BaseProducer as well.

let producer: BaseProducer<ProduceCallbackLogger> = ClientConfig::new()
    .set(....)
    .create_with_context(ProduceCallbackLogger {})

How Does the “Callback Get Called”?

Ok, we have the implementation, but we need a way to trigger it! One of the ways is to call flush on the producer. So, we could write our producer as such:

  • add producer.flush(Duration::from_secs(3));, and
  • comment the sleep (just for now)

Hold On, We Can Do Better!

The send method is non-blocking (by default) but by calling flush after each send, we have now converted this into a synchronous invocation – not recommended from a performance perspective.

We can improve the situation by using a ThreadedProducer. It takes care of invoking the poll method in a background thread to ensure that the delivery callback notifications are delivered. Doing this is very simple — just change the type from BaseProducer to ThreadedProducer!

// before: BaseProducer<ProduceCallbackLogger>
// after: ThreadedProducer<ProduceCallbackLogger>

Also, we don’t need the call to flush anymore.

//println!("flushed message");

The code is available in src/2_threaded_producer.rs

Run the Program Again

  • Rename the file src/2_threaded_producer.rs to main.rs and
  • execute cargo run


sending message
sending message
produced message with key key-1 in offset 6 of partition 2
produced message with key key-2 in offset 3 of partition 0
sending message
produced message with key key-3 in offset 7 of partition 

As expected, you should be able to see the producer event callback, denoting that the messages were indeed sent to the Kafka topic. Of course, you can connect to the topic directly and double-check, just like before:

To try a failure scenario, try using an incorrect topic name and notice how the Err variant of the delivery implementation gets invoked.

Sending JSON Messages

So far, we were just sending Strings as keys and values. JSON is a commonly used message format, so let's see how to use that.

Assume we want to send User info which will be represented using this struct:

We can then use the serde_json library to serialize this as JSON. All we need are the custom derives in serde: Deserialize and Serialize.

Change the producer loop:

  • Create a User instance
  • Serialize it to a JSON string using to_string_pretty
  • Include that in the payload

You can also use to_vec (instead of to_string_pretty) to convert it into a Vec of bytes (Vec<u8>)

To Run the Program…

  • Rename the file src/3_JSON_payload.rs to main.rs, and
  • execute cargo run

Consume from the topic:

You should see messages with a String key (e.g. user-34) and JSON value:

Is There a Better Way?

Yes! If you are used to the declarative serialization/de-serialization approach in the Kafka Java client (and probably others as well), you may not like this “explicit” approach. Just to put things in perspective, this is how you’d do it in Java:
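A configuration sketch with the Confluent Java client might look like this (class names are from Confluent's kafka-json-schema-serializer artifact; the addresses, topic, and User class are placeholders):

```java
Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
// declaratively pick the JSON serializer; no manual serialization needed
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, KafkaJsonSchemaSerializer.class);
props.put("schema.registry.url", "http://localhost:8081");

Producer<String, User> producer = new KafkaProducer<>(props);
// User is serialized to JSON by the configured serializer
producer.send(new ProducerRecord<>("users", "user-34", new User(34, "user-34@foobar.com")));
```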

Notice that you simply configure the Producer to use KafkaJsonSchemaSerializer, and the User class is serialized to JSON.

rust-rdkafka provides something similar with the ToBytes trait. Here is what it looks like:

Self-explanatory, right? There are existing implementations for String, Vec<u8> etc. So you can use these types as key or value without any additional work – this is exactly what we just did. But the problem is the way we did it was “explicit” i.e. we converted the User struct into a JSON String and passed it on.

What if we could implement ToBytes for User?
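A natural attempt is to serialize inside to_bytes and return a reference to the resulting buffer, along these lines (an illustrative, deliberately non-compiling sketch; the local variable name b matches the error message below):

```rust
impl ToBytes for User {
    fn to_bytes(&self) -> &[u8] {
        let b = serde_json::to_vec(self).expect("json serialization failed");
        &b // error: `b` is dropped when the function returns
    }
}
```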

You will see a compiler error:

cannot return value referencing local variable `b`
returns a value referencing data owned by the current function

For additional background, please refer to this GitHub issue (https://github.com/fede1024/rust-rdkafka/issues/128). I would be happy to see another example that works with ToBytes – please drop a note if you have inputs on this!

TL;DR is that it’s best to stick to the “explicit” way of doing things unless you have a ToBytes implementation that “does not involve an allocation and cannot fail”.

Wrap Up

That’s it for the first part! Part 2 will cover topics around Kafka consumers.

Source link

Too many links in my header? (or, how to manage a situation with many links in the header)

Hi everyone,

I am encountering a web design / UX / UI problem on my website regarding the menu.

Here is how it looks:

[Screenshot of the site's header menu]

As you can see, the menu already includes a good number of items:

  • Home page

  • Blog (with the categories appearing on hover)

  • Guides (links to famous articles which people actually really need)

  • Formation (= Training in French, which is a product I’m selling)

  • Contact

  • Connexion (= Log In), which is replaced by the user's first name with a Font Awesome icon when they're logged in

  • Search icon

I am willing to widen the scope of things I am offering, which would mean adding at least two links:

As you can see, the space between the logo and the menu would fill up very quickly. In terms of responsiveness, it means I would have to display a burger menu even if the window is just slightly smaller than full width, which is not something I like (people aren't always using full-screen windows when browsing on their computer).

Consequently, I am a bit at a loss regarding what I should do:

  • Should I move the menu to a lower line (but that would push more page content below the fold)?

  • I don't really want to group menu items together, as each is very different, and I hardly see what kind of grouping would make sense.

So I came here to ask for your ideas and points of view on the situation 🙂

Thanks a lot in advance,


Source link

C#: Pitfalls in String Pool


As software developers, we always want our software to work properly. We'll do everything to improve the software quality. To find the best solution, we are ready to use parallelization or apply various other optimization techniques. One of these optimization techniques is so-called string interning. It allows users to reduce memory usage and also makes string comparison faster. However, everything is good in moderation. Interning at every turn is not worth it. Further, I'll show you how not to slip up by creating a hidden bottleneck in the form of the String.Intern method in your application.

In case you’ve forgotten, let me remind you that string is a reference type in C#. Therefore, the string variable itself is just a reference that lies on the stack and stores an address. The address points to an instance of the String class located on the heap.

There are several ways to calculate how many bytes a string object takes on the heap: the version by Jon Skeet and the version by Timur Guev (the latter article is in Russian). In the picture above, I used the second option. Even if this formula is not 100% accurate, we can still estimate the size of string objects. For example, about 4.7 million strings (each 100 characters long) are enough to take up 1 GB of RAM. Let's say there's a large number of duplicates among the strings in a program. In that case, it's worth using the interning functionality built into the framework. Now, why don't we briefly recap what string interning is?

String Interning

The idea of string interning is to store only one instance of the String type in memory for identical strings. When running an app, the virtual machine creates an internal hash table, called the interning table (sometimes it is called String Pool). This table stores references to each unique string literal declared in the program. In addition, using the two methods described below, we can get and add references to string objects to this table by ourselves. If an application contains numerous strings (which are often identical), it makes no sense to create a new instance of the String class every time. Instead, you can simply refer to an instance of the String type that has already been created on the heap. To get a reference to it, access the interning table. The virtual machine itself interns all string literals in the code (to find more about interning tricks, check this article). We may choose one of two methods: String.Intern and String.IsInterned.

The first one takes a string as input. If there’s an identical string in the interning table, it returns a reference to an object of the String type that already exists on the heap. If there’s no such string in the table, the reference to this string object is added to the interning table. Then, it is returned from the method. The IsInterned method also accepts a string as input and returns a reference from the interning table to an existing object. If there’s no such object, null is returned (everyone knows about the non-intuitive return value of this method).

Using interning, we reduce the number of new string objects by working with existing ones through references obtained via the Intern method. Thus, we do not create a large number of new objects, so we save memory and improve program performance. After all, many string objects whose references quickly disappear from the stack can lead to frequent garbage collection, which negatively affects overall program performance. Interned strings won't disappear until the end of the process, even if the references to these objects are no longer in the program. This is worth paying attention to: using interning to reduce memory consumption can produce the opposite effect.

Interning strings can boost performance when comparing these very strings. Let’s take a look at the implementation of the String.Equals method:

Before calling the EqualsHelper method, where a character-by-character comparison of strings is performed, the Object.ReferenceEquals method checks for the equality of references. If the strings are interned, the Object.ReferenceEquals method returns true when the strings are equal (without comparing the strings themselves character-by-character). Of course, if the references are not equal, then the EqualsHelper method will be called, and the subsequent character-by-character comparison will occur. After all, the Equals method does not know that we are working with interned strings. Also, if the ReferenceEquals method returns false, we know that the compared strings are different.

If you are sure that the input strings are interned at a specific place in the program, then you can compare them using the Object.ReferenceEquals method. However, it's not the greatest approach. There is always a chance that the code will change in the future. Also, it may be reused in another part of the program, so non-interned strings can get into it. In this case, when comparing two identical non-interned strings via the ReferenceEquals method, we will wrongly conclude that they are not identical.

Interning strings for later comparison seems justified only if you plan to compare interned strings quite often. Remember that interning an entire set of strings also takes some time. Therefore, you shouldn’t perform it to compare several instances of strings once.

Well, we revised what string interning is. Now, let’s move on to the problem I’ve faced.

Briefly on How it Started

In our bug tracker, there was a task created long ago. It required some research on how parallelizing the C++ code analysis can save analysis time. It would be great if the PVS-Studio analyzer worked in parallel on several machines when analyzing a single project. I chose IncrediBuild as the software that allows such parallelization. IncrediBuild allows you to run different processes in parallel on machines located on the same network. For example, you can parallelize source files compiling on different company machines (or in a cloud). Thus, we save time on the building process. Game developers often use this software.

Well, I started working on this task. At first, I selected a project and analyzed it with PVS-Studio on my machine. Then, I ran the analysis using IncrediBuild, parallelizing the analyzer processes on the company’s machines. At the end, I summed up the results of such parallelization. So, having positive results, we’ll offer our clients such solutions to speed up the analysis.

I chose the Unreal Tournament project. We managed to persuade the programmers to install IncrediBuild on their machines. As a result, we had the combined cluster with about 145 cores.

I analyzed the Unreal Tournament project using the compilation monitoring system in PVS-Studio. So, I worked as follows: I ran the CLMonitor.exe program in monitor mode and performed a full build of Unreal Tournament in Visual Studio. Then, after the building process, I ran CLMonitor.exe again, but in the analysis launch mode. Depending on the value specified in the PVS-Studio settings for the ThreadCount parameter, CLMonitor.exe simultaneously runs the corresponding number of PVS-Studio.exe child processes. These processes analyze the individual C++ source files: one PVS-Studio.exe child process analyzes one source file. After the analysis, it passes the results back to CLMonitor.exe.

Everything is easy: in the PVS-Studio settings, I set the ThreadCount parameter equal to the number of available cores (145). I ran the analysis, getting ready for 145 PVS-Studio.exe processes executing in parallel on remote machines. IncrediBuild has Build Monitor, a user-friendly parallelization monitoring system. Using it, you can observe the processes running on remote machines. This is what I observed during the analysis:

It seemed that nothing could be easier. Relax and watch the analysis process. Then simply record its duration with IncrediBuild and without. However, in practice, it turned out to be a little bit complicated…

The Problem, Its Location, and Solution

During the analysis, I could switch to other tasks. I could also just meditate, watching PVS-Studio.exe running in the Build Monitor window. After the analysis with IncrediBuild ended, I compared its duration with the results of the run without IncrediBuild. The difference was significant. However, the overall result could have been better. It was 182 minutes on one machine with 8 threads and 50 minutes using IncrediBuild with 145 threads. The number of threads increased by 18 times, while the analysis time decreased by only 3.5 times. Then, I took a closer look at the report in the Build Monitor window. Scrolling through it, I noticed something weird. That's what I saw on the chart:

I noticed that PVS-Studio.exe executed and completed successfully. But then for some reason, the process paused before starting the next one. It happened again and again. Pause after pause. These downtimes led to a noticeable delay and did their bit to prolong the analysis time. At first, I blamed IncrediBuild. Probably it performs some kind of internal synchronization and slows down the launch.

I shared the results with my senior colleague. He didn't jump to conclusions. He suggested looking at what's going on inside our CLMonitor.exe app right when the downtime appears on the chart. I ran the analysis again. Then, I noticed the first obvious "failure" on the chart. I connected to the CLMonitor.exe process via the Visual Studio debugger and paused it. Opening the Threads window, my colleague and I saw about 145 suspended threads. Reviewing the places in the code where the execution paused, we saw code lines with similar content:

What do these lines have in common? Each of them uses the String.Intern method. And it seems justified, because these are the places where CLMonitor.exe handles data from PVS-Studio.exe processes. Data is written to objects of the ErrorInfo type, which encapsulates information about a potential error found by the analyzer. Also, we intern quite reasonable things, namely paths to source files. One source file may contain many errors, so it doesn't make sense for ErrorInfo objects to contain different string objects with the same content. It's fair enough to just refer to a single object from the heap.

Without a second thought, I realized that string interning had been applied at the wrong moment. So, here’s the situation we observed in the debugger. For some reason, 145 threads were hanging on executing the String.Intern method. Meanwhile, the custom task scheduler LimitedConcurrencyLevelTaskScheduler inside CLMonitor.exe couldn’t start a new thread that would later start a new PVS-Studio.exe process. Then, IncrediBuild would have already run this process on the remote machine. After all, from the scheduler’s point of view, the thread has not yet completed its execution. It performs the transformation of the received data from PVS-Studio.exe in ErrorInfo, followed by string interning. The completion of the PVS-Studio.exe process doesn’t mean anything to the thread. The remote machines are idle. The thread is still active. Also, we set the limit of 145 threads, which does not allow the scheduler to start a new one.

A larger value for the ThreadCount parameter would not solve the problem. It would only increase the queue of threads hanging on the execution of the String.Intern method.

We did not want to remove interning at all. It would increase the amount of RAM consumed by CLMonitor.exe. Eventually, we found a fairly simple and elegant solution. We decided to move interning from the thread that runs PVS-Studio.exe to a slightly later place of code execution (in the thread that directly generates the error report).

As my colleague said, we managed to make a very accurate edit of just two lines. Thus, we solved the problem with idle remote machines. So, we ran the analysis again. There were no significant time intervals between PVS-Studio.exe launches. The analysis’ time decreased from 50 minutes to 26, that is, almost twice. Now, let’s take a look at the overall result that we got using IncrediBuild and 145 available cores. The total analysis time decreased by 7 times. It’s far better than by 3.5 times.

String.Intern – Why Is It so Slow? Reviewing the CoreCLR Code

It’s worth noting that once we saw the threads hanging at the places where we call the String.Intern method, we almost instantly thought that under the hood this method has a critical section with some kind of lock. Since each thread can write to the interning table, there must be some synchronization mechanism inside the String.Intern method. It prevents several threads from overwriting each other’s data. To confirm my assumptions, we decided to look at the implementation of the String.Intern method on the reference source. We noticed that inside our interning method there had been a call to Thread.GetDomain().GetOrInternString(str) method. Well, take a look at its implementation:

Now, it’s getting more interesting. This method is imported from some other build. Which one? Since the CLR VM itself does the strings interning, my colleague guided me directly to the .NET runtime repository. After downloading the repository, we went to the CoreCLR solution. We opened it and viewed the entire solution. There we found the GetOrInternString method with the appropriate signature:

So, we saw a call to the GetInternedString method. In the body of this method, we noticed the following code:

The execution thread gets into the else branch only if the method that searches for a reference to the String object (the GetValue method) in the interning table returns false. Let’s move on to the code in the else branch. Here we are interested in the line where an object of the CrstHolder type named gch is created. Now, we turn to the CrstHolder constructor and see the following code:

We notice the call to the AcquireLock method. It’s getting better. Here’s the code of the AcquireLock method:

In fact, that's the entry point to the critical section – the call to the Enter method. After I'd read the comment "Acquire the lock", I had no doubts that this method deals with locking. I didn't see much point in diving further into the CoreCLR code. So, we were right. When a new entry is added to the interning table, the thread enters the critical section, forcing all other threads to wait for the lock to be released. Just before the m_StringToEntryHashTable->InsertValue call, the object of the CrstHolder type is created, and that is when the critical section is entered.

The lock disappears immediately after we exit the else branch. In this case, the destructor which calls the ReleaseLock method is called for the gch object:

When there are few threads, the downtime can be small. But when their number increases, for example to 145 (as happened with IncrediBuild), each thread that tries to add a new entry to the interning table temporarily blocks the other 144 threads that also try to add one. The results of these locks are what we observed in the Build Monitor window.


I hope that this case will help you apply string interning more carefully and thoughtfully, especially in multithreaded code. After all, the locks taken while adding new records to the interning table may become a bottleneck, as in our case. It's great that we were able to find out the truth and solve the detected problem. That made the analyzer work faster.

Thank you for reading.

Source link


Can’t access objects pushed into an array?


Hey guys, I'm having trouble accessing these objects after I've pushed them into an array. How can I get access to the 1st province object? homes[0][0].province gives me undefined. Any help would be much appreciated!


submitted by /u/podkolzin

Source link

How to keep consistent spacing of elements within boxes?

I am learning to use Affinity Publisher and I need to align text within these boxes. The problem I have is that I don't know how to keep the text box the same distance from the top and left sides of the box.

Any ideas or tips? In the past, I would manually draw out a small line and lay it between elements to keep them consistent. But I feel there must be an easier way.

I also don't know how to properly phrase this question; is there a term for what I'm looking for?

[Screenshot of the text boxes]

The text here is not the same distance from the boxes containing it. How do I make the spacing consistent?

Source link