
Angular Dependency Injection Explained – DZone Web Dev

When I first started programming, dependency injection and all the concepts related to it seemed a bit abstract to me. I could still create services and inject them, but there remained a gap in my mind about how the implementation actually worked.

In this article, I will first explain what issues DI solves, starting with an example where DI is not used. Then I will introduce the benefits of using DI as a coding pattern and show how it makes programming easier.

Code Without DI 

Let’s start with a simple example: we have a class called Laptop which has two properties, ram and cpu. It is quite simple to declare it as below:
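The code sample here was lost in formatting; a minimal sketch of what the class might have looked like (class bodies are assumed from the prose):

```typescript
class Ram {}
class Cpu {}

class Laptop {
  ram: Ram;
  cpu: Cpu;

  constructor() {
    // The Laptop creates its own dependencies internally
    this.ram = new Ram();
    this.cpu = new Cpu();
  }
}
```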

At this point, we can create a new instance of Laptop pretty easily: const laptop: Laptop = new Laptop();

Now let’s change the Ram and Cpu classes, supposing that Ram needs a size and a unit to be initialized and Cpu needs a model.
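The updated classes did not survive formatting either; a sketch under the same assumptions, with Laptop still constructing its dependencies internally:

```typescript
class Cpu {
  constructor(public model: string) {}
}

class Ram {
  constructor(public size: number, public unit: string) {}
}

class Laptop {
  cpu: Cpu;
  ram: Ram;

  // Laptop must now accept every parameter its dependencies need
  constructor(cpuModel: string, ramSize: number, ramUnit: string) {
    this.cpu = new Cpu(cpuModel);
    this.ram = new Ram(ramSize, ramUnit);
  }
}
```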

As we can see, we need to change the Laptop constructor, because when creating a new laptop we now need to provide a model for cpu and a size and unit for ram:

const laptop: Laptop = new Laptop('Intel', 8, 'GB');

Now as you can see, we used three parameters even in this simple example. Imagine if we had to define a real laptop: the number of parameters we would have to provide, and how hard it would be to manage them all, would be a nightmare! Another drawback is that once we change the parameters needed to initialize cpu, for example, we would need to modify Laptop too. Such code would be hard to maintain and nearly impossible to test. This background makes it easy to understand the necessity of dependency injection.

DI as a Design Pattern

According to Angular: “Dependency injection is a coding pattern in which a class receives its dependencies from external sources rather than creating them itself.” What this definition is basically saying is that since Laptop depends on Cpu and Ram, it will have those dependencies provided to it. This means that the Laptop class will be as follows:
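The refactored class likely looked something like this (a sketch, not the author's exact code):

```typescript
class Cpu {
  constructor(public model: string) {}
}

class Ram {
  constructor(public size: number, public unit: string) {}
}

class Laptop {
  // Dependencies are provided from outside instead of created internally
  constructor(public cpu: Cpu, public ram: Ram) {}
}
```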

To create a new laptop we will need to first create an instance of cpu and ram, as seen below:  
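A hedged reconstruction of that instantiation, with the class definitions repeated so the snippet stands alone:

```typescript
class Cpu {
  constructor(public model: string) {}
}

class Ram {
  constructor(public size: number, public unit: string) {}
}

class Laptop {
  constructor(public cpu: Cpu, public ram: Ram) {}
}

// The dependencies are created first, then handed to Laptop
const cpu: Cpu = new Cpu('Intel');
const ram: Ram = new Ram(8, 'GB');
const laptop: Laptop = new Laptop(cpu, ram);
```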

By providing the dependencies of cpu and ram to laptop, we solved the problem we noticed before. So if we add the generation to cpu, the laptop does not need to change; it will still just get cpu as a parameter:
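A sketch of that change, with generation added to Cpu while Laptop stays untouched (parameter names assumed):

```typescript
class Cpu {
  // A new parameter is added to Cpu...
  constructor(public model: string, public generation: string) {}
}

class Ram {
  constructor(public size: number, public unit: string) {}
}

// ...but Laptop's constructor does not change at all
class Laptop {
  constructor(public cpu: Cpu, public ram: Ram) {}
}

const laptop = new Laptop(new Cpu('Intel', 'i9-9900K'), new Ram(8, 'GB'));
```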

Now we can see the advantages of providing the dependencies externally. But supposing that Laptop also needs a keyboard, a screen, and so on, we still have an issue: we must create every dependency manually. Furthermore, the application itself might need to create lots of new instances, and to do so we would have to copy-paste the code, which is definitely not a good practice.

DI as a Framework 

At this point, we could create the new laptop instance by providing the dependencies externally ourselves. Now, let’s take it one step further and allow Angular to create these dependencies when needed; we will just provide the default values, which can be customized according to our needs. Let’s start with cpu:
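The snippet is missing here; a sketch of what the injectable Cpu might look like (the 'model' token name is inferred from the providers shown later):

```typescript
import { Inject, Injectable } from '@angular/core';

@Injectable()
export class Cpu {
  // Angular resolves the 'model' token from the nearest provider
  constructor(@Inject('model') public model: string) {}
}
```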

By using @Inject(), we are telling Angular that cpu has a dependency on the model, and whenever the Cpu class is used, the Angular DI framework will provide the dependency for us. We will do the same thing for ram by injecting the parameters it needs. Now we need to provide them. For now, I will just provide them inside the AppModule providers:
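The providers block was lost in formatting; based on the values shown in the browser screenshot, it presumably resembled the following (values are illustrative):

```typescript
@NgModule({
  declarations: [AppComponent],
  imports: [BrowserModule],
  providers: [
    Cpu,
    Ram,
    { provide: 'model', useValue: 'Intel' },
    { provide: 'size', useValue: 8 },
    { provide: 'unit', useValue: 'GB' },
  ],
  bootstrap: [AppComponent],
})
export class AppModule {}
```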

Now it’s very easy to create a new laptop:
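A hedged sketch of a component that lets Angular supply the dependencies:

```typescript
@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
})
export class AppComponent {
  laptop: Laptop;

  // Angular's injector supplies fully constructed Cpu and Ram instances
  constructor(cpu: Cpu, ram: Ram) {
    this.laptop = new Laptop(cpu, ram);
    console.log(this.laptop);
  }
}
```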

By doing this, we are injecting cpu and ram and then we are creating a new instance of laptop. If we check in the browser, this is the instance we’ve created: 

Laptop instance created

So the Angular DI framework has automatically created new instances of Cpu and Ram using the values we provided.

Let’s consider the case where we have two modules, and each module should create a different laptop: an AMD laptop in Module A and an Intel laptop in Module B.

In this case, we need to tell Angular that when each module is initialized, it should provide different values for model, generation, size, and unit. These are the values that we inject into cpu and ram.

In Module A these will be the providers:

providers: [
    {provide: 'model', useValue: 'AMD'},
    {provide: 'generation', useValue: 'Radeon'},
    {provide: 'size', useValue: 32},
    {provide: 'unit', useValue: 'GB'}
]

And in Module B:

providers: [
    {provide: 'model', useValue: 'Intel'},
    {provide: 'generation', useValue: 'i9-9900K'},
    {provide: 'size', useValue: 16},
    {provide: 'unit', useValue: 'GB'}
]

If we create a laptop inside each of the modules, we will see the following when we load them in the browser: 

Module A Laptop

Module B Laptop

So at this point, we have successfully injected Ram and Cpu into each of the components, with Angular DI providing different values in the Module A and Module B components.


The dependency injection design pattern is when you provide a class’s dependencies externally. The dependency injection framework is when you define the dependencies and Angular automatically provides them to components. Dependency injection usually comes up in the context of services, but I hope the example I chose makes it clear that the concept is quite straightforward: a class has a dependency (which can be a string, number, or boolean parameter, a service, or another class), and with proper configuration, the Angular injector provides it whenever it is needed.

In my next article, I will talk more in-depth regarding these services, how hierarchical injector works, and different options to configure dependencies such as useClass, useValue, useExisting and so on.


Logging vs. Monitoring: Part 1


Photo by Luke Chesser on Unsplash



What do you do when your application is down? Better yet: how can you predict when your application may go down? How do you begin an investigation in the most efficient way possible and resolve issues quickly?

Understanding the difference between logging and monitoring is critical, and can make all the difference in your ability to trace issues back to their root cause. If you confuse the two or use one without the other, you’re setting yourself up for long nights and weekends debugging your app.

In this article, we’ll look at how to effectively log and monitor your systems. I’ll tell you about a few good practices that I’ve learned over the years and some interesting metrics that you may want to monitor in your systems. Finally, I’ll show you a small web application that had no monitoring, alerting, or logging. I’ll demonstrate how I fixed the logging and how I’ve implemented monitoring and alerting around those logs.

Everyone has some sort of logging in their applications, even if it’s just writing to a file to review later. By the end of this article, I hope to convince you that logging without monitoring is about as good as no logging at all. Along the way, we can review some best practices for becoming a better logger.

Logging vs. Monitoring

For a while, I conflated logging and monitoring. At least, I thought they were two sides of the same coin. I hadn’t considered how uniquely necessary they each were, and how they supported each other.

Logging tells you what happened, and gives you the raw data to track down the issue.

Monitoring tells you how your application is behaving and can alert you when there are issues.

Can’t Have One Without the Other

Let’s consider a system that has fantastic logging but no monitoring. It’s obvious why this doesn’t work. No matter how good our logs are, I guarantee that nobody actively reads them — especially when our logs get verbose or use formatting like JSON. It is impractical to assume that someone will comb all those logs and look for errors. Maybe when we have a small set of beta users, we can expect them to report every error so we can go back and look at what happened. But what if we have a million users? We can’t expect every one of those users to report each error they encounter.

This is where monitoring comes in. We need to put the systems in place that can do the looking up and coordinating for us. We need a system that will let us know when an error happens and, if it is good enough, why that error occurred.


Let’s begin by talking about monitoring goals and what makes a great monitoring system. First, our system must be able to notify us when it detects errors. Second, we should be able to create alerts based on the needs of our system.

We want to lay out the specific types of events that determine whether our system is performing correctly. You may want to be alerted about every error that gets logged. Alternatively, you may be more interested in how fast your system responds in certain cases, or whether your error rates are normal or increasing. You may also be interested in security monitoring and which solution suits your case. For some additional examples of things to monitor, I’d suggest you check out a great article written by Heroku here.

One final thing to consider is how our monitoring system can point us toward solutions. This will vary greatly depending on your application; still, it is something to consider when picking your tools.

Speaking of tools, here are some of my favorite tools to use when I’m monitoring an application. I’m sure there are more specific ones out there. If you’ve got some tools that you really love, then feel free to leave them in the comments!

Elasticsearch: This is where I store my logs. It lets me set up monitors and alerts in Grafana based on log messages. With Elasticsearch, I can also do full-text searches when I’m trying to find an error’s cause.

Kibana: This lets me easily perform live queries against Elasticsearch to assist in debugging.

Grafana: Here, I create dashboards that provide high-level overviews of my applications. I also use Grafana for its alerting system.

InfluxDB: This time-series database records things like response times, response codes, and any interesting point-in-time data (like success vs. error messages within a batch).

Pushover: When working as a single engineer in a project, Pushover gives me a simple and cheap notification interface. It directly pushes a notification to my phone whenever an alert is triggered. Grafana also has native support for Pushover, so I only have to put in a few API keys and I am ready to go.

PagerDuty: If you are working on a larger project or with a team, then I would suggest PagerDuty. With it, you can schedule specific times when different people (like individuals on your team) receive notifications. You can also create escalation policies in case someone can’t respond quickly enough. Again, Grafana offers native support for PagerDuty.

Heroku: There are other monitoring best practices in this article from Heroku. If you are within the Heroku ecosystem, then you can look at their logging add-ons (most of which include alerting).

Monitoring Example Project

Let’s look at an example project: a Kubernetes-powered web application behind an NGINX proxy, whose log output and response codes/times we want to monitor. If you aren’t interested in the implementation of these tools, feel free to skip to the next section.

Kubernetes automatically writes everything sent to stdout and stderr to log files on the node’s file system. We can monitor these logs easily, so long as our application correctly writes its logs to those streams. As an aside, it is also possible to send your logs directly to Elasticsearch from your application, but for our example project we want the lowest barrier to entry.

Now that our application is writing logs to the correct locations, let’s set up Elasticsearch, Kibana, and Filebeat to collect the output from the container. Additional and more up-to-date information can be found on the Elastic Cloud Quickstart page.

First, we deploy the Elastic Cloud Operator and RBAC rules.

kubectl apply -f
# Monitor the output from the operator
kubectl -n elastic-system logs -f statefulset.apps/elastic-operator

Next, let’s actually deploy the Elasticsearch cluster.

cat <<EOF | kubectl apply -f -
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.10.2
  nodeSets:
  - name: default
    count: 1
    config:
      node.store.allow_mmap: false
EOF

# Wait for the cluster to go green
kubectl get elasticsearch

Now that we have an Elasticsearch cluster, let’s deploy Kibana so we can visually query Elasticsearch.

cat <<EOF | kubectl apply -f -
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 7.10.2
  count: 1
  elasticsearchRef:
    name: quickstart
EOF

# Get information about the kibana deployment
kubectl get kibana

Review this page for more information about accessing Kibana.

Finally, we’ll add Filebeat, using this guide, to monitor the Kubernetes logs and ship them to Elasticsearch.

cat <<EOF | kubectl apply -f -
apiVersion: beat.k8s.elastic.co/v1
kind: Beat
metadata:
  name: quickstart
spec:
  type: filebeat
  version: 7.10.2
  elasticsearchRef:
    name: quickstart
  config:
    filebeat.inputs:
    - type: container
      paths:
      - /var/log/containers/*.log
  daemonSet:
    podTemplate:
      spec:
        dnsPolicy: ClusterFirstWithHostNet
        hostNetwork: true
        securityContext:
          runAsUser: 0
        containers:
        - name: filebeat
          volumeMounts:
          - name: varlogcontainers
            mountPath: /var/log/containers
          - name: varlogpods
            mountPath: /var/log/pods
          - name: varlibdockercontainers
            mountPath: /var/lib/docker/containers
        volumes:
        - name: varlogcontainers
          hostPath:
            path: /var/log/containers
        - name: varlogpods
          hostPath:
            path: /var/log/pods
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
EOF

# Wait for the beat to go green
kubectl get beat

Since our application uses NGINX as a proxy, we can use this wonderful module to write the response codes and times to InfluxDB.

Next, you can follow this guide to get Grafana running in your Kubernetes cluster. After that, set up the two data sources we are using: InfluxDB and Elasticsearch.

Finally, set up whatever alert channel notifiers you wish to use. In my case, I’d use Pushover since I’m just one developer. You may be more interested in something like PagerDuty if you need a fully-featured notification channel.

And there you have it! We now have an application we can build dashboards and alerts for using Grafana.

This setup can notify us about all sorts of issues. For example:

  • If we detected any ERROR level logs.
  • If we are receiving too many error response codes from our system.
  • If we are noticing our application responding slower than usual.

We did all this without making many changes to our application; and yet, we now have a lot of tools available to us. We can now instrument our code to record interesting points in time using InfluxDB. For example, if we received a batch of 500 messages and 39 of them were unable to be parsed, we can post a message to InfluxDB telling us that we received 461 valid messages and 39 invalid messages. We can then set up an alert in Grafana to let us know if that ratio of valid to invalid messages spikes.
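As a sketch of that idea, InfluxDB's line protocol is simple enough to build by hand; the measurement and field names below are my own illustration, not from the original article:

```python
def influx_line(measurement: str, fields: dict, tags: dict = None) -> str:
    """Build an InfluxDB line-protocol string: measurement[,tags] fields."""
    tag_str = "".join(f",{k}={v}" for k, v in (tags or {}).items())
    field_str = ",".join(f"{k}={v}" for k, v in fields.items())
    return f"{measurement}{tag_str} {field_str}"

# One point per processed batch: 461 valid messages, 39 invalid
point = influx_line("batch_results", {"valid": 461, "invalid": 39}, {"app": "ingest"})
print(point)  # batch_results,app=ingest valid=461,invalid=39
```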

Essentially, anything that is interesting to code should be interesting to monitor; now, we have the tools necessary to monitor anything interesting in our application.

At this point, I’ll give you a break to digest everything I’ve talked about. In Part Two I’ll be discussing some logging best practices.


Creating a Twitter Graph Using Slash GraphQL


Continuing my personal journey into learning more about Dgraph Slash GraphQL, I wanted to create a graph visualization of data stored in a graph database.  Graph visualization (or link analysis) presents data as a network of entities that are classified as nodes and links. To illustrate, consider this very simple network diagram:

While not a perfect example, one can understand the relationships between various services (nodes) and their interconnectivity (links). The diagram shows that the X service relies on the Y service to meet the needs of the business. However, what most may not realize is the additional dependency on the Z service, which is easily recognized in this illustration.

For this article, I wanted to build a solution that can dynamically create a graph visualization. Taking this approach, I will be able to simply alter the input source to retrieve an entirely different set of graph data to process and analyze.

The Approach

Instead of mocking up data in a Spring Boot application (as noted in the “Connecting Angular to the Spring Boot and Slash GraphQL Recommendations Engine” and “Tracking the Worst Sci-Fi Movies With Angular and Slash GraphQL” articles), I set a goal to utilize actual data for this article.

From my research, I concluded that the key to building a graph visualization is a data set containing varied relationships: relationships that are not predictable, driven by sources outside my control. The first data source that came to mind was Twitter.

After retrieving data using the Twitter API, the JSON-based data would be loaded into a Dgraph Slash GraphQL database using a fairly simple Python program, with a schema representing the tweets and users captured by twarc. Using the Angular CLI and the ngx-graph graph visualization library, the resulting data would be graphed to visually represent the nodes and links related to the #NeilPeart hashtag. The illustration below summarizes my approach:

Retrieving Data From Twitter Using `twarc`

While I have maintained a semi-active Twitter account (@johnjvester) for almost nine years, I visited the Twitter Developer Portal to create a project called “Dgraph” and an application called “DgraphIntegration”. This step was necessary in order to make API calls against the Twitter service.

The twarc solution (by DocNow) allows Twitter data to be retrieved from the Twitter API in an easy-to-use, line-oriented JSON format. The twarc command-line tool was written to work with Python and is easily configured by running the twarc configure command and supplying the following credential values from the “DgraphIntegration” application:

  • consumer_key
  • consumer_secret
  • access_token
  • access_token_secret
With the death of percussionist/lyricist Neil Peart, I performed a search for hashtags that continue to reference this wonderfully-departed soul. The following search command was utilized with twarc:
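The command itself did not survive formatting; based on twarc's CLI, it presumably looked something like this (the output filename is my own):

```shell
twarc search '#NeilPeart' > neil_peart_tweets.jsonl
```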

Below is one example of the thousands of search results that were retrieved via the twarc search:

Preparing Dgraph Slash GraphQL

Starting in September 2020, Dgraph has offered a fully managed backend service, called Slash GraphQL. Along with a hosted graph database instance, there is also a RESTful interface. This functionality, combined with 10,000 free credits for API use, provides the perfect target data store for the #NeilPeart data that I wish to graph.

The first step was to create a new backend instance, which I called tweet-graph:

Next, I created a simple schema for the data I wanted to graph:
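The schema block is missing here; judging from the types described next, it likely resembled the following (field names are assumptions):

```graphql
type User {
  screen_name: String! @id
  name: String
}

type Tweet {
  id_str: String! @id
  full_text: String
  created_at: DateTime
  user: User
}

type Configuration {
  id: ID!
  search_string: String
}
```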

The User and Tweet types house all of the data displayed in the JSON example above. The Configuration type will be used by the Angular client to display the search string utilized for the graph data.

Loading Data into Slash GraphQL Using Python

Two Python programs will be utilized to process the JSON data extracted from Twitter using twarc:

  • convert – processes the JSON data to identify any Twitter mentions to another user
  • upload – prepares and performs the upload of JSON data into Slash GraphQL

The core logic for this example lives in the upload program, which executes the following base code:

  1. The gather_tweets_by_user() function organizes the Twitter data into the data and users objects.

  2. The upload_to_slash(create_configuration_query(search_string)) call stores the search that was performed in Slash GraphQL for use by the Angular client.

  3. The for loop processes the data and users objects, uploading each record into Slash GraphQL using upload_to_slash(create_add_tweets_query(users[handle], data[handle])).
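The base code block itself was lost; here is a self-contained sketch of that flow (the endpoint URL and mutation shape are my own assumptions, and the flow is shown in comments because it depends on helpers defined elsewhere):

```python
import json
import urllib.request

SLASH_GRAPHQL_URL = "https://your-backend.cloud.dgraph.io/graphql"  # placeholder

def create_configuration_query(search_string: str) -> str:
    """Build the mutation that records the search string for the Angular client."""
    return (
        'mutation { addConfiguration(input: [{ search_string: "%s" }]) { numUids } }'
        % search_string
    )

def upload_to_slash(query: str) -> dict:
    """POST a GraphQL query/mutation to the Slash GraphQL backend."""
    request = urllib.request.Request(
        SLASH_GRAPHQL_URL,
        data=json.dumps({"query": query}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())

# Sketch of the overall flow described in the steps above:
# users, data = gather_tweets_by_user()
# upload_to_slash(create_configuration_query("#NeilPeart"))
# for handle in data:
#     upload_to_slash(create_add_tweets_query(users[handle], data[handle]))
```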

Once the program finishes, you can execute the following query from the API Explorer in Slash GraphQL:
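The query was not preserved; a hedged example of what one might run to verify the upload, relying on Dgraph's auto-generated queryTweet operation:

```graphql
query {
  queryTweet {
    full_text
    user {
      screen_name
    }
  }
}
```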

Using `ngx-graph` With Angular CLI

The Angular CLI was used to create a simple Angular application. In fact, the base component will be expanded for use by ngx-graph, which was installed using the following command:

npm install @swimlane/ngx-graph --save

Here is the working AppModule for the application:
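The AppModule listing was lost in formatting; a sketch of a minimal module for this setup (ngx-graph relies on Angular's animations module):

```typescript
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { BrowserAnimationsModule } from '@angular/platform-browser/animations';
import { HttpClientModule } from '@angular/common/http';
import { NgxGraphModule } from '@swimlane/ngx-graph';

import { AppComponent } from './app.component';

@NgModule({
  declarations: [AppComponent],
  imports: [
    BrowserModule,
    BrowserAnimationsModule,
    HttpClientModule,
    NgxGraphModule,
  ],
  bootstrap: [AppComponent],
})
export class AppModule {}
```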

In order to access data from Slash GraphQL, the following method was added to the GraphQlService in Angular:
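The method itself is missing; a hedged sketch of such a service using HttpClient (the URL, query shape, and method name are assumptions):

```typescript
import { HttpClient } from '@angular/common/http';
import { Injectable } from '@angular/core';
import { Observable } from 'rxjs';

@Injectable({ providedIn: 'root' })
export class GraphQlService {
  // Placeholder for your Slash GraphQL backend endpoint
  private readonly url = 'https://your-backend.cloud.dgraph.io/graphql';

  constructor(private httpClient: HttpClient) {}

  getGraphData(): Observable<any> {
    const query = `{
      queryConfiguration { search_string }
      queryTweet { id_str full_text user { screen_name name } }
    }`;
    return this.httpClient.post<any>(this.url, { query });
  }
}
```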

Preparing Slash GraphQL to Work With `ngx-graph`

The data in Slash GraphQL must be modified in order to work with the ngx-graph framework. As a result, a ConversionService was added to the Angular client, which performed the following tasks:

The resulting structure contains the following object hierarchy:
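The hierarchy listing was lost; ngx-graph consumes nodes and links arrays, so the converted structure presumably looked something like this (field names assumed):

```typescript
interface TwitterGraph {
  nodes: { id: string; label: string; data?: { tweetUrl: string } }[];
  links: { source: string; target: string }[];
}

// A hypothetical converted result: one user node linked to one tweet node
const example: TwitterGraph = {
  nodes: [
    { id: 'user1', label: '@johnjvester' },
    { id: 'tweet1', label: '#NeilPeart tribute', data: { tweetUrl: 'https://twitter.com/...' } },
  ],
  links: [{ source: 'user1', target: 'tweet1' }],
};
```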

While this work could have been completed as part of the load into Slash GraphQL, I wanted to keep the original source data in a format that could be used by other processes and not be proprietary to ngx-graph.

Configuring the Angular View

When the Angular client starts, the following OnInit method will fire, which will show a spinner while the data is processing. Then, it will display the graphical representation of the data once Slash GraphQL has provided the data and the ConversionService has finished processing the data:

On the template side, the following ngx tags were employed:



The ng-template tags not only provide a richer presentation of the data but also introduce the ability to click on a given node and see the original tweet in a new browser window.

Running the Angular Client

With the Angular client running, you can retrieve the data from Slash GraphQL by navigating to the application. You will then see a user experience similar to the one below:

It is possible to zoom into this view and even rearrange the nodes to better comprehend the result set.  

Please note: For those who are not fond of the “dagre” layout, you can adjust the ngx-graph.layout property to another graph layout option in ngx-graph.

When the end-user clicks a given node, the original message in Twitter displays in a new browser window:


A fully-functional Twitter Graph was created using the following frameworks and services:

  • Twitter API and Developer Application

  • twarc and custom Python code

  • Dgraph Slash GraphQL

  • Angular CLI and ngx-graph

In a matter of steps, you can analyze Twitter search results graphically, which will likely expose links and nodes that are not apparent through any other data analysis efforts. This is similar to the network example in the introduction of this article that exposed a dependency on the Z service.

If you are interested in the full source code for the Angular application, including the Python import programs referenced above, please visit the following repository on GitLab:

Have a really great day!


Going From Solid to Knockout Text on Scroll


Here’s a fun CSS trick to show your friends: a large title that switches from a solid color to knockout text as the background image behind it scrolls into place. And we can do it using plain ol’ HTML and CSS!

This effect is created by rendering two containers with fixed <h1> elements. The first container has a white background with knockout text. The second container has a background image with white text. Then, using some fancy clipping tricks, we hide the first container’s text when the user scrolls beyond its boundaries and vice-versa. This creates the illusion that the text background is changing.

Before we begin, please note that this won’t work on older versions of Internet Explorer. Also, fixed background images can be cumbersome on mobile WebKit browsers. Be sure to think about fallback behavior for these circumstances.

Setting up the HTML

Let’s start by creating our general HTML structure. Inside an outer wrapper, we create two identical containers, each with an <h1> element that is wrapped in a .title_wrapper.


  <!-- First container -->
  <div class="container container_solid">
    <div class="title_wrapper">
      <h1>The Great Outdoors</h1>
    </div>
  </div>

  <!-- Second container -->
  <div class="container container_image">
    <div class="title_wrapper">
      <h1>The Great Outdoors</h1>
    </div>
  </div>


Notice that each container has both a global .container class and its own identifier class — .container_solid and .container_image, respectively. That way, we can create common base styles and also target each container separately with CSS.

Initial styles

Now, let’s add some CSS to our containers. We want each container to be the full height of the screen. The first container needs a solid white background, which we can do on its .container_solid class. We also want to add a fixed background image to the second container, which we can do on its .container_image class.

.container {
  height: 100vh;
}

/* First container */
.container_solid {
  background: white;
}

/* Second container */
.container_image {
  /* Grab a free image from unsplash */
  background-image: url(/path/to/img.jpg);
  background-size: 100vw auto;
  background-position: center;
  background-attachment: fixed;
}

Next, we can style the <h1> elements a bit. The text inside .container_image can simply be white. However, to get knockout text for the <h1> element inside .container_solid, we need to apply a background image, then reach for the text-fill-color and background-clip CSS properties to apply the background to the text itself rather than to the boundaries of the <h1> element. Notice that the <h1> background has the same sizing as that of our .container_image element. That’s important to make sure things line up.

.container_solid .title_wrapper h1 {
  /* The text background */
  background: url(/path/to/img.jpg);
  background-size: 100vw auto;
  background-position: center;
  /* Clip the text, if possible */
  /* Including -webkit prefix for better browser support */
  -webkit-text-fill-color: transparent;
  text-fill-color: transparent;
  -webkit-background-clip: text;
  background-clip: text;
  /* Fallback text color */
  color: black;
}

.container_image .title_wrapper h1 {
  color: white;
}

Now, we want the text fixed to the center of the layout. We’ll add fixed positioning to our global .title_wrapper class and tack it to the vertical center of the window. Then we use text-align to horizontally center our <h1> elements.

.title_wrapper {
  display: block;
  position: fixed;
  margin: auto;
  width: 100%;
  /* Center the text wrapper vertically */
  top: 50%;
  -webkit-transform: translateY(-50%);
      -ms-transform: translateY(-50%);
          transform: translateY(-50%);
}

.title_wrapper h1 {
  text-align: center;
}

At this point, the <h1> in each container should be positioned directly on top of one another and stay fixed to the center of the window as the user scrolls. Here’s the full, organized code with some shadow added to better see the text positioning.

Clipping the text and containers

This is where things start to get really interesting. We only want a container’s <h1> to be visible when its current scroll position is within the boundaries of its parent container. Normally this can be solved using overflow: hidden; on the parent container. However, with both of our <h1> elements using fixed positioning, they are now positioned relative to the browser window, rather than the parent element. In this case using overflow: hidden; will have no effect.

For the parent containers to hide fixed overflow content, we can use the CSS clip property with absolute positioning. This tells the browser to hide any content outside of an element’s boundaries. Let’s replace the styles for our .container class to make sure they don’t display any overflowing elements, even if those elements use fixed positioning.

.container {
  /* Hide fixed overflow contents */
  clip: rect(0, auto, auto, 0);

  /* Does not work if overflow = visible */
  overflow: hidden;

  /* Only works with absolute positioning */
  position: absolute;

  /* Make sure containers are full-width and height */
  height: 100vh;
  left: 0;
  width: 100%;
}

Now that our containers use absolute positioning, they are removed from the normal flow of content. And, because of that, we need to manually position them relative to their respective parent element.

.container_solid {
  /* ... */

  /* Position this container at the top of its parent element */
  top: 0;
}

.container_image {
  /* ... */

  /* Position the second container below the first container */
  top: 100vh;
}

At this point, the effect should be taking shape. You can see that scrolling creates an illusion where the knockout text appears to change backgrounds. Really, it is just our clipping mask revealing a different <h1> element depending on which parent container overlaps the center of the screen.

Let’s make Safari happy

If you are using Safari, you may have noticed that its render engine is not refreshing the view properly when scrolling. Add the following code to the .container class to force it to refresh correctly.

.container {
  /* ... */

  /* Safari hack */
  -webkit-mask-image: -webkit-linear-gradient(top, #ffffff 0%, #ffffff 100%);
}

Here’s the complete code up to this point.

Time to clean house

Let’s make sure our HTML is following accessibility best practices. Users not using assistive tech can’t tell that there are two identical <h1> elements in our document, but those using a screen reader sure will because both headings are announced. Let’s add aria-hidden to our second container to let screen readers know it is purely decorative.

<!-- Second container -->
<div class="container container_image" aria-hidden="true">
  <div class="title_wrapper">
    <h1>The Great Outdoors</h1>
  </div>
</div>

Now, the world is our oyster when it comes to styling. We are free to modify the fonts and font sizes to make the text just how we want. We could even take this further by adding a parallax effect or replacing the background image with a video. But, hey, at that point, just be sure to put a little additional work into the accessibility so those who prefer less motion get the right experience.

That wasn’t so hard, was it?


Logging Best Practices: Part 2


Photo by Denis Agati on Unsplash

Best Practices for Logging

In Part One I discussed why monitoring matters and some ways to implement that. Now let’s talk about some best practices we can implement to make monitoring easier. Let’s start with some best practices for logging — formatting, context, and level.

First, be sure you “log a lot and then log some more.” Log everything you might need in both the happy path and the error path, since these logs will be all you have when errors occur in the future.

Until recently, I didn’t think I needed as many logs in the happy path. Meanwhile, my error path is full of helpful logging messages. Here is one example that just happened to me this week. I had some code that would read messages from a Kafka topic, validate them, and then pass them off to the DB to be persisted. Well, I forgot to actually push the message into the validated-messages array, which resulted in it always being empty. My point here is that everything was part of the happy path, so there weren’t any error logs for me to check. It took me a full day of adding logging and enabling debugging in production to find my mistake (that I forgot to push to the array). If I had messages like “Validating 1000 messages” and “Found 0 valid messages to be persisted,” it would have been immediately obvious that none of my messages were making it through. I could have solved it in an hour if I had “logged a lot and then logged some more.”


Formatting

This is another logging tip that I had taken for granted until recently. The format of your log messages matters… and it matters a lot.

People use JSON-formatted logs more and more these days, and I’m starting to lean into it myself. After all, there are many benefits to using JSON as your logging format. That said, if you pick a different log format, stick to it across all your systems and services. One of the major JSON-format benefits is that it is super easy to have generic error messages and then add additional data/context. For example:
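As a sketch of the idea (the field names here are hypothetical), the same generic message can carry different context per event:

```javascript
// Hypothetical JSON-formatted log lines: the generic message stays the
// same while the attached context varies per event.
const events = [
  { level: 'ERROR', message: 'Failed to persist message', topic: 'orders', offset: 1042 },
  { level: 'ERROR', message: 'Failed to persist message', topic: 'users', offset: 77 },
];
const logLines = events.map((entry) => JSON.stringify(entry));
logLines.forEach((line) => console.log(line));
```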


These messages are harder for humans to read, but easy to group, filter, and read for machines. In the end, we want to push as much processing onto the machine as possible anyway!

Another tip about your actual log message: in many cases, you’ll be looking to find similar events that occurred. Maybe you found an error and you want to know how many times it occurred over the last seven days. If the error message is something like “System X failed because Z > Y” — where X, Y, and Z are all changing between each error message — then it will be difficult to classify those errors as the same.

To solve this, use a general message for the actual log message so you can search by the exact error wording. For example: “This system failed because there are more users than there are slots available.” Within the context of the log message, you can attach all the variables specific to this current failure.

This does require you to have an advanced-enough logging framework to attach context. But if you are using JSON for your log messages, then you could have the “message” field be the same string for every event; any other context would appear as additional fields in the JSON blob. That way, grouping messages is easy, and specific error data is still logged. Although, if you are using a JSON format, then I’d suggest that you have a “message” and a “display.” That way, you get the best of both worlds.
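A minimal sketch of that "message"/"display" split (all field names are my own): "message" stays constant so grouping works, while "display" is the human-friendly version with the variables baked in.

```javascript
// "message" is identical for every occurrence (easy to group/search);
// "display" and the extra fields carry the event-specific context.
function buildLogEvent(slots, users) {
  return {
    message: 'System failed because there are more users than slots available',
    display: `System failed: ${users} users > ${slots} slots`,
    slots,
    users,
  };
}
```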


Context

Rarely does a single log message paint the entire picture; including additional context with it will pay off. There is nothing more frustrating than when you get an alert saying “All your base are belong to us” and you have no idea what bases are missing or who “us” is referencing.

Whenever you are writing a log message, imagine receiving it at 1:00am. Include all the relevant information your sleepy self would need to look into the issue as quickly as possible. You may also choose to log a transaction ID as part of your context. We’ll chat about those later.


Levels

Always use the correct level when writing your log messages. Ideally, your team will have different uses for the different log levels. Make sure you and your team are logging at the agreed-upon level when writing messages.

Some examples: INFO for general system state and probably happy-path code, ERROR for exceptions and non-happy-path code, WARN for things that might cause errors later or are approaching a limit, and DEBUG for everything else. Obviously, this is just how I use some of the log levels. Try to lay out a log-level strategy with your team and stick to it.

Also, ensure that whatever logging aggregator you use allows for filtering by specific log levels or groups of log levels. When you view the state of your system, you probably don’t care about DEBUG level logs and want to just search for everything INFO and above, for example.
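Underneath, "INFO and above" filtering is just an ordering over levels. A minimal sketch (the numeric weights are arbitrary; real aggregators handle this for you):

```javascript
// Map each level to a weight, then filter by a minimum threshold.
const LEVELS = { DEBUG: 10, INFO: 20, WARN: 30, ERROR: 40 };

function atLeast(minLevel, events) {
  return events.filter((e) => LEVELS[e.level] >= LEVELS[minLevel]);
}
```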

Log Storage

In order for your logs to be accessible, you’ll need to store them somewhere. These days, it is unlikely that you’ll have a single log file that represents your entire system. Even if you have a monolithic application, you likely host it on more than one server. As such, you’ll need a system that can aggregate all these log files.

I prefer to store my logs in Elasticsearch, but if you are in another ecosystem like Heroku, then you can use one of the provided logging add-ons. There are even some free ones to get you started.

You may also prefer third-party logging services like Splunk or Datadog to ship your logs and monitor, analyze, and alert from there.


If you have logged all your messages at the correct levels and have used easily groupable log messages, then filtering becomes simple in any system configuration. Writing a query in Elasticsearch will be so much simpler when you’ve planned your log messages with this in mind.

Transaction IDs

Let’s face it: gone are the days when a single service handled the full request path. Only in rare cases or demo projects will your services be completely isolated from other services. Even something as simple as a front-end and a separate backend API can benefit from having transaction IDs. The idea is that you generate a transaction ID (which can be as simple as a UUID) as early as possible in your request path. That transaction ID gets passed through every request and stored with the data in whichever systems store it. This way, when there is an error four or five levels deep in your system, you can trace that request back to when the user first clicked the button. Using transaction IDs makes it easier to bridge the gap between systems. If you see an error in InfluxDB, then you can use the transaction ID to find any related messages in Elasticsearch.

Other Interesting Metrics

Just recording log messages probably won’t provide the whole picture of your system. Here are a few more metrics that may interest you.


Throughput

Keeping track of how quickly your system processes a batch of messages — or finishes some job — can easily illuminate subtler errors. You may also be able to detect errors or slowness in your downstream systems by using throughput monitoring. Maybe a database is acting slower than usual, or your database switched to an inefficient query plan. Throughput monitoring is a great way to detect these types of errors.
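One way to sketch this (a minimal example, not a full metrics pipeline): time each batch and log the size alongside the elapsed time, so a slow downstream dependency shows up as a falling rate over time.

```javascript
// Record how long each batch takes; a slowdown downstream shows up
// as rising elapsedMs for the same batch size.
function processBatch(batch, processOne) {
  const start = Date.now();
  batch.forEach(processOne);
  const elapsedMs = Date.now() - start;
  console.log(JSON.stringify({ message: 'Batch processed', size: batch.length, elapsedMs }));
  return { size: batch.length, elapsedMs };
}
```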

Success vs. Error

Of course, no system will ever have a 100% success rate. Maybe you expect your system to return a successful response code at least 95% of the time. Logging your response codes will help you gauge whether your expected success rates are dropping.
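As a sketch, the success rate is just the share of non-error response codes, which you can compare against the 95% expectation above:

```javascript
// Compute a success rate from logged response codes; 2xx/3xx count
// as success here (a simplifying assumption).
function successRate(statusCodes) {
  if (statusCodes.length === 0) return 1;
  const ok = statusCodes.filter((code) => code >= 200 && code < 400).length;
  return ok / statusCodes.length;
}
```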

Response Times

The last interesting metric I’ll discuss is response times. Especially when you’ve got a bunch of developers all pushing to a single code base, it is difficult to realize when you’ve impacted the response times of another endpoint. Capturing the overall response time of every request may give you the insight necessary to realize when response times increase. If you catch it early enough, it may not be hard to identify the commit that caused the issue.
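A hypothetical wrapper (names are my own) that records per-endpoint response times makes this kind of regression visible without touching each handler:

```javascript
// Wrap a handler so every call records its endpoint and duration;
// a commit that slows an endpoint shows up in the recorded timings.
const timings = [];

function timed(endpoint, handler) {
  return (...args) => {
    const start = process.hrtime.bigint();
    const result = handler(...args);
    const ms = Number(process.hrtime.bigint() - start) / 1e6;
    timings.push({ endpoint, ms });
    return result;
  };
}
```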


Conclusion

In this article, I’ve talked about the differences between logging and monitoring and why both are necessary in a robust system. I covered some monitoring practices as well as some monitoring tools I like using, and we experimented with installing and setting up those tools for a system. Finally, I shared some logging best practices that will make your life much easier and make your monitoring tools much more useful.

If you have any questions, comments, or suggestions please leave them in the comments below and together we can all implement better monitors and build more reliable systems!


Lightning Web Components, Events and Lightning Message Service

This is the sixth article documenting what I’ve learned from a series of 12 Trailhead Live video sessions on Modern App Development on Salesforce and Heroku. In these articles, we’re focusing on how to combine Salesforce with Heroku to build an “eCars” app—a sales and service application for a fictitious electric car company (“Pulsar”) that allows users to customize and buy cars, service techs to view live diagnostic info from the car, and more. In case you missed my last article, you can find it here: Custom App Experiences With Lightning Web Components – DZone Web Dev

Just as a quick reminder: I’ve been following this Trailhead Live video series to brush up and stay current on the latest app development trends on these platforms that are key for my career and business. I’ll be sharing each step for building the app, what I’ve learned, and my thoughts from each session. These series reviews are both for my own edification as well as for others who might benefit from this content.

The Trailhead Live sessions and schedule can be found here:

The Trailhead Live sessions I’m writing about can also be found at the links below:

Last Time…

In the last session, we went into detail about considerations and tools related to customizing app experiences, from point-and-click methods all the way to fully custom code.  However, we only briefly touched on the featured go-to custom code framework: Lightning Web Components, or LWC. 

This time, we’re taking a deeper look into LWC and also covering how to communicate between components in our eCars app using Events and the Lightning Message Service.

Like Normal Web Components… but With More!

LWC are built on top of regular web components (WC) which almost every web application developer should be familiar with. In a previous article, I commented on how smart it is for Salesforce to move from SFDC-specific frameworks (like Visualforce) to LWC, since this allows most web developers to build on the platform with less of a learning curve. On top of that, web components are modern, interoperable, future-proof, and backwards compatible.  


Underpinning LWC is an open-source framework that provides a compiler and Lightning-specific properties. This is what Mohith Shrivastava calls “sugar” on top of a web component.

The verbose WC syntax then becomes really simple; we can add styling using the Salesforce Lightning Design System (SLDS) as well as Metadata.



For those of you, like me, who may have come from “Visualforce/APEX land” and find LWC somewhat of a fresh concept, remember this: when you are learning LWC, you are learning the standard web components as well. Two birds, one sweet framework.

Another important tidbit for those of us coming from APEX/Visualforce: JavaScript, and thus LWC, is CaSe sEnsiTive. So if you’re used to APEX code being case insensitive, save yourself a lot of debugging and don’t forget to use camelCase or kebab-case where needed.

Building Out the Enhanced eCars Inventory Gallery

Let’s jump into the first activity, where our goal is to build our custom inventory list experience for the eCars app. In the previous article, we created a prototype using a Salesforce Lightning Design System (SLDS) plug-in for Sketch.  We also used standard SLDS base components to arrive at this enhanced layout:


Certainly nicer than a bland list view

We will build in our IDE of choice, Visual Studio Code. You’ll need a scratch org and the Salesforce-specific extensions for VS Code to get started. If you are just finding this article now and are not yet set up with a Salesforce developer account, dev hub, and scratch orgs, go ahead and jump to the first article of the series to get those things set up. Once that’s done, you’re up and running and ready to develop.

When building LWC, the component-reference documentation is GOAT (greatest of all time). Salesforce has invested a lot in giving you “pre-baked components that have a lot of functionality,” including providing HTML, CSS, and JavaScript code out-of-the-box. These are largely the same components you’d find in SLDS, so the design process and the code implementation are going to flow naturally. One of the components we used for the Sketch design is available in the component reference, of course: the Lightning Card.


An all-you-can-copy-and-paste component buffet

We can proceed by just pulling in the pre-baked code for the Lightning Card component we want to use. At this point, it’s useful to understand the term “slot.” A “slot” is a placeholder in components where you can add your own component or HTML mark-up. Mohith also reviews badges documentation and pulls in a stand-in image. With a refresh to our local dev server (Code Preview), we can see the beginnings of the inventory card.

We can then expand on this single inventory card component and create iterations of each card that will pass data from the Vehicle__c Salesforce custom object records to this new inventory component. We can bind the data properties from Salesforce data to our component using the @api decorator.

LWC adds a lot of utilities for you to create flow control for apps, including repeaters and conditionals. You can check them out in the LWC documentation, where you’ll learn all the fundamentals like templates, data binding, and the development lifecycle.

Design Help for Developers

Developers and designers are usually not the same people, and the skillsets don’t always overlap. As a result, it’s very nice to have libraries and design systems like SLDS and the LWC component reference, since they take care of a lot of the design thinking for us developers. But of course, if you are a developer/designer hybrid unicorn and have the CSS background, you can build onto SLDS with your own designs and create amazing interfaces.

In the documentation, we have a lot at our fingertips. We even have a playground to learn about different ways to size our layouts; we are encouraged to try different options to understand what type of layouts are actually possible. Developers even have the option to build out layouts that are automatically responsive.

I personally love what the SLDS has done: it’s given us utilities so that we don’t have to actually worry about them. For example, if we want to add some padding somewhere, we can look at the classes in the documentation, find the appropriate one, and simply copy-paste the class to our component. Using on-page or in-line CSS should be a rare case and avoided whenever possible.



Back to App Building

Moving forward in the walkthrough, we get our component set up and preview it locally:


Looks like we are almost there!

There are certain properties in Lightning Layouts that we can use to our advantage. In this example, we want multiple rows, which means we get to play with the size element. In VS Code, if the LWC extension is installed, hovering over an element shows all its properties. Then we can start typing it in, find what we need, and let the auto-complete save us time.


This will save me lots of Google searches

Events in Lightning Components

We need Events to handle transmitting data in LWC between separate components. That’s because they do not automatically communicate with each other, even if they are organized as parent-child components. (Maybe the child is a broody teenager?) But jokes aside, this is by design and for security reasons. That way, foreign components embedded in the same page cannot “spy” on another component’s data.

The important thing to remember with Events in LWC is that “if you do not have a strategy, you can quickly mess up your architecture without following a consistent design pattern.” The best practice for child-to-parent components is to always use Events. In our eCars inventory gallery, we add a function where clicking one image will mute or “blur” all others on the screen. The blur effect is done in CSS, but handling the event to blur/unblur the appropriate cards requires Events and JavaScript.

We create a JavaScript event (new CustomEvent('cardselect')) and pass the vehicle’s VIN so we can identify which element is being clicked. We dispatch the Event on clicking the card, but how can we have the card component listen for this? We can go into the HTML element for the inventory-card and add an “oncardselect” property calling a function, our handler.
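A framework-free sketch of that child-to-parent flow may help. In real LWC code, the child would call this.dispatchEvent(new CustomEvent('cardselect', { detail: { vin } })) and the parent would bind oncardselect in its markup; here Node's EventTarget stands in for the component, and the VIN is a made-up example value.

```javascript
// The card (child) dispatches a 'cardselect' event carrying a VIN;
// the parent listens for it and records which card was selected.
const card = new EventTarget(); // stand-in for the child component

let selectedVin = null;
// parent side: the equivalent of the oncardselect handler
card.addEventListener('cardselect', (event) => {
  selectedVin = event.detail.vin;
});

// child side: dispatch on click, carrying the clicked card's VIN
function handleCardClick(vin) {
  const event = new Event('cardselect');
  event.detail = { vin }; // plain property standing in for CustomEvent's detail
  card.dispatchEvent(event);
}

handleCardClick('5YJ3E1EA7LF000316');
```

Because dispatchEvent is synchronous, the parent's handler has already run by the time handleCardClick returns.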


We can then add our handler to our JavaScript with a map function to return the modified element, followed by the @track decorator to ensure we have reactivity when we are interacting with our app.


Lightning cards go blurrrrrr 

Concluding Thoughts

I remember the first time I went through a Visualforce tutorial. I needed to review existing working example code, modify it, break it, and build it back up again. I anticipate I’ll have to do the same with LWC, and also work through some practical exercises to build up experience and confidence with the framework. I would love to hear from those who are familiar with regular web components about how similar LWC feels and how quick the transition is from regular WC to LWC. Luckily, I have a backlog of Visualforce pages from Salesforce orgs that I manage—they could use some conversion to Lightning Web Components. I reckon I’ll get plenty of practice with LWC that way! There is also a nice sample app on GitHub from the Developer Advocate Team that actually demonstrates how to do this.

In the next article, we’re going to look at automating back-end processes and logic for our eCars app using Flows and APEX.

If you haven’t already joined the official Chatter group for this series, I certainly recommend you do so. That way, you can get the full value of the experience and also pose questions and start discussions with the group. Oftentimes, there are valuable discussions and additional references available there such as the slides from the presentation and links to other resources and references.

About me:  I’m an 11x certified Salesforce professional who’s been running my own Salesforce consultancy for several years. If you’re curious about my backstory on accidentally turning into a developer and even competing on stage on a quiz show at one of the Salesforce conventions, you can read this article I wrote for the Salesforce blog a few years ago.

Thanks to Jason Sun for his kind permission to publish this article.
