How to Reorganize App Development to Stay Afloat in COVID-19


The beginning of 2020 put the world in a difficult situation. The coronavirus outbreak and its spread have affected not only particular countries but the global economy as a whole. SMBs and enterprises worldwide are experiencing the catastrophic impact of the COVID-19 pandemic.

To cope with these unexpected difficulties, companies are compressing their budgets and searching for ways to reduce costs. They are striving to optimize their processes to cut app development expenses and save effort, since they have no extra money to pay for outsourced development services.

What exactly can a company do to minimize app development resources while under heavy economic pressure? Can anything in its traditional workflows be transformed?

We’ve defined the key practical tips that will keep your business from falling victim to COVID-19.

Reorganization of the Work Approach

The main idea here is to organize remote work for your employees to keep them safe. Of course, if you and your staff are used to working from an office, it may be hard to adapt to the new conditions. For some time, your team members will have to accept that they can’t drop by each other’s desks to check how the work is going, or gather at the watercooler for a small chat. However, health should be the highest priority, especially in pandemic times. So make sure to equip your employees with everything they need to work from home comfortably, and spare them the need to put their health at risk on the way to work.

And yes, managing a remote development team may be challenging, especially if you lack the relevant experience. There are plenty of articles with tips and tricks on managing remote development teams effectively. Replace your physical watercooler with a “virtual watercooler” Slack channel, for example, so that your team still has a place to talk during lunchtime, or relax with a chat after a productive working day.

Tools for making remote work effective (Google Drive, Airtable, Trello, Slack, Zoom, etc.) 

Automating Remote Work

Work automation has a range of advantages. It relieves you of the need to hire additional people for mundane tasks, and lets your employees focus on the most critical work without being distracted by less important things.

With a remote work approach, work automation software is a necessity. It minimizes the need to micromanage your team: you can monitor each team member’s progress remotely and see their achievements without constant oversight.

Pay attention to such tools as:

  • Slack, Skype, Microsoft Teams – to ensure instant communication between your team members.

  • Zoom, Google Hangouts, Skype (again) – for video conferencing.

  • Trello, Monday.com, Asana – for effective task management.

  • Airtable, Notion, Google Drive – for keeping everyone on the same page with a single shared database.

Automating the Entire App Development Cycle

The traditional app development approach loses its attractiveness when time becomes crucial. Within the classical development process, separate silos do their own work – designing, coding, testing, deploying – and nobody sees the whole picture.

Constant communication across the team is not all it takes to boost the app development process. Automation tools cannot be ignored either.

Let’s see how, where, and what tools exactly to utilize within the app development cycle to accelerate the entire process, and make it run more smoothly.

Using No-Code Development Tools

Across the whole app development process, many types of software can help: for example, Sketch and Figma speed up design, HubSpot CRM and Salesforce are powerful CRMs, and Zapier and Parabola are great workflow automation tools.

But what about app development itself? When you’re under economic pressure and short on time, people, and technology, no-code app development software is a silver bullet. With visual development solutions such as Webflow, Bubble, Wavemaker, and UI Bakery, preparing the back end, building the front end, and integrating the two are done visually, and are thus simpler and faster.

No-code app development software relieves you of the need to hire front-end and back-end developers. One citizen developer who knows how to use visual development tools can build the solution you need in days or weeks.

Moreover, a lot of no-code or low-code app building platforms provide you with a variety of free templates and dashboards you can use to avoid building apps from scratch and save app development costs. 

Using Test Automation Tools

Testing is not a stage to skip in the development cycle, and it’s certainly one that can be automated, at least partially. Tools such as Selenium, Appium, and Katalon Studio are designed specifically to run unit, functional, and integration tests automatically, and to promptly notify you if any bugs are found.

Thus, the tests cover the largest possible scope of your app’s functionality, there are far fewer human errors and post-release bugs than with manual testing, and release speed increases.
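
As an illustration, here is a minimal sketch of such an automated browser check, using the Selenium WebDriver bindings for Node.js (the URL and expected title are hypothetical placeholders):

// smoke-test.js – a minimal automated browser check
const { Builder, By, until } = require('selenium-webdriver')

;(async function smokeTest () {
  const driver = await new Builder().forBrowser('chrome').build()
  try {
    // Open the page and wait until it has loaded
    await driver.get('https://your-app.example.com')
    await driver.wait(until.titleContains('Your App'), 5000)
    // Check that the main heading rendered
    const heading = await driver.findElement(By.css('h1')).getText()
    console.log('Smoke test passed, heading:', heading)
  } finally {
    await driver.quit()
  }
})()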

Using Deployment Automation Tools

There is a great variety of well-known solutions like Jenkins, Bamboo, and TeamCity that allow for continuous app deployment, testing, and releasing. However, using these tools to their full extent requires particular knowledge, coding skills, and experts (DevOps engineers in most cases).

There are, however, visual development alternatives (Wavemaker, UI Bakery) that let you not only create a front end without coding, but also connect data and a back end to it and deploy your app right within the platform. So there’s no need to find and configure a separate solution for deployment.

To Sum Up

To survive “global economic storms” like the current one caused by COVID-19, companies have to keep up with the dynamic changes these storms bring. Remote work is becoming the norm, and organizations have to get used to the new conditions. Automation is no longer optional but a necessity. The demand for effective app development solutions that allow creating apps faster and cheaper is reaching stratospheric levels.

Luckily, there is a sufficient number of out-of-the-box app development tools you can choose from to speed up app design, development, testing, and deployment.

Let it become the choice that helps you evolve your business and stay afloat in these tough times.




A “new direction” in the struggle against rightward scrolling


A menu drawer slides in from the left side of the screen

The other day I was building a responsive website with a navigation menu that slides into view from the left when you click the menu button.

I was about to head off to the pub, but then I learned the menu was supposed to slide in from the right side instead.

No big deal. With a few lines of CSS I can set the default position off the right edge…

An element positioned off-screen to the right causes scrolling

But oh no! An element off the right edge of the screen will cause horizontal scrolling. This is not cool.

My first thought was to reach for trusty old overflow-x: hidden and be on my way. But there are a few common situations where this won’t work:

  • Some of the many beloved clearfix solutions for containing floats use overflow:auto;
  • If any of the child elements are position:sticky; they will stop sticking if their parent’s overflow value is anything other than visible. This is explained in this Stack Overflow answer.

Darn. Now what? I wish I could force the right side of the screen to behave like the left side. Well, it turns out we can! The origin of the scrollbar can be reversed with the lesser-known CSS direction property, which sets the direction in which text content flows.
body,
html {
  direction: rtl;
}

body * {
  direction: ltr;
}

The first rule switches the text direction of the root node and the body to right-to-left. This also means the origin of the scrollbar starts from the right, expanding to the left. These are the elements that were previously being overflowed by that right-sided menu. But now they no longer care about elements hidden “behind” them on the right side.

Setting direction to rtl will also reverse the default alignment of text, and English looks funny flowing from right to left. So, the next rule resets all children of the body element back to the default left-to-right text direction.

I have not used this hack in production yet, nor have I done extensive cross-browser testing yet. If you have good or bad experiences with this technique I’d love to hear about it.




Busting the Myths About Node.js for Enterprises


Node.js is a JavaScript runtime built on Google Chrome’s V8 engine. It’s great since it can handle many connections concurrently. Take the classic ‘hello world’ example:

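A minimal sketch of it, based on the canonical hello-world server from the Node.js docs:

const http = require('http')

// One callback handles every incoming request,
// all on a single thread driven by the event loop.
const server = http.createServer((req, res) => {
  res.statusCode = 200
  res.setHeader('Content-Type', 'text/plain')
  res.end('Hello World\n')
})

server.listen(3000, '127.0.0.1', () => {
  console.log('Server running at http://127.0.0.1:3000/')
})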

Node.js is open-source, but it’s perfect for enterprises too. Take it from Netflix, eBay, PayPal, Groupon, and even NASA, who use Node.js on a daily basis.

This article will bust the myths about Node.js for enterprises. Keep reading.

Myth 1. Node.js Isn’t Secure

To tell the truth, Node.js is one of the most secure environments in the world.

Recently, NPM rolled out an enterprise version of its package manager (also known as npmE). The package manager for enterprises lets you run NPM’s infrastructure behind your firewall. It acts like a gatekeeper, letting you filter out unwanted packages. Therefore, large companies shouldn’t be concerned about keeping data in the cloud.

With npmE you can bring all your development under one roof. npm Enterprise has a solid infrastructure: businesses get their own private registry with advanced security features. They can control access to code, quickly detect vulnerabilities, and replace faulty code.

In addition, npmE will notify you about any vulnerable packages early on, during the ‘npm install’ phase. This ensures that faulty packages don’t enter the CI/CD pipeline.

Finally, users can always report security bugs in Node.js via HackerOne. Security experts will then send you guidelines on how to proceed within 48 hours.

Myth 2. Node.js Is Slow 

Node.js is pretty fast without any exclusive thread hacks. It handles multiple connections at the same time thanks to its single-threaded, event-driven architecture. 

Meanwhile, many web platforms create a new thread whenever a request is made. This uses up RAM, resulting in lower speed. Node.js, on the other hand, makes use of the event loop and callbacks for I/O operations.

Roughly, this model can be compared to a restaurant. The waiter takes your order, passes it to the chef, and proceeds to take orders from other customers. He doesn’t just wait until the chef cooks your food; he continues to handle other customers’ requests. This is exactly how Node.js works: it can handle thousands of connections at the same time, using resources more effectively.

This is much faster than typical synchronous, blocking systems, which create a new thread every time a new request comes in. If there are lots of requests, you can eventually run out of threads; new requests have to wait while existing threads sit idle. Not as effective as it may seem, right?
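
To make the restaurant analogy concrete, here is a small sketch of non-blocking I/O (the file name is a hypothetical placeholder):

const fs = require('fs')

// Non-blocking: Node.js hands the file read to the system and moves on;
// the callback runs later, once the data is ready.
fs.readFile('./order.txt', 'utf8', (err, data) => {
  if (err) throw err
  console.log('Order ready:', data) // printed second
})

// Meanwhile, the single thread keeps serving other "customers".
console.log('Taking the next order...') // printed first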

Myth 3. JavaScript Is an Inferior Programming Language

JavaScript is one of the most hated programming languages on the web. Before ES6, JavaScript wasn’t workable for enterprises. However, the ES6/ES2015 standard changed the game: it brought features like enhanced object literals, rest and spread parameters, multi-line strings, Promises, and more.
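
Here’s a quick illustrative sketch showing a few of those ES6 features side by side:

// Template literals (multi-line-friendly strings) and rest parameters
const greet = (greeting, ...names) =>
  names.map(name => `${greeting}, ${name}!`)

// Spread an array into a function call
console.log(Math.max(...[3, 1, 4])) // 4

// Promises for readable asynchronous flows
const wait = ms => new Promise(resolve => setTimeout(resolve, ms))
wait(100).then(() => console.log(greet('Hello', 'Ada', 'Grace')))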

Consequently, JavaScript is now ‘more usable’ than ever before. The proof? The number of enterprises using it speaks for itself: LinkedIn, The New York Times and more.

Myth 4. Node.js Isn’t Convenient for Java and .NET Developers

Node.js might be harder for Java and .NET developers; it just doesn’t let you code in the same convenient manner. But there’s a solution for that too: try Nest.js.

Nest.js is a progressive Node.js framework. Basically, it’s a complete toolkit for building scalable server-side apps. It’s pretty flexible thanks to its modular architecture; for instance, it lets you use various libraries in your development. Nest.js makes use of the latest JavaScript features too, allowing you to apply modern design patterns and more.

The cherry on top: it lets you see how your application may potentially look right in your browser.

Myth 5. Node.js Isn’t Suitable for Building Complex Architecture  

That’s a huge misconception. Node.js is incredibly efficient when it comes to microservices or serverless architecture. The platform allows you to create highly scalable, robust web apps based on microservices. Over at Fulcrum Rocks development company, we always prefer using Node.js for microservices architecture. 

Microservices are becoming more and more popular. Thanks to microservices, companies can be more agile: you can develop your app unit by unit, in different programming languages, using different frameworks, and deploy each unit independently. For the record, Amazon, Netflix, and PayPal have already implemented this.

Meanwhile, monolithic architecture doesn’t let you scale easily. If your app sees a traffic surge, you need to upgrade your servers, but in a monolithic environment everything has to be scaled together. Even if just one part of your app can’t handle the load, you have to scale everything, wasting resources.

This is not an issue for microservices: you simply scale the specific part that needs it, and that’s it.

Myth 6. Node.js Isn’t Convenient for FinTech 

Fintech is number-sensitive. The problem is that Node.js is dynamically typed, so silly things like '100' + '10' will turn into '10010'. Small stuff like this can get pretty annoying. Yet the issue can be solved pretty easily. All you need to do is use the libraries here and here.
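
For illustration, here is a sketch of the pitfall together with one common workaround: keeping monetary amounts in integer cents.

// Strings concatenate instead of adding
console.log('100' + '10')                   // '10010'
console.log(Number('100') + Number('10'))   // 110

// Binary floating point is risky for money, too
console.log(0.1 + 0.2)                      // 0.30000000000000004

// A common convention: store amounts as integer cents
const totalCents = 10000 + 1000             // $100.00 + $10.00
console.log((totalCents / 100).toFixed(2))  // '110.00'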

What are the Benefits of Using Node.js?

1) Frequent Updates and Support

Node.js offers long-term support. The project frequently releases security patches, performance optimizations, and more. It quickly adopts modern JavaScript features, so you don’t have to worry about being left behind.

All these updates make the development game easier and keep your products in sync with new tech. What’s more, Node.js promises to maintain every major release for 18 months from the moment it becomes an LTS version.

Besides, there’s a very large community around Node.js. There are tons of developers that add value to the network each day. 

2) Top-Notch Performance 

Node.js is asynchronous; it features a single-threaded, event-driven architecture. Thus, it can handle tons of requests at the same time, and your app’s response time becomes much faster. This saves your resources too, since you don’t have to fork out for advanced hardware.

And because Node.js runs JavaScript, transforming JSON data is fast from the get-go.

What’s more, Node.js works well with microservices architecture, and it helps you scale, dramatically.

Case in point: Netflix is currently experiencing a sudden traffic surge due to the lockdown: many new users have subscribed to the platform, and they consume more content. Nevertheless, the company managed to scale quickly, thanks to its microservices architecture and Node.js.

3) Single Package Manager 

The npm registry comprises over 190,000 modules, giving developers a chance to use lots of tools and modules in their work. They don’t have to write typical features from scratch; instead, they can use open-source, ready-made solutions. So is it surprising that PayPal reported a 100% productivity boost after switching to Node.js?

Large companies might be worried about security. But as we stated above, npm released an enterprise edition too. You can take advantage of private registries and share code safely with your team members.

4) Node.js Is Easy to Adopt

It’s easy to get a grip on Node.js since it’s based on JavaScript. It follows similar principles, which means it’s pretty easy to learn for Java and .NET devs worldwide. It’s pretty approachable for beginners too.

5) JSON Formats

JSON is a run-of-the-mill format for data interchange; it’s everywhere. So it’s pretty great that Node.js works with JSON natively (as opposed to Java objects). Node.js returns data in JSON, which means transforming it is fast by default, and you won’t need an additional parser for data processing.
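
For instance, serializing and parsing are built right into the language, with no extra mapping layer:

// Object -> JSON string, ready to send over the wire
const payload = JSON.stringify({ user: 'ada', active: true })

// JSON string -> plain JavaScript object
const parsed = JSON.parse(payload)
console.log(parsed.user) // 'ada'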

Conclusion

I guess it’s clear now that nothing stands between your business and Node.js. Actually, it’s quite the opposite: Node.js offers tons of gains that help you boost your performance.




Roll Your Own Comments With Gatsby and FaunaDB


If you haven’t used Gatsby before, have a read about why it’s fast in every way that matters, and if you haven’t used FaunaDB before, you’re in for a treat. If you’re looking to make your static sites full-blown Jamstack applications, this is the back-end solution for you!

This tutorial will focus only on the operations you need to use FaunaDB to power a comment system for a Gatsby blog. The app comes complete with input fields that allow users to comment on your posts, and an admin area for you to approve or delete comments before they appear on each post. Authentication is provided by Netlify’s Identity widget, and it’s all sewn together using Netlify serverless functions and an Apollo/GraphQL API that pushes data up to a FaunaDB database collection.

I chose FaunaDB for the database for a number of reasons. Firstly, there’s a very generous free tier, perfect for those small projects that need a back end. Secondly, there’s native support for GraphQL queries, and it has some really powerful indexing features!

…and to quote the creators:

No matter which stack you use, or where you’re deploying your app, FaunaDB gives you effortless, low-latency and reliable access to your data via APIs familiar to you

You can see the finished comments app here.

Get Started

To get started clone the repo at https://github.com/PaulieScanlon/fauna-gatsby-comments

or:

git clone https://github.com/PaulieScanlon/fauna-gatsby-comments.git

Then install all the dependencies:

npm install

Also cd into functions/apollo-graphql and install the dependencies for the Netlify function:

npm install

This is a separate package and has its own dependencies; you’ll be using this later.

We also need to install the Netlify CLI, as you’ll use it later:

npm install netlify-cli -g

Now let’s add three new files that aren’t part of the repo.

At the root of your project, create a .env, a .env.development, and a .env.production file.

Add the following to .env:

GATSBY_FAUNA_DB =
GATSBY_FAUNA_COLLECTION =

Add the following to .env.development:

GATSBY_FAUNA_DB =
GATSBY_FAUNA_COLLECTION =
GATSBY_SHOW_SIGN_UP = true
GATSBY_ADMIN_ID =

Add the following to .env.production:

GATSBY_FAUNA_DB =
GATSBY_FAUNA_COLLECTION =
GATSBY_SHOW_SIGN_UP = false
GATSBY_ADMIN_ID =

You’ll come back to these later, but in case you’re wondering:

  • GATSBY_FAUNA_DB is the FaunaDB secret key for your database
  • GATSBY_FAUNA_COLLECTION is the FaunaDB collection name
  • GATSBY_SHOW_SIGN_UP is used to hide the Sign up button when the site is in production
  • GATSBY_ADMIN_ID is a user id that Netlify Identity will generate for you

If you’re the curious type you can get a taster of the app by running gatsby develop or yarn develop and then navigate to http://localhost:8000 in your browser.

FaunaDB

So let’s get cracking, but before we write any operations, head over to https://fauna.com/ and sign up!

Database and Collection

  • Create a new database by clicking NEW DATABASE
  • Name the database: I’ve called the demo database fauna-gatsby-comments
  • Create a new Collection by clicking NEW COLLECTION
  • Name the collection: I’ve called the demo collection demo-blog-comments

Server Key

Now you’ll need to set up a server key. Go to SECURITY:

  • Create a new key by clicking NEW KEY
  • Select the database you want the key to apply to, fauna-gatsby-comments for example
  • Set the Role as Admin
  • Name the server key: I’ve called the demo key demo-blog-server-key

Environment Variables Pt. 1

Copy the server key and add it to GATSBY_FAUNA_DB in .env.development, .env.production and .env.

You’ll also need to add the name of the collection to GATSBY_FAUNA_COLLECTION in .env.development, .env.production and .env.

Adding these values to .env is just so you can test your development FaunaDB operations, which you’ll do next.
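
The snippets below also assume a configured FaunaDB client at the top of boop.js. The exact code ships with the starter repo, but presumably it looks roughly like this sketch (dotenv and the variable names are assumptions based on the .env files above):

// boop.js (top of file) – a sketch of the client setup used by the snippets below
require('dotenv').config() // load the .env values (assumes dotenv is a dependency)
const faunadb = require('faunadb')

// Shorthand for FaunaDB's query builder, used as `q` throughout
const q = faunadb.query

// Authenticate with the server key you created earlier
const client = new faunadb.Client({ secret: process.env.GATSBY_FAUNA_DB })
const COLLECTION_NAME = process.env.GATSBY_FAUNA_COLLECTION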

Let’s start by creating a comment so head back to boop.js:

// boop.js
...
// CREATE COMMENT
createComment: async () => {
  const slug = "/posts/some-post"
  const name = "some name"
  const comment = "some comment"
  const results = await client.query(
    q.Create(q.Collection(COLLECTION_NAME), {
      data: {
        isApproved: false,
        slug: slug,
        date: new Date().toString(),
        name: name,
        comment: comment,
      },
    })
  )
  console.log(JSON.stringify(results, null, 2))
  return {
    commentId: results.ref.id,
  }
},
...

The breakdown of this function is as follows:

  • q is the instance of faunadb.query
  • Create is the FaunaDB method to create an entry within a collection
  • Collection is the area in the database where the data is stored; Create takes the collection as the first argument and a data object as the second.

The second argument is the shape of the data you need to drive the application’s comment system.

For now you’re going to hard-code slug, name, and comment, but in the final app these values are captured by the input form on the posts page and passed in via args.

The breakdown for the shape is as follows:

  • isApproved is the status of the comment; by default it’s false until we approve it in the Admin page

  • slug is the path to the post where the comment was written

  • date is the timestamp of when the comment was written

  • name is the name the user entered in the comments form

  • comment is the comment the user entered in the comments form

When you (or a user) create a comment, you’re not really interested in dealing with the response, because as far as the user is concerned, all they’ll see is either a success or an error message.

After a user has posted a comment, it will go into your Admin queue until you approve it. But if you did want to return something, you could surface it in the UI by returning it from the createComment function.

Create a comment

If you’ve hard-coded a slug, name, and comment, you can now run the following in your CLI:

node boop createComment

If everything worked correctly you should see a log in your terminal of the new comment.

{
   "ref": {
     "@ref": {
       "id": "263413122555970050",
       "collection": {
         "@ref": {
           "id": "demo-blog-comments",
           "collection": {
             "@ref": {
               "id": "collections"
             }
           }
         }
       }
     }
   },
   "ts": 1587469179600000,
   "data": {
     "isApproved": false,
     "slug": "/posts/some-post",
     "date": "Tue Apr 21 2020 12:39:39 GMT+0100 (British Summer Time)",
     "name": "some name",
     "comment": "some comment"
   }
 }
 { commentId: '263413122555970050' }

If you head over to COLLECTIONS in FaunaDB you should see your new entry in the collection.

You’ll need to create a few more comments while in development so change the hard-coded values for name and comment and run the following again.

node boop createComment

Do this a few times so you end up with at least three new comments stored in the database, you’ll use these in a moment.

Delete comment by id

Now that you can create comments you’ll also need to be able to delete a comment.

By adding the commentId of one of the comments you created above, you can delete it from the database. The commentId is the id nested inside the ref["@ref"] object in the response above.

Again, you’re not really concerned with the return value here, but if you wanted to surface it in the UI, you could do so by returning something from the deleteCommentById function.

// boop.js
...
// DELETE COMMENT
deleteCommentById: async () => {
  const commentId = "263413122555970050";
  const results = await client.query(
    q.Delete(q.Ref(q.Collection(COLLECTION_NAME), commentId))
  );
  console.log(JSON.stringify(results, null, 2));
  return {
    commentId: results.ref.id,
  };
},
...

The breakdown of this function is as follows:

  • client is the FaunaDB client instance
  • query is a method to get data from FaunaDB
  • q is the instance of faunadb.query
  • Delete is the FaunaDB delete method to delete entries from a collection
  • Ref is the unique FaunaDB ref used to identify the entry
  • Collection is the area in the database where the data is stored

If you’ve hard coded a commentId you can now run the following in your CLI:

node boop deleteCommentById

If you head back over to COLLECTIONS in FaunaDB, you should see that the entry no longer exists in the collection.

Indexes

Next you’re going to create an INDEX in FaunaDB.

An INDEX allows you to query the database with a specific term and define a specific data shape to return.

When working with GraphQL and/or TypeScript this is really powerful, because you can use FaunaDB indexes to return only the data you need, in a predictable shape. This makes typing responses in GraphQL and/or TypeScript a dream… I’ve worked on a number of applications that just return a massive object of useless values, which will inevitably cause bugs in your app. Blurg!

  • Go to INDEXES and click NEW INDEX
  • Name the index: I’ve called this one get-all-comments
  • Set the source collection to the name of the collection you set up earlier

As mentioned above when you query the database using this index you can tell FaunaDB which parts of the entry you want to return.

You can do this by adding “values”, but be careful to enter the values exactly as they appear below, because (on the FaunaDB free tier) you can’t amend them after you’ve created them. If there’s a mistake, you’ll have to delete the index and start again… bummer!

The values you need to add are as follows:

  • ref
  • data.isApproved
  • data.slug
  • data.date
  • data.name
  • data.comment

After you’ve added all the values you can click SAVE.

Get all comments

// boop.js
...
// GET ALL COMMENTS
getAllComments: async () => {
   const results = await client.query(
     q.Paginate(q.Match(q.Index("get-all-comments")))
   );
   console.log(JSON.stringify(results, null, 2));
   return results.data.map(([ref, isApproved, slug, date, name, comment]) => ({
     commentId: ref.id,
     isApproved,
     slug,
     date,
     name,
     comment,
   }));
 },
...

The breakdown of this function is as follows:

  • client is the FaunaDB client instance
  • query is a method to get data from FaunaDB
  • q is the instance of faunadb.query
  • Paginate paginates the responses
  • Match returns matched results
  • Index is the name of the Index you just created

The shape of the returned result here is an array of the same shape you defined in the Index “values”.

If you run the following you should see the list of all the comments you created earlier:

node boop getAllComments

Get comments by slug

You’re going to take a similar approach as above but this time create a new Index that allows you to query FaunaDB in a different way. The key difference here is that when you get-comments-by-slug you’ll need to tell FaunaDB about this specific term and you can do this by adding data.slug to the Terms field.

  • Go to INDEX and click NEW INDEX
  • Name the index, I’ve called this one get-comments-by-slug
  • Set the source collection to the name of the collection you set up earlier
  • Add data.slug in the terms field

The values you need to add are as follows:

  • ref
  • data.isApproved
  • data.slug
  • data.date
  • data.name
  • data.comment

After you’ve added all the values you can click SAVE.

// boop.js
...
// GET COMMENT BY SLUG
getCommentsBySlug: async () => {
  const slug = "/posts/some-post";
  const results = await client.query(
    q.Paginate(q.Match(q.Index("get-comments-by-slug"), slug))
  );
  console.log(JSON.stringify(results, null, 2));
  return results.data.map(([ref, isApproved, slug, date, name, comment]) => ({
    commentId: ref.id,
    isApproved,
    slug,
    date,
    name,
    comment,
  }));
},
...

The breakdown of this function is as follows:

  • client is the FaunaDB client instance
  • query is a method to get data from FaunaDB
  • q is the instance of faunadb.query
  • Paginate paginates the responses
  • Match returns matched results
  • Index is the name of the Index you just created

The shape of the returned result here is an array of the same shape you defined in the Index “values”. You can create this shape in the same way you did above, but be sure to add a value for terms. Again, be careful to enter these exactly.

If you run the following you should see the list of all the comments you created earlier but for a specific slug:

node boop getCommentsBySlug

Approve comment by id

When you create a comment you manually set the isApproved value to false. This prevents the comment from being shown in the app until you approve it.

You’ll now need to create a function to do this but you’ll need to hard-code a commentId. Use a commentId from one of the comments you created earlier:

// boop.js
...
// APPROVE COMMENT BY ID
approveCommentById: async () => {
  const commentId = '263413122555970050'
  const results = await client.query(
    q.Update(q.Ref(q.Collection(COLLECTION_NAME), commentId), {
      data: {
        isApproved: true,
      },
    })
  );
  console.log(JSON.stringify(results, null, 2));
  return {
    isApproved: results.data.isApproved,
  };
},
...

The breakdown of this function is as follows:

  • client is the FaunaDB client instance
  • query is a method to get data from FaunaDB
  • q is the instance of faunadb.query
  • Update is the FaunaDB method to update an entry
  • Ref is the unique FaunaDB ref used to identify the entry
  • Collection is the area in the database where the data is stored

If you’ve hard coded a commentId you can now run the following in your CLI:

node boop approveCommentById

If you run getCommentsBySlug again, you should now see that the isApproved status of the entry whose commentId you hard-coded has changed to true.

node boop getCommentsBySlug

These are all the operations required to manage the data from the app.

In the repo, if you have a look at apollo-graphql.js (found in functions/apollo-graphql), you’ll see all of the above operations. As mentioned before, the hard-coded values are replaced by args, the values passed in from various parts of the app.

Netlify

Assuming you’ve completed the Netlify sign up process or already have an account with Netlify you can now push the demo app to your GitHub account.

To do this you’ll need to have initialized git locally, added a remote, and pushed the demo repo upstream before proceeding.

You should now be able to link the repo up to Netlify’s Continuous Deployment.

If you click the “New site from Git” button on the Netlify dashboard, you can authorize access to your GitHub account and select the gatsby-fauna-comments repo to enable Netlify’s Continuous Deployment. You’ll need to have deployed at least once so that you have a public URL for your app.

The URL will look something like this: https://ecstatic-lewin-b1bd17.netlify.app. Feel free to rename it, and make a note of the URL as you’ll need it for the Netlify Identity step mentioned shortly.

Environment Variables Pt. 2

In a previous step you added the FaunaDB database secret key and collection name to your .env file(s). You’ll also need to add the same to Netlify’s environment variables.

  • Navigate to Settings from the Netlify navigation
  • Click on Build and deploy
  • Either select Environment or scroll down until you see Environment variables
  • Click on Edit variables

Proceed to add the following:

GATSBY_SHOW_SIGN_UP = false
GATSBY_FAUNA_DB = your FaunaDB secret key
GATSBY_FAUNA_COLLECTION = your FaunaDB collection name

While you’re here, you’ll also need to amend the Sensitive variable policy: select Deploy without restrictions.

Netlify Identity Widget

I mentioned before that when a comment is created, the isApproved value is set to false. This prevents comments from appearing on blog posts until you (the admin) have approved them. In order to become admin, you’ll need to create an identity.

You can achieve this by using the Netlify Identity Widget.

If you’ve completed the Continuous Deployment step above you can navigate to the Identity page from the Netlify navigation.

You won’t see any users in here just yet, so let’s use the app to connect the dots. But before you do that, make sure you click Enable Identity.

Before you continue, I just want to point out that you’ll be using netlify dev instead of gatsby develop or yarn develop from now on. This is because you’ll be using some “special” Netlify methods in the app, and starting the server with netlify dev is required to spin up the various processes you’ll be using.

  • Spin up the app using netlify dev
  • Navigate to http://localhost:8888/admin/
  • Click the Sign Up button in the header

You will also need to point the Netlify Identity widget at your newly deployed app URL. This is the URL I mentioned you should make a note of earlier; if you’ve not renamed your app, it’ll look something like this: https://ecstatic-lewin-b1bd17.netlify.app/. There will be a prompt in the pop-up window to Set site’s URL.

You can now complete the necessary sign up steps.

After sign-up you’ll get an email asking you to confirm your identity; once that’s completed, refresh the Identity page in Netlify and you should see yourself as a user.

It’s now login time, but before you do this, find Identity.js in src/components and temporarily un-comment the console.log() on line 14. This will log the Netlify Identity user object to the console.

  • Restart your local server
  • Spin up the app again using netlify dev
  • Click the Login button in the header

If this all works you should be able to see a console log for netlifyIdentity.currentUser: find the id key and copy the value.

Set this as the value for GATSBY_ADMIN_ID = in both .env.production and .env.development

You can now safely remove the console.log() on line 14 in Identity.js or just comment it out again.

GATSBY_ADMIN_ID = your Netlify Identity user id

…and finally

  • Restart your local server
  • Spin up the app again using netlify dev

Now you should be able to login as “Admin”… hooray!

Navigate to http://localhost:8888/admin/ and Login.

It’s important to note that you’ll be using localhost:8888 for development now, and NOT localhost:8000, which is more common with Gatsby development.

Before you test this in the deployed environment, make sure you go back to Netlify’s environment variables and add your Netlify Identity user id there!

  • Navigate to Settings from the Netlify navigation
  • Click on Build and deploy
  • Either select Environment or scroll down until you see Environment variables
  • Click on Edit variables

Proceed to add the following:

GATSBY_ADMIN_ID = your Netlify Identity user id

If you have a play around with the app and enter a few comments on each of the posts, then navigate back to the Admin page, you can choose to either approve or delete the comments.

Naturally, only approved comments will be displayed on any given post, and deleted ones are gone forever.

If you’ve used this tutorial for your project I’d love to hear from you at @pauliescanlon.


By Paulie Scanlon (@pauliescanlon), Front End React UI Developer / UX Engineer: After all is said and done, structure + order = fun.

Visit Paulie’s Blog at: www.paulie.dev






New Database UI and TypeScript & Flutter Support


After six long weeks of work since the last Appwrite release, and about 250 commits, Appwrite 0.6 is out with TypeScript support, a new Flutter integration, a new database UI, and many more features and improvements.

If you haven’t heard about Appwrite before, it’s an open-source BaaS (backend-as-a-service) that abstracts a lot of the complexity and repetitiveness required when building an API from scratch. The server comes packaged as a set of Docker containers you can host anywhere really quickly, and it has lots of built-in security features.

You can read the full announcement on Medium: https://medium.com/@eldadfux/introducing-appwrite-0-6-with-flutter-support-1eb4dce820f3?sk=8f3b0ff0446fdb667d31d558bc540456 or you can watch the online announcement at the live meetup we held yesterday: https://www.youtube.com/watch?v=KNQzncq10CI

If you think Appwrite can be a good fit for your next project, you can learn more about it at the official website or on the GitHub repository:

https://appwrite.io
https://github.com/appwrite/appwrite





How to Make Taxonomy Pages With Gatsby and Sanity.io


In this tutorial, we’ll cover how to make taxonomy pages with Gatsby using structured content from Sanity.io. You will learn how to use Gatsby’s Node creation APIs to add fields to your content types in Gatsby’s GraphQL API. Specifically, we’re going to create category pages for Sanity’s blog starter.

That being said, there is nothing Sanity-specific about what we’re covering here. You’re able to do this regardless of which content source you may have. We’re just reaching for Sanity.io for the sake of demonstration.

Get up and running with the blog

If you want to follow this tutorial with your own Gatsby project, go ahead and skip to the section for creating a new page template in Gatsby. If not, head over to sanity.io/create and launch the Gatsby blog starter. It will put the code for Sanity Studio and the Gatsby front-end in your GitHub account and set up the deployment for both on Netlify. All the configuration, including example content, will be in place so that you can dive right into learning how to create taxonomy pages.

Once the project is initiated, make sure to clone the new repository from GitHub to your local machine and install the dependencies:

git clone git@github.com:username/your-repository-name.git
cd your-repository-name
npm i

If you want to run both Sanity Studio (the CMS) and the Gatsby front-end locally, you can do so by running the command npm run dev in a terminal from the project root. You can also cd into the web folder and just run Gatsby with the same command.

You should also install the Sanity CLI and log in to your account from the terminal: npm i -g @sanity/cli && sanity login. This will give you tooling and useful commands to interact with Sanity projects. You can add the --help flag to get more information on its functionality and commands.

We will be doing some customization to the gatsby-node.js file. To see the result of the changes, restart Gatsby’s development server. This is done in most systems by hitting CTRL + C in the terminal and running npm run dev again.

Getting familiar with the content model

Look into the /studio/schemas/documents folder. There are schema files for our main content types: author, category, site settings, and posts. Each of the files exports a JavaScript object that defines the fields and properties of these content types. Inside of post.js is the field definition for categories:

{
  name: 'categories',
  type: 'array',
  title: 'Categories',
  of: [
    {
      type: 'reference',
      to: {
        type: 'category'
      }
    }
  ]
},

This will create an array field with reference objects to category documents. Inside of the blog’s studio it will look like this:

An array field with references to category documents in the blog studio

Adding slugs to the category type

Head over to /studio/schemas/documents/category.js. There is a simple content model for a category that consists of a title and a description. Now that we’re creating dedicated pages for categories, it would be handy to have a slug field as well. We can define that in the schema like this:

// studio/schemas/documents/category.js
export default {
  name: 'category',
  type: 'document',
  title: 'Category',
  fields: [
    {
      name: 'title',
      type: 'string',
      title: 'Title'
    },
    {
      name: 'slug',
      type: 'slug',
      title: 'Slug',
      options: {
        // add a button to generate slug from the title field
        source: 'title'
      }
    },
    {
      name: 'description',
      type: 'text',
      title: 'Description'
    }
  ]
}

Now that we have changed the content model, we need to update the GraphQL schema definition as well. Do this by executing npm run graphql-deploy (alternatively: sanity graphql deploy) in the studio folder. You will get warnings about breaking changes, but since we are only adding a field, you can proceed without worry. If you want the field to be accessible in your studio on Netlify, check the changes into git (with git add . && git commit -m "add slug field") and push them to your GitHub repository (git push origin master).

Now we should go through the categories and generate slugs for them. Remember to hit the publish button to make the changes accessible for Gatsby! And if you were running Gatsby’s development server, you’ll need to restart that too.

Quick sidenote on how the Sanity source plugin works

When starting Gatsby in development or building a website, the source plugin will first fetch the GraphQL schema definitions from Sanity’s deployed GraphQL API. The source plugin uses this to tell Gatsby which fields should be available, preventing it from breaking if the content for certain fields happens to disappear. Then it will hit the project’s export endpoint, which streams all the accessible documents to Gatsby’s in-memory datastore.

In other words, the whole site is built with two requests. Running the development server will also set up a listener that pushes whatever changes come from Sanity to Gatsby in real time, without additional API queries. If we give the source plugin a token with permission to read drafts, we’ll see the changes instantly. This can also be experienced with Gatsby Preview.

Adding a category page template in Gatsby

Now that we have the GraphQL schema definition and some content ready, we can dive into creating category page templates in Gatsby. We need to do two things:

  • Tell Gatsby to create pages for the category nodes (that is Gatsby’s term for “documents”).
  • Give Gatsby a template file to generate the HTML with the page data.

Begin by opening the /web/gatsby-node.js file. Code will already be here that can be used to create the blog post pages. We’ll largely leverage this exact code, but for categories. Let’s take it step-by-step:

Between the createBlogPostPages function and the line that starts with exports.createPages, we can add the following code. I’ve put in comments here to explain what’s going on:

// web/gatsby-node.js

// ...

async function createCategoryPages (graphql, actions) {
  // Get Gatsby‘s method for creating new pages
  const {createPage} = actions
  // Query Gatsby‘s GraphAPI for all the categories that come from Sanity
  // You can query this API on http://localhost:8000/___graphql
  const result = await graphql(`{
    allSanityCategory {
      nodes {
        slug {
          current
        }
        id
      }
    }
  }
  `)
  // If there are any errors in the query, cancel the build and tell us
  if (result.errors) throw result.errors

  // Let's gracefully handle if allSanityCategory is null
  const categoryNodes = (result.data.allSanityCategory || {}).nodes || []

  categoryNodes
    // Loop through the category nodes, but don't return anything
    .forEach((node) => {
      // Destructure the id and slug fields for each category
      const {id, slug = {}} = node
      // If there isn't a slug, we want to do nothing
      if (!slug) return

      // Make the URL with the current slug
      const path = `/categories/${slug.current}`

      // Create the page using the URL path and the template file, and pass down the id
      // that we can use to query for the right category in the template file
      createPage({
        path,
        component: require.resolve('./src/templates/category.js'),
        context: {id}
      })
    })
}

Last, this function is needed at the bottom of the file:

// /web/gatsby-node.js

// ...

exports.createPages = async ({graphql, actions}) => {
  await createBlogPostPages(graphql, actions)
  await createCategoryPages(graphql, actions) // <= add the function here
}

Now that we have the machinery to create the category page node in place, we need to add a template for how it actually should look in the browser. We’ll base it on the existing blog post template to get some consistent styling, but keep it fairly simple in the process.

// /web/src/templates/category.js
import React from 'react'
import {graphql} from 'gatsby'
import Container from '../components/container'
import GraphQLErrorList from '../components/graphql-error-list'
import SEO from '../components/seo'
import Layout from '../containers/layout'

export const query = graphql`
  query CategoryTemplateQuery($id: String!) {
    category: sanityCategory(id: {eq: $id}) {
      title
      description
    }
  }
`
const CategoryPostTemplate = props => {
  const {data = {}, errors} = props
  const {title, description} = data.category || {}

  return (
    <Layout>
      <Container>
        {errors && <GraphQLErrorList errors={errors} />}
        {!data.category && <p>No category data</p>}
        <SEO title={title} description={description} />
        <article>
          <h1>Category: {title}</h1>
          <p>{description}</p>
        </article>
      </Container>
    </Layout>
  )
}

export default CategoryPostTemplate

We are using the ID that was passed into the context in gatsby-node.js to query the category content. Then we use it to query the title and description fields on the category type. Make sure to restart with npm run dev after saving these changes, and head over to localhost:8000/categories/structured-content in the browser. The page should look something like this:

A barebones category page with a site title, Archive link, page title, dummy content, and a copyright in the footer.

Cool stuff! But it would be even cooler if we could actually see which posts belong to this category, because, well, that’s kinda the point of having categories in the first place, right? Ideally, we should be able to query for a “posts” field on the category object.

Before we learn how to do that, we need to take a step back to understand how Sanity’s references work.

Querying Sanity’s references

Even though we’re only defining the references in one type, Sanity’s datastore will index them “bi-directionally.” That means creating a reference to the “Structured content” category document from a post lets Sanity know that the category has these incoming references, and it will keep you from deleting the category as long as the reference exists (references can be set as “weak” to override this behavior). If we use GROQ, we can query categories and join the posts that reference them like this (see the query and result in action on groq.dev):

*[_type == "category"]{
  _id,
  _type,
  title,
  "posts": *[_type == "post" && references(^._id)]{
    title,
    slug
  }
}
// alternative: *[_type == "post" && ^._id in categories[]._ref]{

This outputs a data structure that lets us make a simple category post template:

[
  {
    "_id": "39d2ca7f-4862-4ab2-b902-0bf10f1d4c34",
    "_type": "category",
    "title": "Structured content",
    "posts": [
      {
        "title": "Exploration powered by structured content",
        "slug": {
          "_type": "slug",
          "current": "exploration-powered-by-structured-content"
        }
      },
      {
        "title": "My brand new blog powered by Sanity.io",
        "slug": {
          "_type": "slug",
          "current": "my-brand-new-blog-powered-by-sanity-io"
        }
      }
    ]
  },
  // ... more entries
]

That’s fine for GROQ, but what about GraphQL?

Here’s the kicker: as of yet, this kind of query isn’t possible with Gatsby’s GraphQL API out of the box. But fear not! Gatsby has a powerful API for changing its GraphQL schema that lets us add fields.

Using createResolvers to edit Gatsby’s GraphQL API

Gatsby holds all the content in memory when it builds your site and exposes some APIs that let us tap into how it processes this information. Among these are the Node APIs. It’s probably good to clarify that a “node” in Gatsby is not to be confused with Node.js. The creators of Gatsby borrowed “edges and nodes” from graph theory, where “edges” are the connections between the “nodes”, the “points” where the actual content is located. Since an edge is a connection between nodes, it can have “next” and “previous” properties.

The edges with next and previous, and the node with fields in GraphQL’s API explorer

The Node APIs are used by plugins first and foremost, but they can be used to customize how our GraphQL API works as well. One of these APIs is called createResolvers. It’s fairly new, and it lets us tap into how a type’s nodes are created so we can run queries that add data to them.

Let’s use it to add the following logic:

  • Check for nodes with the SanityCategory type when the nodes are created.
  • If a node matches this type, create a new field called posts and set it to the SanityPost type.
  • Then run a query that filters for all posts that list a category matching the current category’s ID.
  • If there are matching IDs, add the content of those post nodes to this field.

Add the following code to the /web/gatsby-node.js file, either below or above the code that’s already in there:

// /web/gatsby-node.js
// Notice the capitalized type names
exports.createResolvers = ({createResolvers}) => {
  const resolvers = {
    SanityCategory: {
      posts: {
        type: ['SanityPost'],
        resolve (source, args, context, info) {
          return context.nodeModel.runQuery({
            type: 'SanityPost',
            query: {
              filter: {
                categories: {
                  elemMatch: {
                    _id: {
                      eq: source._id
                    }
                  }
                }
              }
            }
          })
        }
      }
    }
  }
  createResolvers(resolvers)
}

Now, let’s restart Gatsby’s development server. We should be able to find a new field for posts inside the sanityCategory and allSanityCategory types.

A GraphQL query for categories with the category title and the titles of the belonging posts

Adding the list of posts to the category template

Now that we have the data we need, we can return to our category page template (/web/src/templates/category.js) and add a list with links to the posts belonging to the category.

// /web/src/templates/category.js
import React from 'react'
import {graphql, Link} from 'gatsby'
import Container from '../components/container'
import GraphQLErrorList from '../components/graphql-error-list'
import SEO from '../components/seo'
import Layout from '../containers/layout'
// Import a function to build the blog URL
import {getBlogUrl} from '../lib/helpers'

// Add “posts” to the GraphQL query
export const query = graphql`
  query CategoryTemplateQuery($id: String!) {
    category: sanityCategory(id: {eq: $id}) {
      title
      description
      posts {
        _id
        title
        publishedAt
        slug {
          current
        }
      }
    }
  }
`
const CategoryPostTemplate = props => {
  const {data = {}, errors} = props
  // Destructure the new posts property from props
  const {title, description, posts} = data.category || {}

  return (
    <Layout>
      <Container>
        {errors && <GraphQLErrorList errors={errors} />}
        {!data.category && <p>No category data</p>}
        <SEO title={title} description={description} />
        <article>
          <h1>Category: {title}</h1>
          <p>{description}</p>
          {/*
            If there are any posts, add the heading,
            with the list of links to the posts
          */}
          {posts && (
            <React.Fragment>
              <h2>Posts</h2>
              <ul>
                { posts.map(post => (
                  <li key={post._id}>
                    <Link to={getBlogUrl(post.publishedAt, post.slug)}>{post.title}</Link>
                  </li>))
                }
              </ul>
            </React.Fragment>)
          }
        </article>
      </Container>
    </Layout>
  )
}

export default CategoryPostTemplate

This code will produce a simple category page with a list of linked posts, just like we wanted!

The category page with the category title and description, as well as a list of its posts

Go make taxonomy pages!

We just completed the process of creating new page types with custom page templates in Gatsby. We covered one of Gatsby’s Node APIs, createResolvers, and used it to add a new posts field to the category nodes.

This should give you what you need to make other types of taxonomy pages! Do you have multiple authors on your blog? Well, you can use the same logic to create author pages. The interesting thing about the GraphQL filter is that you can use it to go beyond the explicit relationships made with references. It can also be used to match other fields using regular expressions or string comparisons. It’s fairly flexible!




V8 JavaScript Engine: The Non-Stop Improvement


V8 is not only the famous eight-cylinder engine you can find in a Dodge Charger, a Bentley Continental GT, or Boss Hoss motorcycles. In 2008, the Chromium Project developers released a new JavaScript and WebAssembly engine with the same name, V8: a groovy reference to that engineering marvel. So one more vee-eight engine was born.

One of the interesting properties of JavaScript, and the reason why V8 is being used today, is that it’s platform-independent.

Lars Bak, Danish programmer, tech lead of the V8 project

What Is a JavaScript Engine

In short, JS engines are programs that convert JavaScript code into low-level or machine code. They follow the ECMAScript standards that define the language’s features and execution process.

Just as the automotive V8 was a remarkable piece of machinery, the V8 JS engine found a niche for itself. Most likely you have already “met” vee-eight face-to-face: as a part of Chrome, this engine runs the JavaScript when you visit a web page. In other words, V8 provides the runtime environment for JS, while the web-platform APIs (Application Programming Interfaces) are provided by the browser. Beyond browsers, V8 is embedded in server-side technologies such as Node.js, MongoDB, and Couchbase.

V8 is written in C++ and can run standalone or be embedded into a C++ app.

It is portable and runs on:

  • Windows 7 or later
  • macOS 10.12+
  • Linux systems using x64, IA-32, MIPS, and ARM processors

Besides V8, the famous JavaScript engines include:

  • SpiderMonkey – Firefox
  • JavaScriptCore (Nitro) – Safari
  • Chakra JS – Microsoft Edge

It doesn’t matter if you run it in the browser or Node.js or an IoT device: to go from something that you write to executing that – that’s what the engines are doing. JS engines are the heart of everything that we do.

Franziska Hinkelmann, Senior Engineer at Google

Prehistory: Let’s Start the Engine

JavaScript is the most popular scripting language for the web today, and JS modules are supported in all major browsers. It’s a great achievement that V8 is independent of the browser in which it’s hosted. How did this happen?

This open-source JS engine came into being with the Chromium Project for Google Chrome and other Chromium-based browsers. Lars Bak, a Danish programmer, was the project’s creator, and he’s the one who led the V8 “engine room” team. This man is a true virtual machine expert and a guru of object-oriented design. By the way, Lars Bak spent 30 years developing programming languages. Once upon a time, he implemented a runtime system for BETA. Since then, Mr. Bak has left his mark on an impressive list of software systems before finally getting to V8. How did it happen?

Autumn 2006: Google hired Lars Bak to build a new JavaScript engine for the Chrome browser. The team focused on building the fastest JS runtime in the world – a real feat for such a dynamic, loosely typed language. The new JS runtime was named “V8”, an allusion to the famous, powerful muscle-car engine.

V8 engine graphic

Supported and financed by Google, the V8 engine today powers a huge amount of server-side JS code.

P.S.: After V8, Lars Bak went on to create Dart and Toit, and he received the Dahl-Nygaard Prize in 2018. Well, that track record sounds impressive!

What Goes on Under the Hood

It is interesting that V8’s subprocesses are named after automotive parts. That is not only a stylish branding idea; it’s also a good way for users to gain insight into the JS engine’s behavior.

I like how they changed the names of the processing of the V8 engine to stuff like “ignition” and “turbofan”. It’s easier to remember because it’s like a car engine now.

Ksee, YouTube user

And what exactly happens to JavaScript when it is parsed by V8?

In basic terms, the JS engine:

1. Takes your “fuel” – the source code
2. The parser generates an abstract syntax tree from the source
3. V8’s interpreter generates bytecode from the syntax tree that the compiler can understand
4. V8’s compiler generates a graph from the bytecode, replacing bytecode sections with optimized machine code
5. And, ta-dah – execution!

How V8 works graphic
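To make this pipeline tangible, here is a small experiment you can try, assuming you have Node.js (which embeds V8) installed; the flags shown are V8 debugging flags and can vary between versions:

// add.js – a tiny function for V8 to chew on
function add(a, b) {
  return a + b;
}
console.log(add(1, 2));

// Running: node --print-bytecode --print-bytecode-filter=add add.js
// dumps the Ignition bytecode generated for `add` (step 3 above) –
// the intermediate form that sits between your source and the
// optimized machine code produced by the compiler.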

And what makes the code run so fast? Let’s consider some of V8’s interesting characteristics.

JS is usually perceived as an interpreted language, but its modern engines are much more than interpreters, in order to achieve more performant execution. The basis that allows V8 to execute JS at high speed is the JIT (Just-In-Time) compiler, which optimizes code during runtime rather than Ahead-Of-Time. It combines the best features of interpreters and compilers, mixing these steps to make translation and execution faster.
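As a rough illustration of what “optimizing during runtime” means in practice, consider the sketch below: V8 watches hot functions and speculates on the shapes of the values they receive, so call sites that always see the same object shape can stay on the fast, compiled path (the exact heuristics are internal to the engine):

// JIT-friendly: `getX` always receives objects with the same
// shape ({ x, y }), so the engine can specialize the property access
function getX(point) {
  return point.x;
}

for (let i = 0; i < 100000; i++) {
  getX({ x: i, y: i + 1 }); // one consistent shape – easy to optimize
}

// Mixing shapes at the same call site, e.g. getX({ x: 1 }) and
// getX({ y: 2, x: 1 }), pushes the engine toward slower, generic paths.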

V8’s first compiler was “Full-codegen”; the newest and most advanced optimizing compiler is “TurboFan”. TurboFan’s backend is also used by V8’s low-level, register-based interpreter, called “Ignition”. This combined Ignition + TurboFan pipeline was launched in 2017.

In 2018, Liftoff was released – V8’s first-tier compiler for WebAssembly (Wasm), built for the fast startup of complex websites with big Wasm modules, such as Google Earth.

  • Keep calm and maintain cleanliness

Over the past years, V8’s developers have worked a lot on improving the garbage-collection process. Finally, they implemented a two-generation garbage collector called “Orinoco”. It applies the latest effective techniques to keep the main thread free: the collector finds objects and data that are no longer referenced and reclaims them. This contributes to much-improved latency and page-load times, and to smoother animation, scrolling, and user interaction.
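In JavaScript terms, “no longer referenced” simply means unreachable from your running code. A minimal sketch:

// While `cache` points at the object, the GC must keep it alive
let cache = { data: new Array(1000000).fill(0) };

cache = null;
// The big array is now unreachable, so the garbage collector is free
// to reclaim it on a future collection cycle – no manual freeing required.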

V8 also has an efficient memory-management system at its disposal. It allows fast allocation with minimal overhead while running JavaScript, which means fewer latency hiccups when using JS inside the browser.

In 2018, the Chromium team started a project called V8 Lite. The main aim was to dramatically reduce memory usage.

Originally, it was intended as a Lite mode for low-memory devices and embedded use cases, but the developers soon decided to bring those optimizations into regular V8, benefiting all of vee-eight’s usage areas. You can consult the technical details of the memory savings and execution-speed improvements in V8’s official blog.

The main goal of engine development is to make JavaScript run as fast as possible. The developers emphasize that one of the crucial tasks is building a distributed system in which any individual unit can be shut down while the rest of the units take over its functionality. That makes the system more robust. It can be compared to cloud architecture, where a single device can crash while the overall system keeps running smoothly.

Our philosophy is that if you make a quick feedback loop, from programming to receiving feedback from the running system, in under a second, it inspires the programmer to experiment and invent new things.

Lars Bak, Danish programmer and tech lead of the V8 project

Vroom Vroom: Drive on!

The famous Peter Drucker quote, “The overwhelming majority of successful innovations exploit change”, is especially relevant in the world of JavaScript. Every six weeks, the Chromium team creates a new branch of the V8 engine as part of their release process. You can check the news here. The newest version, V8 v8.1, was released on February 25th, 2020.

All Roads Lead to Chrome

All of the v8.1 highlights are especially enjoyable in anticipation of the new Chrome release. Chrome 80 Stable was released on February 4th, 2020, as reported by Chromium. So let’s check out the bug fixes and performance improvements and drive on with V8 JS in 2020!





Can the company force me to be full-stack if I was hired as …


Hey guys,

A year ago, I was hired as a junior developer at a mid-to-large software company in the UK. The job title on the advert I applied for was “Junior Front-End Developer”, and these were the responsibilities:

Job spec image

When I was hired, my official role within the company was “Junior Developer”. When I started, I was immediately pushed into full-stack work without any explanation; my workload was approximately half front-end and half back-end. However, I didn’t say anything and just kinda dealt with it, as the pay was way above what 90% of the companies were offering at the time.

I started being more vocal about the role and my career interests in the past couple of months, since I’ve “established” myself pretty well. However, it hasn’t been going as I anticipated. Today I received an ultimatum: either accept that “the role has evolved to meet company needs” or be put on an HR Performance Improvement Program.

So as I understand it, it’s a choice between being forced to go full-stack or getting fired? How does this work in terms of legal stuff?





How to Implement File Upload in Angular



Uploading files is an integral part of most projects. However, when considering a file-upload method, you should carefully assess the needs of your project. You can implement uploads manually using building blocks such as FormData, HttpClientModule, and reactive forms, each of which serves a different purpose.

In this article, you will learn about popular Angular components for file upload, including a quick tutorial on how to implement file upload in Angular 9.

What Is Angular?

Angular is a development platform and framework that you can use to create single-page applications in JavaScript (JS) or TypeScript (TS). The framework is written in TS and is implemented through libraries that you can import into your projects. 

The basic structure of Angular is based on NgModules, collections of components organized into functional sets. These modules enable you to define your Angular applications and integrate various components. Each application you create in Angular has a root module, which enables bootstrapping, and however many feature modules you need. 

Within each module are components. These components define the views that are available for use in your application. Views are sets of screen elements that you can apply code to. Additionally, components include services. Services are classes that provide functionality and enable you to create efficient modular applications.

When you use components and the services inside, you are reliant on metadata to define types and usage. The metadata is what associates components with view templates, combining HTML with Angular directives and markup. This then enables you to modify HTML before rendering. Metadata is also how Angular makes services available via dependency injection.

Angular Components for File Upload

Within Angular, there are several components that you can use to achieve file uploads manually. Depending on how you want to use uploads, you may need to modify your use of these methods or you may be able to simply adopt pre-built file upload methods. For example, if you are using a digital asset management tool, many solutions provide methods you can add easily. 

Below are the elements commonly used to accomplish file uploads with Angular.

FormData

FormData is an object you can use to store key-value pairs. It enables you to build an object that aligns with an HTML form. This functionality allows you to send data, such as file uploads, to REST API endpoints via HTTP client libraries or the XMLHttpRequest interface. 

To create a FormData object you can use the following:
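For example, a minimal sketch – the 'file' key and the selectors are placeholders for your own markup:

// Build a FormData object by hand and append a file under a key
const formData = new FormData();
const fileInput = document.querySelector('input[type="file"]') as HTMLInputElement;
const file = fileInput.files && fileInput.files[0];
if (file) {
  formData.append('file', file, file.name);
}

// Or collect every field from an existing HTML form in one go
const formElement = document.querySelector('form') as HTMLFormElement;
const formDataFromForm = new FormData(formElement);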

This method enables you to directly add key-values or to collect data from your existing HTML form. 

HttpClientModule

HttpClientModule is a module that contains an API you can use to send and obtain data for your application from HTTP servers. It is based on the XMLHttpRequest interface. Its features let you avoid manually extracting JSON data and use interceptors to modify request headers.

You can import this module into your root application module (src/app/app.module.ts) with the following:
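A minimal sketch of the registration; everything except HttpClientModule is illustrative:

import { NgModule } from '@angular/core';
import { HttpClientModule } from '@angular/common/http';

@NgModule({
  imports: [
    // ...your other modules
    HttpClientModule
  ]
})
export class AppModule { }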

Reactive forms

Reactive forms enable you to use a model-driven approach for handling form inputs with changing values. With these forms, you can use multiple controls in a form group, validate form values, and construct forms in which you can dynamically modify controls. This is possible because form data is returned as an immutable, observable stream rather than a mutable data point as with template-driven forms.

You can import this module with the following:
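A minimal sketch, shown here added to the root module:

import { NgModule } from '@angular/core';
import { ReactiveFormsModule } from '@angular/forms';

@NgModule({
  imports: [
    // ...your other modules
    ReactiveFormsModule
  ]
})
export class AppModule { }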

How to Implement File Upload in Angular 9: Quick Tutorial

If you’re ready to try implementing file uploads in your Angular application, you can follow the tutorial below, which uses FormData and HttpClientModule. This tutorial is adapted from a longer tutorial by Ahmed Bouchefra.

To get started with this tutorial, you’ll need the following:
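  • Node.js and npm installed on your machine
  • The Angular CLI, which you can install globally with npm install -g @angular/cli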

1. Create New App and Start Development Server

To get started, you need to first create an application to handle uploads with. You can create a new application by entering the following into your terminal:
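For example (the project name angular-upload-demo is a placeholder you can change):

ng new angular-upload-demo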

When you create this, you need to specify whether to add Angular routing (yes) and your stylesheet format (CSS).

Next, you need to start a local development server from your terminal:
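Assuming the project name above:

cd angular-upload-demo
ng serve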

This will start a server and return the local host address. Open the returned site in your browser to see your application.

2. Set up HttpClientModule

Initialize your project through the Angular CLI and import the HttpClientModule. To do this, you need to open your src/app/app.module.ts file. You can do this with the following:
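A sketch of the resulting file, assuming the default files generated by ng new:

// src/app/app.module.ts
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { HttpClientModule } from '@angular/common/http';

import { AppRoutingModule } from './app-routing.module';
import { AppComponent } from './app.component';

@NgModule({
  declarations: [AppComponent],
  imports: [BrowserModule, AppRoutingModule, HttpClientModule],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }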

3. Add Control Components and UI

To add UI control components, you need to create home and about components. You can add these in the terminal with the following:
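ng generate component home
ng generate component about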

To finish the UI, you can either create components manually or use additional modules like Angular Material. Whichever method you choose, you need to at least define your uploadFile() method and provide a button or submission method for your user.

You then need to add your components to your router in the src/app/app-routing.module.ts file:
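A minimal sketch of the routing module – the paths are illustrative:

// src/app/app-routing.module.ts
import { NgModule } from '@angular/core';
import { Routes, RouterModule } from '@angular/router';
import { HomeComponent } from './home/home.component';
import { AboutComponent } from './about/about.component';

const routes: Routes = [
  { path: '', component: HomeComponent },
  { path: 'about', component: AboutComponent }
];

@NgModule({
  imports: [RouterModule.forRoot(routes)],
  exports: [RouterModule]
})
export class AppRoutingModule { }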

4. Create Your Upload Service

First, you need to create your service with:
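For example:

ng generate service upload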

In the src/app/upload.service.ts file, add your imports and inject your HTTP client:
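A sketch along these lines; the endpoint URL is a placeholder for your own upload server:

// src/app/upload.service.ts
import { Injectable } from '@angular/core';
import { HttpClient, HttpEvent, HttpRequest } from '@angular/common/http';
import { Observable } from 'rxjs';

@Injectable({ providedIn: 'root' })
export class UploadService {
  // Placeholder – point this at your real upload endpoint
  private serverUrl = 'https://example.com/upload';

  constructor(private http: HttpClient) { }
}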

You also need to add your upload method which allows you to call the post method and send data to your upload server. 
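For example, a hedged sketch of such a method, added inside the UploadService class, using FormData and an HttpRequest with progress reporting turned on:

// Posts the file as multipart/form-data and emits upload events
upload(file: File): Observable<HttpEvent<any>> {
  const formData = new FormData();
  formData.append('file', file, file.name);

  const req = new HttpRequest('POST', this.serverUrl, formData, {
    reportProgress: true
  });
  return this.http.request(req);
}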

5. Define Your Upload Method

Once your service is created you need to define your upload method and add error handling. This is also where you can add progress bar elements and change your UI styling if you wish. 

In the src/app/home/home.component.ts file, add your imports.
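For instance, matching the files generated earlier:

// src/app/home/home.component.ts
import { Component } from '@angular/core';
import { HttpEventType } from '@angular/common/http';
import { UploadService } from '../upload.service';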

Now you can define your method and variables, and inject your upload service.

To enable users to submit files, you should also define an onClick() method to be tied to your submit button. 
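Putting the last two steps together, a minimal sketch might look like this – uploadProgress, the template reference passed into onClick(), and the error handling are all illustrative choices, not fixed APIs:

@Component({
  selector: 'app-home',
  templateUrl: './home.component.html',
  styleUrls: ['./home.component.css']
})
export class HomeComponent {
  uploadProgress = 0; // drives an optional progress bar in the template

  constructor(private uploadService: UploadService) { }

  // Tie this to your submit button, passing a template reference
  // to the <input type="file"> element, e.g. (click)="onClick(fileInput)"
  onClick(fileInput: HTMLInputElement): void {
    const file = fileInput.files && fileInput.files[0];
    if (!file) {
      return;
    }

    this.uploadService.upload(file).subscribe(
      event => {
        if (event.type === HttpEventType.UploadProgress && event.total) {
          this.uploadProgress = Math.round((100 * event.loaded) / event.total);
        }
      },
      error => console.error('Upload failed:', error)
    );
  }
}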

You can now test your application via the local browser to ensure that it functions as expected. 

Conclusion

Hopefully, you now have enough information to experiment with Angular file-upload components. If you’re new to the process, make sure to experiment in a test environment, or any place where you can safely learn from your mistakes. Continue experimenting with and learning different approaches until you find the mix of methods that works best for you.


