What’s the Backup Plan for Your WordPress Site?

One of the reasons we love and use Jetpack for CSS-Tricks—a poster child WordPress site—is that we can sleep easy at night knowing we have real-time backups running with Jetpack Backup. That way, no matter what, everything that makes this site tick, from all the template files to every single word we’ve ever typed, is always a click away from being restored if we need it for any reason at all.

There’s really no question whether or not you should be backing up your WordPress site. You absolutely should. It’s sort of like being prepared for an earthquake: you know it could happen at any time, so you want to make sure you’ve got all the tooling in place to keep things safe, not if, but when it happens.

What’s your backup plan? For us, it’s logging into WordPress.com, locating which backup to use, and clicking a button to restore things to that point in time. That’s all the files of course, like WordPress itself, the theme, and plugins, but also the entire database and all the media files.

Another reason we love Jetpack Backup? It provides a complete activity log of all the changes that happen on the site. It’s one thing to have your site backed up and be able to restore it. It’s another to know what caused the issue in the first place.

Jetpack Backup offers two plans: one for daily backups and the other for real-time backups. We’ve got real-time backups running around here, and that’s a great option for large sites that are updated often, like e-commerce stores. Most sites can probably get away with daily backups instead.

That leads to another wonderful thing: Jetpack Backup is sold à la carte. That means you can just get backups if that’s all you want from Jetpack. And, hey, if you find yourself needing more from Jetpack, like all the stuff we use it for here, then that rich feature set is just a couple of clicks away.

Not sure what your WordPress backup plan is? You really ought to check out Jetpack Backup. It works extremely well for us and we can’t recommend it enough.


True Story

While working on a bit of content for @css across team members, I noticed some of what I had written had disappeared. It got saved over by accident. I forgot to turn on “Revisions” for this Custom Post Type (it was a newsletter, which we write in WordPress).

It was tempting to be like, “Oh well, I’m a dummy, I’ll just have to remember and re-write it.” But no! I have Jetpack real-time backups on this site. I was able to find the exact moment I made my changes and download a copy of the site at that moment.

I didn’t need to restore the site to that point, just what I had written. So, I loaded up the wp_posts table from the SQL dump in that backup, plucked out my writing, and put it back in place.

And of course, I enabled revisions for that Custom Post Type so it won’t happen again.

Not only is that a true story, but this is a Jetpack double-whammy, because I “unrolled” that Twitter thread right here in WordPress via a Jetpack feature.







React Component Tests for Humans


React component tests should be interesting, straightforward, and easy for a human to build and maintain.

Yet, the current state of the testing library ecosystem is not sufficient to motivate developers to write consistent JavaScript tests for React components. Testing React components—and the DOM in general—often requires some kind of higher-level wrapper around popular testing frameworks like Jest or Mocha.

Here’s the problem

Writing component tests with the tools available today is boring, and even when you get to writing them, it involves a lot of hassle. Expressing test logic in a jQuery-like style (chaining) is confusing. It doesn’t jibe with how React components are usually built.

The Enzyme code below is readable, but a bit too bulky because it uses too many words to express something that is ultimately simple markup.

expect(screen.find(".view").hasClass("technologies")).to.equal(true);
expect(screen.find("h3").text()).to.equal("Technologies:");
expect(screen.find("ul").children()).to.have.lengthOf(4);
expect(screen.contains([
  <li>JavaScript</li>,
  <li>ReactJs</li>,
  <li>NodeJs</li>,
  <li>Webpack</li>
])).to.equal(true);
expect(screen.find("button").text()).to.equal("Back");
expect(screen.find("button").hasClass("small")).to.equal(true);

The DOM representation is just this:

<div className="view technologies">
  <h3>Technologies:</h3>
  <ul>
    <li>JavaScript</li>
    <li>ReactJs</li>
    <li>NodeJs</li>
    <li>Webpack</li>
  </ul>
  <button className="small">Back</button>
</div>

What if you need to test heavier components? While the syntax is still bearable, it doesn’t help your brain grasp the structure and logic. Reading and writing several tests like this is bound to wear you out—it certainly wears me out. That’s because React components follow certain principles to generate HTML code at the end. Tests that express the same principles, on the other hand, are not straightforward. Simply using JavaScript chaining won’t help in the long run.

There are two main issues with testing in React:

  • How to even approach writing tests specifically for components
  • How to avoid all the unnecessary noise

Let’s further expand those before jumping into the real examples.

Approaching React component tests

A simple React component may look like this:

function Welcome(props) {
  return <h1>Hello, {props.name}</h1>;
}

This is a function that accepts a props object and returns a React element, described with JSX syntax.

Since a component can be represented by a function, it is all about testing functions. We need to account for arguments and how they influence the returned result. Applying that logic to React components, the focus in the tests should be on setting up props and testing for the DOM rendered in the UI. Since user actions like mouseover, click, typing, etc. may also lead to UI changes, you will need to find a way to programmatically trigger those too.
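To make the “components are functions” idea concrete, here is a dependency-free sketch (my own illustration, not code from the project): without JSX tooling, the element a component returns is just a plain object, so we can set up props, call the function, and verify the result directly.

```javascript
// A stand-in for the Welcome component above, returning a plain object
// shaped like a React element (type + props) instead of using JSX.
function Welcome(props) {
  return { type: "h1", props: { children: `Hello, ${props.name}` } };
}

// Preparation: the props. Render: call the function. Validation: inspect the output.
const element = Welcome({ name: "Mary" });
console.assert(element.type === "h1");
console.assert(element.props.children === "Hello, Mary");
```

Real component tests add DOM rendering and user events on top, but the shape of the reasoning stays the same.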

Hiding the unnecessary noise in tests

Tests require a certain level of readability achieved by both slimming the wording down and following a certain pattern to describe each scenario.

Component tests flow through three phases:

  1. Preparation (setup): The component props are prepared.
  2. Render (action): The component needs to render its DOM to the UI before either triggering any actions on it or testing for certain texts and attributes. That’s when actions can be programmatically triggered.
  3. Validation (verify): The expectations are set, verifying certain side effects over the component markup.

Here is an example:

it("should click a large button", () => {
  // 1️⃣ Preparation
  // Prepare component props
  props.size = "large";

  // 2️⃣ Render
  // Render the Button's DOM and click on it
  const component = mount(<Button {...props}>Send</Button>);
  simulate(component, { type: "click" });

  // 3️⃣ Validation
  // Verify a .clicked class is added 
  expect(component, "to have class", "clicked");
});

For simpler tests, the phases can merge:

it("should render with a custom text", () => {
  // Mixing up all three phases into a single expect() call
  expect(
    // 1️⃣ Preparation
    <Button>Send</Button>, 
    // 2️⃣ Render
    "when mounted",
    // 3️⃣ Validation
    "to have text", 
    "Send"
  );
});

Writing component tests today

Those two examples above look logical but are anything but trivial. Most of the testing tools do not provide such a level of abstraction, so we have to handle it ourselves. Perhaps the code below looks more familiar.

import ReactDOM from "react-dom";
import { act } from "react-dom/test-utils";

it("should display the technologies view", () => {
  const container = document.createElement("div");
  document.body.appendChild(container);
  
  act(() => {
    ReactDOM.render(<ProfileCard {...props} />, container);
  });
  
  const button = container.querySelector("button");
  
  act(() => {
    button.dispatchEvent(new window.MouseEvent("click", { bubbles: true }));
  });
  
  const details = container.querySelector(".details");
  
  expect(details.classList.contains("technologies")).toBe(true);
  expect(details.querySelector("h3").textContent).toBe("Technologies");
  expect(details.querySelector("button").textContent).toBe("View Bio");
});

Compare that with the same test, only with an added layer of abstraction:

it("should display the technologies view", () => {
  const component = mount(<ProfileCard {...props} />);

  simulate(component, {
    type: "click",
    target: "button",
  });

  expect(
    component,
    "queried for first",
    ".details",
    "to exhaustively satisfy",
    <div className="details technologies">
      <h3>Technologies</h3>
      <div>
        <button>View Bio</button>
      </div>
    </div>
  );
});

It does look much better. Less code, an obvious flow, and more DOM instead of JavaScript. This is not a fictional test, but something you can achieve with UnexpectedJS today.

The following section is a deep dive into testing React components without getting too deep into UnexpectedJS. Its documentation more than does the job. Instead, we’ll focus on usage, examples, and possibilities.

Writing React Tests with UnexpectedJS

UnexpectedJS is an extensible assertion toolkit compatible with all test frameworks. It can be extended with plugins, and some of those plugins are used in the test project below. Probably the best thing about this library is the handy syntax it provides to describe component test cases in React.

The example: A Profile Card component

The subject of the tests is a Profile card component.

A card component where the person’s name, photo, and number of posts are displayed to the left in a single column with a light red background, and the bio is displayed on the right in paragraph form with a title against a white background.

And here is the full component code of ProfileCard.js:

// ProfileCard.js
export default function ProfileCard({
  data: {
    name,
    posts,
    isOnline = false,
    bio = "",
    location = "",
    technologies = [],
    creationDate,
    onViewChange,
  },
}) {
  const [isBioVisible, setIsBioVisible] = useState(true);

  const handleBioVisibility = () => {
    setIsBioVisible(!isBioVisible);
    if (typeof onViewChange === "function") {
      onViewChange(!isBioVisible);
    }
  };

  return (
    <div className="ProfileCard">
      <div className="avatar">
        <h2>{name}</h2>
        <i className="photo" />
        <span>{posts} posts</span>
        <i className={`status ${isOnline ? "online" : "offline"}`} />
      </div>
      <div className={`details ${isBioVisible ? "bio" : "technologies"}`}>
        {isBioVisible ? (
          <>
            <h3>Bio</h3>
            <p>{bio !== "" ? bio : "No bio provided yet"}</p>
            <div>
              <button onClick={handleBioVisibility}>View Skills</button>
              <p className="joined">Joined: {creationDate}</p>
            </div>
          </>
        ) : (
          <>
            <h3>Technologies</h3>
            {technologies.length > 0 && (
              <ul>
                {technologies.map((item, index) => (
                  <li key={index}>{item}</li>
                ))}
              </ul>
            )}
            <div>
              <button onClick={handleBioVisibility}>View Bio</button>
              {!!location && <p className="location">Location: {location}</p>}
            </div>
          </>
        )}
      </div>
    </div>
  );
}

We will work with the component’s desktop version. You can read more about device-driven code split in React but note that testing mobile components is still pretty straightforward.

Setting up the example project

Not all tests are covered in this article, but we will certainly look at the most interesting ones. If you want to follow along, view this component in the browser, or check all its tests, go ahead and clone the GitHub repo.

## 1. Clone the project:
git clone git@github.com:moubi/profile-card.git

## 2. Navigate to the project folder:
cd profile-card

## 3. Install the dependencies:
yarn

## 4. Start and view the component in the browser:
yarn start

## 5. Run the tests:
yarn test

Here’s how the <ProfileCard /> component and UnexpectedJS tests are structured once the project has spun up:

/src
  └── /components
      ├── /ProfileCard
      |   ├── ProfileCard.js
      |   ├── ProfileCard.scss
      |   └── ProfileCard.test.js
      └── /test-utils
           └── unexpected-react.js

Component tests

Let’s take a look at some of the component tests. These are located in src/components/ProfileCard/ProfileCard.test.js. Note how each test is organized by the three phases we covered earlier.

  1. Setting up required component props for each test.
beforeEach(() => {
  props = {
    data: {
      name: "Justin Case",
      posts: 45,
      creationDate: "01.01.2021",
    },
  };
});

Before each test, a props object with the required <ProfileCard /> props is composed, where props.data contains the minimum info for the component to render.

  2. Render with a default set of props.

This test checks the whole DOM produced by the component when passing name, posts, and creationDate fields.

And here’s the test case for it:

it("should render default", () => {
  // "to exhaustively satisfy" ensures all classes/attributes are also matching
  expect(
    <ProfileCard {...props} />,
    "when mounted",
    "to exhaustively satisfy",
    <div className="ProfileCard">
      <div className="avatar">
        <h2>Justin Case</h2>
        <i className="photo" />
        <span>45{" posts"}</span>
        <i className="status offline" />
      </div>
      <div className="details bio">
        <h3>Bio</h3>
        <p>No bio provided yet</p>
        <div>
          <button>View Skills</button>
          <p className="joined">{"Joined: "}01.01.2021</p>
        </div>
      </div>
    </div>
  );
});
  3. Render with status online.

Now we check if the profile renders with the “online” status icon.

And the test case for that:

it("should display online icon", () => {
  // Set the isOnline prop
  props.data.isOnline = true;

  // The minimum to test for is the presence of the .online class
  expect(
    <ProfileCard {...props} />,
    "when mounted",
    "queried for first",
    ".status",
    "to have class",
    "online"
  );
});
  4. Render with bio text.

<ProfileCard /> accepts any arbitrary string for its bio.

So, let’s write a test case for that:

it("should display bio text", () => {
  // Set the bio prop (any sample string works here)
  props.data.bio = "Web developer and trainer";

  // The bio renders in the paragraph of the default (bio) view
  expect(
    <ProfileCard {...props} />,
    "when mounted",
    "queried for first",
    ".bio p",
    "to have text",
    "Web developer and trainer"
  );
});
  5. Render “Technologies” view with an empty list.

Clicking on the “View Skills” link should switch to a list of technologies for this user. If no data is passed, then the list should be empty.

Here’s that test case:

it("should display the technologies view", () => {
  // Mount <ProfileCard /> and obtain a ref
  const component = mount(<ProfileCard {...props} />);

  // Simulate a click on the button element ("View Skills" link)
  simulate(component, {
    type: "click",
    target: "button",
  });

  // Check if the .details element contains the technologies view
  expect(
    component,
    "queried for first",
    ".details",
    "to exhaustively satisfy",
    <div className="details technologies">
      <h3>Technologies</h3>
      <div>
        <button>View Bio</button>
      </div>
    </div>
  );
});
  6. Render a list of technologies.

If a list of technologies is passed, it will display in the UI when clicking on the “View Skills” link.

Yep, another test case:

it("should display list of technologies", () => {
  // Set the list of technologies
  props.data.technologies = ["JavaScript", "React", "NodeJs"];
 
  // Mount ProfileCard and obtain a ref
  const component = mount(<ProfileCard {...props} />);

  // Simulate a click on the button element ("View Skills" link)
  simulate(component, {
    type: "click",
    target: "button",
  });

  // Check if the list of technologies is present and matches prop values
  expect(
    component,
    "queried for first",
    ".technologies ul",
    "to exhaustively satisfy",
    <ul>
      <li>JavaScript</li>
      <li>React</li>
      <li>NodeJs</li>
    </ul>
  );
});
  7. Render a user location.

That information should render in the DOM only if it was provided as a prop.

The test case:

it("should display location", () => {
  // Set the location 
  props.data.location = "Copenhagen, Denmark";

  // Mount <ProfileCard /> and obtain a ref
  const component = mount(<ProfileCard {...props} />);
  
  // Simulate a click on the button element ("View Skills" link)
  // Location render only as part of the Technologies view
  simulate(component, {
    type: "click",
    target: "button",
  });

  // Check if the location string matches the prop value
  expect(
    component,
    "queried for first",
    ".location",
    "to have text",
    "Location: Copenhagen, Denmark"
  );
});
  8. Calling a callback when switching views.

This test does not compare DOM nodes but does check if a function prop passed to <ProfileCard /> is executed with the correct argument when switching between the Bio and Technologies views.

it("should call onViewChange prop", () => {
  // Create a function stub (dummy)
  props.data.onViewChange = sinon.stub();
  
  // Mount ProfileCard and obtain a ref
  const component = mount(<ProfileCard {...props} />);

  // Simulate a click on the button element ("View Skills" link)
  simulate(component, {
    type: "click",
    target: "button",
  });

  // Check if the stub function prop is called with false value for isBioVisible
  // isBioVisible is part of the component's local state
  expect(
    props.data.onViewChange,
    "to have a call exhaustively satisfying",
    [false]
  );
});

Running all the tests

Now, all of the tests for <ProfileCard /> can be executed with a simple command:

yarn test

Notice that tests are grouped. There are two independent tests and two groups of tests—one for each of the <ProfileCard /> views, bio and technologies. Grouping makes test suites easier to follow and is a nice way to organize logically related UI units.
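As a rough illustration of that grouping (a sketch with throwaway shims in place of the real test runner; in the actual project, Jest provides describe and it):

```javascript
// Throwaway shims standing in for the test runner's describe/it,
// so this grouping skeleton is runnable on its own.
const suite = [];
const describe = (name, fn) => { suite.push(name); fn(); };
const it = (name) => { suite.push(`  ${name}`); };

describe("ProfileCard", () => {
  it("should render default");
  it("should display online icon");

  describe("bio view", () => {
    it("should display bio text");
  });

  describe("technologies view", () => {
    it("should display the technologies view");
    it("should display list of technologies");
    it("should display location");
    it("should call onViewChange prop");
  });
});

console.assert(suite.length === 10); // 3 group names + 7 test names
```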

Some final words

Again, this is meant to be a fairly simple example of how to approach React component tests. The essence is to look at components as simple functions that accept props and return a DOM. From that point on, choosing a testing library should be based on the usefulness of the tools it provides for handling component renders and DOM comparisons. UnexpectedJS happens to be very good at that in my experience.

What should be your next steps? Look at the GitHub project and give it a try if you haven’t already! Check all the tests in ProfileCard.test.js and perhaps try to write a few of your own. You can also look at src/test-utils/unexpected-react.js, which is a simple helper exporting features from the third-party testing libraries.
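For reference, such a helper might look roughly like this wiring sketch (an assumption based on the libraries mentioned in this article, not the file’s verbatim contents):

```javascript
// src/test-utils/unexpected-react.js (approximate shape)
// Clone UnexpectedJS, teach it about the DOM and React rendering,
// then re-export the mount/simulate helpers used throughout the tests.
import unexpected from "unexpected";
import unexpectedDom from "unexpected-dom";
import unexpectedReaction from "unexpected-reaction";

export const expect = unexpected
  .clone()
  .use(unexpectedDom)
  .use(unexpectedReaction);

export { mount, simulate } from "react-dom-testing";
```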





An Overview of Lambda Architecture


Photo by Atanas Malamov on Unsplash

The amount of data produced in this world has exploded. Whether it’s from the Internet of Things (IoT), social media, app metrics, or device analytics, the amount of data that can be processed and analyzed at any given moment is staggering. Big data is on the rise, and data systems are tasked with handling it. But this begs the question: Are these systems up for the task?

The Challenges of Big Data

Data systems are most useful when they correctly answer the right questions about all the data they contain. With big data, however, new data comes into the system every minute — and there’s a lot of it. The data system faces two fundamental challenges.

Challenge #1: The Latency Problem. I can get super-precise answers to your questions based on all the data streamed from the beginning of my existence until this very moment, but that will take me a while. I hope you don’t mind waiting.

Challenge #2: The Accuracy Problem. I’ve finally got my super-precise answers for you. Sadly, they took me so long to acquire, they’re no longer up-to-date and accurate.

Introducing Lambda Architecture

Lambda Architecture is a big data paradigm, a way of structuring a data system to overcome the latency problem and the accuracy problem. It was described in detail by Nathan Marz and James Warren in their 2015 book, Big Data. Lambda Architecture structures the system in three layers: a batch layer, a speed layer, and a serving layer. We will talk about these in detail shortly.

Lambda Architecture is horizontally scalable. This means that if your data set becomes too large or the data views you need are too numerous, all you need to do is add more machines. It also confines the most complex part of the system to the speed layer, where the outputs are temporary and can be discarded (every few hours) if there is ever a need for refinement or correction.

How Does Lambda Architecture Work?

As mentioned above, Lambda Architecture is made up of three layers. The first layer — the batch layer — stores the entire data set and computes batch views. The stored data set is immutable and append-only. New data is continually streamed in and appended to the data set, but old data will always remain unchanged. The batch layer also computes batch views, which are queries or functions on the entire data set. These views can subsequently be queried for low-latency answers to questions of the entire data set. The drawback, however, is that it takes a lot of time to compute these batch views.

The second layer in Lambda Architecture is the serving layer. The serving layer loads in the batch views and, much like a traditional database, allows for read-only querying on those batch views, providing low-latency responses. As soon as the batch layer has a new set of batch views ready, the serving layer swaps out the now-obsolete set of batch views for the current set.

The third layer is the speed layer. The data that streams into the batch layer also streams into the speed layer. The difference is that while the batch layer keeps all of the data since the beginning of its time, the speed layer only cares about the data that has arrived since the last set of batch views completed. The speed layer makes up for the high latency in computing batch views by processing queries on the most recent data that the batch views have yet to take into account.
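The division of labor can be sketched in a few lines (a toy in-memory illustration, not a real implementation; the data and names are made up):

```javascript
// Batch view: per-user event counts precomputed from the entire data set.
const batchView = { alice: 100, bob: 42 };
// Speed view: counts for events that arrived after the last batch run.
const speedView = { alice: 3 };

// Serving layer: answer queries by reconciling the two views.
function countFor(user) {
  return (batchView[user] || 0) + (speedView[user] || 0);
}

console.assert(countFor("alice") === 103); // batch + recent
console.assert(countFor("bob") === 42);    // no recent events
```

When the next set of batch views lands, batchView is swapped out and speedView starts over from empty.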

The Three Layers — An Analogy

Consider the elderly man who lives alone in a giant mansion. Every room of his mansion has a clock, but except for the clock in the kitchen, all of the clocks are wrong. The man decides one day that he will set all of the clocks, using the kitchen clock as the correct time. But his memory is poor, so he writes down the current time on the kitchen clock (9:04 AM) on a piece of paper. He begins his slow walk around the mansion setting all of his clocks to 9:04 AM. What happens? By the time he gets to the very last clock in the guest bedroom of the east wing, it is 9:51 AM. He sets this clock too, as he has done with all of the other clocks, to the time written on his paper — 9:04 AM. No wonder all of his clocks are wrong!

This is the problem we would experience if a data system only had a batch layer. The answers to the questions we’re asking would no longer be up-to-date because it took so long to get those answers.

Fortunately, the man remembers that he has an old runner’s stopwatch. The next day, he starts again in the kitchen at 9:04 AM. He writes down the time on a piece of paper. He starts his stopwatch — his speed layer — and begins walking around his mansion. Now, when he gets to the very last clock in the east wing, he sees the paper with 9:04 AM written, and he sees his stopwatch says “47 minutes and 16 seconds”. With a little bit of math, he knows to set this last clock to 9:51 AM.

In this analogy — which, admittedly, is not perfect — the man is the serving layer. He takes the batch view (the paper with “9:04 AM” written on it) around his mansion to answer “What time is it?” But, he also does the additional work of reconciling the batch view with the speed layer to get the most accurate answer possible.
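The “little bit of math” is exactly the serving layer’s reconciliation — batch view plus speed-layer delta:

```javascript
// Batch view: the time written on paper (9:04 AM), in minutes since midnight.
const paperTime = 9 * 60 + 4;
// Speed layer: minutes elapsed on the stopwatch (ignoring the 16 seconds).
const stopwatch = 47;

// Serving layer: reconcile both for the freshest possible answer.
const total = paperTime + stopwatch;
const clockTime = `${Math.floor(total / 60)}:${String(total % 60).padStart(2, "0")} AM`;
console.assert(clockTime === "9:51 AM");
```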

Why Use Lambda Architecture?

In Marz and Warren’s seminal book on Lambda Architecture, Big Data, they list eight desirable properties of a big data system, describing how Lambda Architecture satisfies each one:

  1. Robustness and fault tolerance. Because the batch layer is designed to be append-only, containing the entire data set since the beginning of time, the system is human-fault tolerant. If there is any data corruption, then all of the data from the point of corruption forward can be deleted and replaced with correct data. Batch views can be swapped out for completely recomputed ones. The speed layer can be discarded. In the time it takes to generate a new set of batch views, the entire system can be reset and running again.

  2. Scalability. Lambda Architecture is designed with layers built as distributed systems. By simply adding more machines, end users can easily horizontally scale those systems. 

  3. Generalization. Since Lambda Architecture is a general paradigm, adopters aren’t locked into a specific way of computing this or that batch view. Batch views and speed layer computations can be designed to meet the specific needs of the data system.

  4. Extensibility. As new types of data enter the data system, new views will become necessary. Data systems are not locked into certain kinds or a certain number of batch views. New views can be coded and added to the system, with the only constraint being resources, which are easily scalable. 

  5. Ad hoc queries. If necessary, the batch layer can support ad hoc queries that were not available in the batch views. Assuming the high-latency for these ad hoc queries is permissible, then the batch layer’s usefulness is not restricted only to the batch views it generates.

  6. Minimal maintenance. Lambda Architecture, in its typical incarnation, uses Apache Hadoop for the batch layer and ElephantDB for the serving layer. Both are fairly simple to maintain.

  7. Debuggability. The inputs to the batch layer’s computation of batch views are always the same: the entire data set. In contrast to debugging views computed on a snapshot of a stream of data, the inputs and outputs for each layer in Lambda Architecture are not moving targets, vastly simplifying the debugging of computations and queries.

  8. Low latency reads and updates. In Lambda Architecture, the last property of a big data system is fulfilled by the speed layer, which offers real-time queries of the latest data set.

The Disadvantages of Lambda Architecture

While the advantages of Lambda Architecture seem numerous and straightforward, there are some disadvantages to keep in mind. First and foremost, cost will become a consideration. While how to scale is not very complex — just add more machines — we can see that the batch layer will necessarily need to expand and grow over time. Since all data is append-only and no data in the batch layer is discarded, the cost of scaling will necessarily grow with time.

Others have noted the challenge of maintaining two separate sets of code to compute views for the batch layer and the speed layer. Both layers operate on the same set—or, in the case of the speed layer, subset—of data, and the questions asked of both layers are similar. However, because the two layers are built on completely different systems (for example, Hadoop or Snowflake for the batch layer, but Storm or Spark for the speed layer), code maintenance for two separate systems can be complicated.

Lambda Architecture in Machine Learning

In the field of machine learning, there’s no doubt that more data is better. For machine learning to apply algorithms or detect patterns, however, it needs to receive its data in a way that makes sense. Rather than receiving data from different directions without any semblance of structure, machine learning can benefit by processing data through a Lambda Architecture data system first. From there, machine learning algorithms can ask questions and begin to make sense of the data that enters the system.

Lambda Architecture for IoT

While machine learning might be on the output side of a Lambda Architecture, IoT might very well be on the input side of the data system. Imagine a city of millions of automobiles, each one equipped with sensors to send data on weather, air quality, traffic, location information, driving habits, and so on. This is the massive stream of data that would be fed into the batch layer and speed layer of a Lambda Architecture. IoT devices are a perfect example of providing the data in big data.

Stream Processing and Lambda Architecture Challenges

We noted above that “the speed layer only cares about the data that has arrived since the completion of the last set of batch views.” While this is true, it’s important to make a clarification: that small subset of data is not stored; it is processed immediately as it streams in, and then it is discarded. The speed layer is also referred to as the “stream-processing layer.” Remember that the goal of the speed layer is to provide low-latency, real-time views of the most recent data, the data that the batch views have yet to take into account.

On this point, the original authors of the Lambda Architecture refer to “eventual accuracy,” noting that the batch layer strives for exact computation while the speed layer strives for approximate computation. The approximate computation of the speed layer will eventually be replaced by the next set of batch views, moving the system towards “eventual accuracy.”

Processing the stream in real-time in order to produce views that are constantly updated as new data streams in (on the order of milliseconds) is an incredibly complex task. Partnering a document-based database with an indexing and querying system is often recommended in these cases. 

Differences Between Lambda Architecture and Kappa Architecture

We noted above that a considerable disadvantage of Lambda Architecture is maintaining two separate code bases to handle similar processing since the batch layer and the speed layer are different distributed systems. Kappa Architecture seeks to address this concern by removing the batch layer altogether.

Instead, both the real-time views computed from recent data and the batch views computed from all data are performed within a single stream processing layer. The entire data set — the append-only log of immutable data — streams through the system quickly in order to produce the views with the exact computations. Meanwhile, the original “speed layer” tasks from Lambda Architecture are retained in Kappa Architecture, still providing the low-latency views with approximate computations.

This difference in Kappa Architecture allows for the maintaining of a single system for generating views, which simplifies the system’s code base considerably.
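The single-code-base point can be shown with a toy sketch (illustrative only; the data is made up): the same processing function is replayed over the full log for exact views and over recent events for fast, approximate ones.

```javascript
// One append-only log of immutable events (toy data).
const log = [{ user: "alice" }, { user: "bob" }, { user: "alice" }];

// A single processing function serves both purposes.
function countByUser(events) {
  return events.reduce((acc, e) => {
    acc[e.user] = (acc[e.user] || 0) + 1;
    return acc;
  }, {});
}

// Exact view: replay the entire log through the stream processor.
const exactView = countByUser(log);
// Low-latency view: only the events since the last full replay.
const recentView = countByUser(log.slice(-1));

console.assert(exactView.alice === 2 && exactView.bob === 1);
console.assert(recentView.alice === 1);
```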

Lambda Architecture via Containers on Heroku

Coordinating and deploying the various tools needed to support a Lambda Architecture — especially when you’re in the starting up and experimenting stage — is accomplished easily with Docker.

Heroku serves well as a container-based cloud platform-as-a-service (PaaS), allowing you to deploy and scale your applications with ease. For the batch layer, you would likely deploy a Docker container for Apache Hadoop. For the speed layer, you might consider deploying Apache Storm or Apache Spark. Lastly, for the serving layer, you could deploy Docker containers for Apache Cassandra or MongoDB, coupled with indexing and querying by Elasticsearch.

Conclusion

Taking on the task of big data is not for the faint of heart. Scaling and system robustness are huge challenges, so paradigms like Lambda Architecture bring excellent guidance. As massive amounts of data stream into the data system, the batch layer provides high-latency accuracy, while the speed layer provides a low-latency approximation. Meanwhile, the serving layer responds to queries by reconciling these two views to provide the best possible response. Implementing a Lambda Architecture data system is not trivial. While perhaps complex and initially intimidating, the tools are available and ready to be deployed.




Help Exporting Sparkle Overlays in Adobe Illustrator so Don't Lose Sparkle as PNGs/JPGs


Hi, I’m trying to export some very sparkly photo collages I created in Adobe Illustrator, but when I export them as PNGs/JPGs, they lose a lot of sparkle (attaching the low-sparkle version and the original version). When I export as an SVG, the sparkles’ integrity remains as I created them. However, I need a JPG for a FB cover photo, I believe. Could someone please advise if there are simple checkboxes I’m missing when exporting to maintain the glitter overlay effects? Thanks!


High sparkle as created


Low sparkle as exported




Stretched text effect


Hi,

I’m trying to mimic this text for a friend but having trouble with the stretch.

I want to replace ‘Dublin’ with ‘St Andrews’ but I’m unsure how to create the stretch at the bottom of the type?

I was thinking of warping the type with arch or arc lower (using a negative value) and then liquifying the text at the bottom, but the liquify looks sloppy. I think my problem has more to do with the letter ‘S’ having a less angular shape.

How can I stretch the type but also make it arched?

Thanks for any help, folks!



