r/webdev - Guidance with enabling users to post on my website, (styling text, embedding images)


Hello guys, I want to develop a simple front end for a website where users can write, change text font and size as they write, and embed images within their text. I basically want to enable them to make posts on my website, similar to the text box I am filling out right now on Reddit (as I post this question), enabling bold and italic text, or embedding images:

[screenshot of the Reddit rich text post editor]

How do I make something similar, but simpler, for my website?

I am teaching myself web-app development and so far have only basic front-end and back-end dev skills. I’d appreciate a simple solution and some guidance with this, please.

Hope my question is clear. The link to the picture should clarify what I need. Thanks!




HTML for Subheadings and Headings


Let’s say you have a double heading situation going on. A little one on top of a big one. It comes up, I dunno, a billion times a day, I’d say. What HTML do you go for? Dare I say, it depends? But have you considered all the options? And how those options play out semantically and accessibility-y?

As we do around here sometimes, let’s take a stroll through the options.

The visual examples

Let’s assume these are not the <h1> on the page. Not that things would be dramatically different if one of them was, but just to set the stage. I find this tends to come up in subsections or cards the most.

Here’s an example a friend brought up in a conversation the other day:

Here’s one that I worked on also just the other day:

And here’s a classic card:

Option 1: The ol’ <h3> then <h2>

The smaller one is on the top, and the bigger one is below, and obviously <h3> is always smaller than an <h2> right?

<h3>Subheading</h3>
<h2>Heading</h2>

This is probably pretty common thinking and my least favorite approach.

I’d rather see class names in use so that the styling is isolated to those classes.

<h3 class="card-subhead">Subheading</h3>
<h2 class="card-head">Heading</h2>
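For example, the isolated styling might look something like this (a sketch; the values are illustrative, not from the article):

.card-subhead {
  font-size: 75%;
  text-transform: uppercase;
  opacity: 0.75;
}
.card-head {
  font-size: 150%;
}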

Don’t make semantic choices, particularly those that affect accessibility, based on this purely visual treatment.

The bigger weirdness is using two heading elements at all. Using two headings for a single bit of content doesn’t feel right. The combination feels like: “Hey here’s a major new section, and then here’s another subheading because there will be more of them in this subsection so this one is just for this first subsection.” But then there are no other subsections and no other subheadings. Even if that isn’t weird, the order is weird with the subsection header coming first.

If you’re going to use two different heading elements, it seems to me the smaller heading is almost more likely to make more sense as the <h2>, which leads us to…

Option 2: Small ‘n’ mighty <h2> and <h3>

If we’ve got classes in place, and the subheading works contextually as the more dominant heading, then we can do this:

<h2 class="card-subheading">Subheading</h2>
<h3 class="card-heading">Heading</h3>

Just because that <h2> is visually smaller doesn’t mean it still can’t be the dominant heading in the document outline. If you look at the example from CodePen above, the title “Context Switching” feels like it works better as a heading than the following sentence does. I think using the <h2> on that smaller header works fine there, and keeps the structure more “normal” (I suppose) with the <h2> coming first.

Still, using two headings for one section still feels weird.

Option 3: One header, one div

Perhaps only one of the two needs to be a heading? That feels generally more correct to me. I’ve definitely done this before:

<div class="card-subheading">Subheading</div>
<h3 class="card-heading">Heading</h3>

That feels OK, except for the weirdness that the subhead might have relevant content and people could potentially miss that if they navigated to the head via screenreader and read from there forward. I guess you could swap the visual order…

<hgroup> <!-- hgroup is deprecated, just defiantly using it anyway -->
  <h3 class="card-heading">Heading</h3>
  <div class="card-subheading">Subheading</div>
</hgroup>

hgroup {
  display: flex;
  flex-direction: column;
}
hgroup .card-subheading {
  /* Visually, put on top, without affecting source order */
  order: -1;
}

But I think messing with visual order like that is generally a no-no, as in, awkward for sighted screenreader users. So don’t tell anybody I showed you that.

Option 4: Keep it all in one heading

Since we’re only showing a heading for one bit of content anyway, it seems right to only use a single header.

<h2>
  <strong>Subheading</strong>
  Heading
</h2>

Using the <strong> element in there gives us a hook in the CSS to do the same type of styling. For example…

h2 strong {
  display: block;
  font-size: 75%;
  opacity: 0.75;
}

The trick here is that the two parts then need to work/read together as one. So, either they read together naturally, or you could use a colon (:) or something.

<h2>
  <strong>New Podcast:</strong>
  Struggling with Basic HTML
</h2>

ARIA Role

It turns out that there is an ARIA role dedicated to subtitles: doc-subtitle.

So like:

<h2 class="card-heading">Heading</h2>
<div role="doc-subtitle">Subheading</div>

I like styles based on ARIA roles in general (because it demands their proper use), so the styling could be done directly with it:

[role="doc-subtitle"] { }

Testing from Steve and Léonie suggests that browsers generally treat it as a “heading without a level.” JAWS is the exception, treating it like an <h2>. That seems OK… maybe? Steve even thinks putting the subheading first is OK.

The bad and the ugly

What’s happening here is the subheading is providing general context to the heading — kinda like labelling it:

<label for="card-heading-1">Subheading</label>
<h2 id="card-heading-1" class="card-heading">Heading</h2>

But we’re not dealing in form elements here, so that’s not recommended. Another way to make it a single heading would be to use a pseudo-element to place the subhead, like:

<h2 class="card-heading" data-subheading="Subheading">Heading</h2>

.card-heading::before {
  content: attr(data-subheading);
  display: block;
  font-size: 75%;
  opacity: 0.75;
}

It used to be that screen readers ignored pseudo-content, but it’s gotten better, though still not perfect. That makes this only slightly more usable, but the text is still un-selectable and not on-page-findable, so I’d rather not go here.






Using Twilio and Corvid: Simple SMS Integration for Your Web App


Text messaging is the latest trend in e-commerce. It enables you to talk directly with your consumers and share new products or shipping updates. Twilio is a great tool that enables us as developers to add text messaging to our apps with minimal headaches. At Corvid by Wix, we’re all about increasing dev velocity, so with Twilio’s npm module, we can add text messaging to our website in less than 10 minutes. Don’t believe me? Let’s take a look.


The first thing you need to do is sign up for a Twilio trial account. This is free, and it will provide you with 3 crucial details: your account SID, your auth token, and your Twilio phone number. While you are there, it’s a good idea to also add your phone number as a verified number, since free trial accounts can only send to numbers on Twilio’s verified list.

[screenshot: verifying active numbers]

Now let’s open up Corvid by Wix. If you don’t already have a Wix site, it’s easy to get started. All you need to do is sign up for a free Wix account, create a site from a template (or blank if you prefer!), and enable Corvid. To enable Corvid in your Wix Editor, select Dev Mode from the top menu bar and Turn on Dev Mode.

[screenshot: turning on Dev Mode]

Add a button to your site from the (+) icon. This button is what will trigger the Twilio text message. Add an onClick event to this button through the properties panel. When you click on a UI element on the page, the properties panel loads with the details of that element. Go to the Events section, click the (+) next to onClick, and then hit Enter. You’ll see a new stubbed-out onClick event listener in your code editor.


Inside the new onClick event listener, let’s add a call to a new function, sendSms().
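Something like this (a sketch; button1 stands in for whatever ID your button has in the editor):

export function button1_click(event) {
  sendSms();
}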

Now the sendSms() function has to come from somewhere, so let’s have it come from the backend code so no one can get access to our secrets. To do this in Corvid, all you have to do is import the function from the backend code at the top of the UI code editor like so:
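Assuming the backend file we create next is named twilio.jsw, that import is one line:

import { sendSms } from 'backend/twilio';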

Of course, this means we have to have a twilio.jsw file in our backend code files, so let’s create one now. If you go to the Backend section in the Site Structure pane, you can add a new JavaScript Module named twilio.jsw.

In your new Twilio file, we need to create an exported function so it is available to be imported by the UI (or any other code that wants to use it). To do so, stub out the new sendSms() function. 
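A minimal stub might look like this:

export async function sendSms() {
  // the Twilio call will go here
}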

This is where we will make our Twilio call. To do that, we need to import the Twilio npm helper package. Under node_modules in the site structure, add a new module and search for Twilio. Install the Twilio package. When Corvid handles your npm packages, it automatically creates and maintains your package.json file for you, reducing your maintenance responsibilities.

[screenshot: installing the Twilio package]

To use the Twilio helper package, you’ll need to import it in your backend code. Make sure you are working in your twilio.jsw file, and import twilio at the top of your code.
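That import is a single line at the top of twilio.jsw:

import twilio from 'twilio';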

The next thing we need to do is instantiate a new Twilio client. You need to pass it 2 parameters: your Twilio Account SID and your Twilio Auth Token. You can create new constants for these values:
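For example (the values are placeholders for the credentials from your Twilio console):

const accountSid = 'YOUR_ACCOUNT_SID';
const authToken = 'YOUR_AUTH_TOKEN';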

Now create a new instance of Twilio and pass it these values.
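With the npm helper package, that is one line:

const client = twilio(accountSid, authToken);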

Awesome! Now we can start working with Twilio. Today, we’re just interested in sending a message, so let’s use the Twilio SMS message create method. We’ll create a new message, which takes in a JSON object with 3 parameters: to, from, and body.

We need to fill in those details. to will be your personal number; again, make sure it is verified with Twilio if you are working with their free trial. from will be your new active number on Twilio. When you created an account, you should have created a phone number that sends messages on behalf of your account; this will always be your from number when using Twilio. And last is the body. This can be whatever message you want to send; maybe today it is just Hello World. So, all filled out, it may look something like this:
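Putting it all together, a sketch of backend/twilio.jsw (the SID, token, and phone numbers are placeholders):

import twilio from 'twilio';

const accountSid = 'YOUR_ACCOUNT_SID';
const authToken = 'YOUR_AUTH_TOKEN';
const client = twilio(accountSid, authToken);

export async function sendSms() {
  await client.messages.create({
    to: '+15551234567',    // your verified personal number
    from: '+15557654321',  // your Twilio phone number
    body: 'Hello World'
  });
}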

Lastly, we need to send this to production, which is super easy with Corvid. Click on Publish in the top right corner, and ta-da! Your site is in production. Click on your SMS button and check to make sure you get a text message. If you prefer to do your testing in QA, you can always use the Preview button instead of Publish.

And that’s how simple it is to work with npm modules in your Corvid site! If you want to see other ways to speed up your web dev, feel free to reach out!




iOS Swift UICollectionView with Horizontal Pagination


Hey, today I’m presenting horizontal pagination in Swift 5. The reference implementation is adapted from one of my organization’s libraries.

If you are a newbie, then please refer to the following links first:

  1. https://developer.apple.com/documentation/uikit/uicollectionview
  2. https://www.raywenderlich.com/9334-uicollectionview-tutorial-getting-started

Horizontal Pagination

Basic pagination works like this: you pull to a threshold value; beyond that, it hits an API that gives back some array items, which you append to the main array. Pagination consists of 2 actions: refresh all and load more. Refresh all is for when you need fresh content from the beginning, and load more is for when you need more data at the end. In the context of horizontal pagination, you drag the collection towards the right to refresh all; for load more, you scroll to the last content and drag towards the left.

So, moving ahead, let’s create a new project named HorizontalPaginationDemo.

Project Setup

  1. Open the default Main.storyboard. There is a default view controller in Interface Builder (IB); add a UICollectionView with top, left, and right constraints of 0 and a height of 200.
  2. There is a default UICollectionViewCell present in the collection view. Extend it and add a UILabel in the centre.
  3. Create an IBOutlet for the collection view in the default ViewController class, and set up the delegate and data source methods for the number of items and the cell for each item.
func setupCollectionView() {
    self.collectionView.delegate = self
    self.collectionView.dataSource = self
}
  4. For horizontal or vertical pagination we need to specify in which direction the collection view is allowed to scroll. For that, set the collection view property alwaysBounceHorizontal to true.
self.collectionView.alwaysBounceHorizontal = true

5. Add a subclass of UICollectionViewCell named HorizontalCollectionCell and set it as the cell’s class in IB.

6. Add an IBOutlet for the label in the above collection cell subclass, then add the method below. Updating the background colour will differentiate the cells.

func update(index: Int) {
    self.titleLabel.text = "\(index) Value"
    self.backgroundColor = index % 2 == 0 ? .lightGray : .lightText
}

7. Coming back to the view controller class, add an array variable of integer type. Initialize the array with some dummy data, then update the collection’s delegate methods as shown below.

extension ViewController: UICollectionViewDelegate, UICollectionViewDataSource {

    func collectionView(_ collectionView: UICollectionView,
                        numberOfItemsInSection section: Int) -> Int {
        return self.items.count
    }

    func collectionView(_ collectionView: UICollectionView,
                        cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
        let cell = collectionView.dequeueReusableCell(
            withReuseIdentifier: "HorizontalCollectionCell",
            for: indexPath
        ) as! HorizontalCollectionCell
        cell.update(index: indexPath.row)
        return cell
    }
}

It should look like this on running the project.

Concept

Assuming the reader knows about contentOffset and contentInset: contentOffset is a CGPoint that tells you the content’s position within the frame in the X and Y directions, while contentInset is the space/padding you can give either on the top and bottom (in a vertical flow layout) or on the left and right (in a horizontal flow layout) of the collection view.

So, depending on the pagination case, we will add a subview. That subview contains a UIActivityIndicatorView (a loader, in other words), which spins while the view is visible. The visuals for refresh all and load more are shown below.

[animation: pull to refresh, i.e. refresh all]

[animation: load more]

Let’s begin by creating a manager class named HorizontalPaginationManager. This class will handle the pagination actions. It is good practice to add managers to handle common actions like this; it makes the code reusable.

Code Initiation

So I’ll explain it step by step (a sketch of the resulting class follows below):

  1. Add the above-mentioned variables to your manager class (lines 19–27 of the source).
  2. isLoading tells whether a load is in progress. It helps to avoid duplicate requests.
  3. isObservingKeyPath helps in observing changes to the content offset. It lets us check the threshold distance at which we call refresh all or load more. In the de-initialiser, the observer is removed.
  4. scrollView is typed as the superclass, UIScrollView, though that is not strictly required. We will add the above-mentioned observer on this variable.
  5. The leftmost/rightmost views are the views we add depending on the action, i.e. the left one for refresh all and the right one for load more. Each view’s visibility denotes that an action is in progress.
  6. The rest are colours used for customisation as required.
  7. Add a delegate of type HorizontalPaginationManagerDelegate. This is needed to trigger the actions to perform while paginating. The protocol consists of two methods, one per action. Completion closures are required in order to remove the loader views and reset the pagination state so it can be used again.
  8. init(), aka the initialiser, requires a scrollView, generally your collection view.

Now you have to set up the leftmost/rightmost views as mentioned above; in both cases I have added an activity indicator. Set the scrollView’s left/right inset respectively, based on the view’s width. Also, when the action completes, we have to assign nil and remove the view from the scrollView so that we don’t get duplicates.
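Here is a minimal sketch of the class described above (names follow the article; the details are assumptions rather than the exact library code):

import UIKit

protocol HorizontalPaginationManagerDelegate: AnyObject {
    func refreshAll(completion: @escaping () -> Void)
    func loadMore(completion: @escaping () -> Void)
}

class HorizontalPaginationManager: NSObject {

    var isLoading = false                  // avoids duplicate requests
    var isObservingKeyPath = false         // tracks the contentOffset observer
    weak var scrollView: UIScrollView?     // generally your collection view
    var leftMostView: UIView?              // loader shown while refreshing all
    var rightMostView: UIView?             // loader shown while loading more

    weak var delegate: HorizontalPaginationManagerDelegate?

    init(scrollView: UIScrollView) {
        self.scrollView = scrollView
        super.init()
    }

    deinit {
        // remove the contentOffset observer here
    }
}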

Observing Offset

Now you have to override the observeValue(forKeyPath:of:change:context:) method. Observe the contentOffset key and check for the .new value so that we get the latest change. On each change we’ll call setContentOffset, in which there are 2 conditions (a sketch follows the list below):

  1. if offsetX < -100 && !self.isLoading: if you dragged beyond the threshold value of -100 and nothing is currently loading, perform the refresh-all action. Set self.isLoading to true so the action fires only once, then call the refreshAll delegate method; its completion callback resets the loading state, i.e. it removes the leftmost view and sets self.isLoading back to false.
  2. if contentWidth > frameWidth && offsetX > (diffX + 130) && !self.isLoading: if your content size is greater than your scrollView’s frame size, you dragged beyond the threshold value of 130, and nothing is currently loading, perform the load-more action. Set self.isLoading to true so the action fires only once, then call the loadMore delegate method; its completion callback resets the loading state, i.e. it removes the rightmost view and sets self.isLoading back to false.
  3. The final part is to set up pagination in your ViewController class.
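A sketch of that observation logic, following the property names from the class sketch above (setLeftMostView, removeLeftMostView, and their right-side counterparts are the hypothetical loader-view helpers described earlier):

// Inside HorizontalPaginationManager:

func addScrollViewOffsetObserver() {
    guard !self.isObservingKeyPath else { return }
    self.scrollView?.addObserver(self, forKeyPath: "contentOffset",
                                 options: [.new], context: nil)
    self.isObservingKeyPath = true
}

override func observeValue(forKeyPath keyPath: String?, of object: Any?,
                           change: [NSKeyValueChangeKey: Any]?,
                           context: UnsafeMutableRawPointer?) {
    guard keyPath == "contentOffset",
          let offset = change?[.newKey] as? CGPoint else { return }
    self.setContentOffset(offset)
}

private func setContentOffset(_ offset: CGPoint) {
    guard let scrollView = self.scrollView else { return }
    let offsetX = offset.x
    if offsetX < -100, !self.isLoading {            // 1. refresh all
        self.isLoading = true
        self.setLeftMostView()
        self.delegate?.refreshAll { [weak self] in
            self?.removeLeftMostView()
            self?.isLoading = false
        }
        return
    }
    let contentWidth = scrollView.contentSize.width
    let frameWidth = scrollView.frame.size.width
    let diffX = contentWidth - frameWidth
    if contentWidth > frameWidth,
       offsetX > diffX + 130, !self.isLoading {     // 2. load more
        self.isLoading = true
        self.setRightMostView()
        self.delegate?.loadMore { [weak self] in
            self?.removeRightMostView()
            self?.isLoading = false
        }
    }
}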

For that final part, I called a fetchItems method in viewDidLoad, as we initially want to load items without any pagination actions. I added a delay to make it feel like an API call. Like I said earlier, refresh all resets the data, i.e. sets fresh data, and load more appends data to the same content.

I hope you like this tutorial. Feel free to reach out! Nickelfox

GitHub: https://github.com/Abhishek9634/HorizontalPaginationDemo 

Happy Coding!




Building Custom Data Importers: What Engineers Need to Know


Importing data is a common pain point for engineering teams. Whether it’s CRM data, inventory SKUs, or customer details, importing data into various applications and building a solution for it is a frustrating experience nearly every engineer can relate to. Data import, as a critical product experience, is a huge headache. It delays time to value for customers, strains internal resources, and takes valuable development cycles away from developing key, differentiating product features.

Frequent error messages end-users receive when importing data. Why do we expect customers to fix this themselves?

Data importers, specifically CSV importers, haven’t been treated as key product features within the software and customer experience. As a result, engineers tend to dedicate an exorbitant amount of effort to creating less-than-ideal solutions for customers to successfully import their data.

Engineers typically create lengthy, technical documentation for customers to review when an import fails. However, this doesn’t truly solve the issue; it just shifts the burden of a great experience from the product onto the end-user.

In this article, we’ll address the current problems with importing data and discuss a few key product features that are necessary to consider if you’re faced with a decision to build an in-house solution.

Importing data is typically frustrating for anyone involved at a data-led company. Simply put, there has never been a standard for importing customer data. Until now, teams have deferred to CSV templates, lengthy documentation, video tutorials, or buggy in-house solutions to allow users the ability to import spreadsheets. Companies trying to import CSV data can run into a variety of issues such as:

  • Fragmented data: With no standard way to import data, we get emails going back and forth with attached spreadsheets that are manually imported. As spreadsheets get passed around, there are obvious version control challenges. Who made this change? Why don’t these numbers add up as they did in the original spreadsheet? Why are we emailing spreadsheets containing sensitive data?
  • Improper formatting: CSV import errors frequently occur when formatting isn’t done correctly. As a result, companies often rely on internal developer resources to properly clean and format data on behalf of the client — a process that can take hours per customer, and may lead to churn anyway. This includes correcting dates or splitting fields that need to be healed prior to importing.
  • Encoding errors: There are plenty of instances where a spreadsheet can’t be imported because it’s improperly encoded. For example, a company may need a file to be saved with UTF-8 encoding (the encoding typically preferred for email and web pages) in order to then be uploaded properly to their platform. Incorrect encoding can result in a lengthy chain of communication where the customer is burdened with correcting and re-importing their data.
  • Data normalization: A lack of data normalization results in data redundancy and a never-ending string of data quality problems that make customer onboarding particularly challenging. One example includes formatting email addresses, which are typically imported into a database, or checking value uniqueness, which can result in a heavy load on engineers to get the validation working correctly.

Remember building your first CSV importer?

When it comes down to creating a custom-built data importer, there are a few critical features that you should include to help improve the user experience. (One caveat: a data importer can be time-consuming not only to create but also to maintain. It’s easy to ensure your company has adequate engineering bandwidth when first creating a solution, but what about maintenance in 3, 6, or 12 months?)

A preview of Flatfile Portal. It integrates in minutes using a few lines of JavaScript.

Data mapping

Mapping or column-matching (the two terms are often used interchangeably) is an essential requirement for a data importer, as the file import will typically fail without it. An example is configuring your data model to accept contact-level data. If one of the required fields is “address” and the customer who is trying to import data chooses a spreadsheet where the field is labeled “mailing address,” the import will fail because “mailing address” doesn’t correlate with “address” in the back-end system. This is typically ‘solved’ by providing a pre-built CSV template for customers, who then have to manipulate their data, effectively increasing time to value during a product experience. Data mapping needs to be included in the custom-built product as a key feature to retain data quality and improve the customer data onboarding experience.

Auto-column matching CSV data is the bread and butter of Portal, saving massive amounts of time for customers while providing a delightful import experience.

Data validation

Data validation, which checks whether the data matches an expected format or value, is another critical feature to include in a custom data importer. Data validation is all about ensuring the data is accurate and specific to your required data model. For example, if special characters can’t be used within a certain template, error messages can appear during the import stage. Spreadsheets with hundreds of rows containing validation errors are a nightmare for customers, who will have to fix these issues themselves, or for your team, which will spend hours on end cleaning data. Automatic data validators streamline the healing of incoming data without the need for manual review.

We built Data Hooks into Portal to quickly normalize data on import. A perfect use-case would be validating email uniqueness against a database.
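As a rough illustration of the idea (hypothetical code, not Flatfile’s actual API), a row-level validator might look something like this:

// Hypothetical field validators: return true, or an error message.
const validators = {
  email: (value) => /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(value) || 'Invalid email',
  age: (value) => Number.isInteger(Number(value)) || 'Age must be a whole number'
};

// Collect per-field errors for a row instead of rejecting the whole file.
function validateRow(row) {
  const errors = {};
  for (const [field, validate] of Object.entries(validators)) {
    const result = validate(row[field]);
    if (result !== true) errors[field] = result;
  }
  return errors; // e.g. { email: 'Invalid email' }
}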

Data parsing

Data parsing is the process of taking an aggregation of information (in a spreadsheet) and breaking it into discrete parts. It’s the separation of data. In a custom-built data importer, a data parsing feature should not only have the ability to go from a file to an array of discrete data but also streamline the process for customers.

Data transformation

Data transformation means making changes to imported data as it’s flowing into your system to meet an expected or desired value. Rather than sending data back to users with an error message for them to fix, data transformation can make small, systematic tweaks so that the users’ data is more usable in your backend. For example, when transferring a task list, prioritization data could be transformed into a different value, such as numbers instead of labels.

Data Hooks normalize imported customer data automatically using validation rules set in the Portal JSON config. These highly adaptable hooks can be worked to auto-validate nearly any incoming customer data.
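To make the transformation idea concrete (again, hypothetical code rather than Flatfile’s API), a small systematic tweak for the task-list example above might look like this:

// Map priority labels to numbers instead of bouncing the file back.
const priorityMap = { low: 1, medium: 2, high: 3 };

function transformRow(row) {
  const key = String(row.priority).toLowerCase();
  return {
    ...row,
    priority: priorityMap[key] !== undefined ? priorityMap[key] : row.priority
  };
}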

We’ve baked all of the above features into Portal, our flagship CSV importer at Flatfile. Now that we’ve reviewed some of the must-have features of a data importer, the next obvious question for an engineer building an in-house importer is typically… should they?

Engineering teams that are taking on this task typically use custom or open source solutions, which may not adhere to specific use-cases. Building a comprehensive data importer also brings UX challenges when building a UI and maintaining application code to handle data parsing, normalization, and mapping. This is prior to considering how customer data requirements may change in future months and the ramifications of maintaining a custom-built solution.

Companies facing data import challenges are now considering integrating a pre-built data importer such as Flatfile Portal. We’ve built Portal to be the elegant import button for web apps. With just a few lines of JavaScript, Portal can be implemented alongside any data model and validation ruleset, typically in a few hours. Engineers no longer need to dedicate hours cleaning up and formatting data, nor do they need to custom build a data importer (unless they want to!). With Flatfile, engineers can focus on creating product-differentiating features, rather than work on solving spreadsheet imports.

Importing data is fraught with challenges, and there are several critical features necessary to include when building a data importer. The alternative to a custom-built solution is to look for a pre-built data importer such as Portal.

Flatfile’s mission is to remove barriers between humans and data. With AI-assisted data onboarding, they eliminate repetitive work and make B2B data transactions fast, intuitive, and error-free. Flatfile automatically learns how imported data should be structured and cleaned, enabling customers and teams to spend more time using their data instead of fixing it. Flatfile has transformed over 300 million rows of data for companies like ClickUp, Blackbaud, Benevity, and Toast. To learn more about Flatfile’s products, Portal and Concierge, visit flatfile.io.




TypeScript, Minus TypeScript | CSS-Tricks


Unless you’ve been hiding under a rock the last several years (and let’s face it, hiding under a rock sometimes feels like the right thing to do), you’ve probably heard of and likely used TypeScript. TypeScript is a syntactical superset of JavaScript that adds — as its name suggests — typing to the web’s favorite scripting language.

TypeScript is incredibly powerful, but is often difficult to read for beginners and carries the overhead of needing a compilation step before it can run in a browser due to the extra syntax that isn’t valid JavaScript. For many projects this isn’t a problem, but for others this might get in the way of getting work done. Fortunately the TypeScript team has enabled a way to type check vanilla JavaScript using JSDoc.

Setting up a new project

To get TypeScript up and running in a new project, you’ll need NodeJS and npm. Let’s start by creating a new project and running npm init. For the purposes of this article, we are going to be using VS Code (https://code.visualstudio.com) as our code editor. Once everything is set up, we’ll need to install TypeScript:

npm i -D typescript

Once that install is done, we need to tell TypeScript what to do with our code, so let’s create a new file called tsconfig.json and add this:

{
  "compilerOptions": {
    "target": "esnext",
    "module": "esnext",
    "moduleResolution": "node",
    "lib": ["es2017", "dom"],
    "allowJs": true,
    "checkJs": true,
    "noEmit": true,
    "strict": false,
    "noImplicitThis": true,
    "alwaysStrict": true,
    "esModuleInterop": true
  },
  "include": [ "script", "test" ],
  "exclude": [ "node_modules" ]
}

For our purposes, the important lines of this config file are the allowJs and checkJs options, which are both set to true. These tell TypeScript that we want it to evaluate our JavaScript code. We’ve also told TypeScript to check all files inside of a /script directory, so let’s create that and a new file in it called index.js.

A simple example

Inside our newly-created JavaScript file, let’s make a simple addition function that takes two parameters and adds them together:

function add(x, y) {
  return x + y;
}

Fairly simple, right? add(4, 2) will return 6, but because JavaScript is dynamically-typed you could also call add with a string and a number and get some potentially unexpected results:

add('4', 2); // returns '42'

That’s less than ideal. Fortunately, we can add some JSDoc annotations to our function to tell users how we expect it to work:

/**
 * Add two numbers together
 * @param {number} x
 * @param {number} y
 * @return {number}
 */
function add(x, y) {
  return x + y;
}

We’ve changed nothing about our code; we’ve simply added a comment to tell users how the function is meant to be used and what value they should expect it to return. We’ve done this by utilizing JSDoc’s @param and @return annotations, with types set in curly braces ({}).

Trying to run our incorrect snippet from before throws an error in VS Code:

TypeScript evaluates that a call to add is incorrect if one of the arguments is a string.

In the example above, TypeScript is reading our comment and checking it for us. In actual TypeScript, our function is now the equivalent of writing:

/**
 * Add two numbers together
 */
function add(x: number, y: number): number {
  return x + y;
}

Just like we used the number type, we have access to dozens of built-in types with JSDoc, including string, object, Array as well as plenty of others, like HTMLElement, MutationRecord and more.
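For example, a quick sketch using a couple of those built-in types:

/** @type {Array<number>} */
const scores = [];
scores.push(98);

/** @type {Map<string, number>} */
const ages = new Map();
ages.set('Caleb', 33);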

One added benefit of using JSDoc annotations over TypeScript’s proprietary syntax is that it provides developers an opportunity to provide additional metadata around arguments or type definitions by providing those inline (hopefully encouraging positive habits of self-documenting our code).

We can also tell TypeScript that instances of certain objects might have expectations. A WeakMap, for instance, is a built-in JavaScript object that creates a mapping between any object and any other piece of data. This second piece of data can be anything by default, but if we want our WeakMap instance to only take a string as the value, we can tell TypeScript what we want:

/** @type {WeakMap<object, string>} */
const metadata = new WeakMap();

const object = {};
const otherObject = {};

metadata.set(object, 42);
metadata.set(otherObject, 'Hello world');

This throws an error when we try to set our data to 42 because it is not a string.

Defining our own types

Just like TypeScript, JSDoc allows us to define and work with our own types. Let’s create a new type called Person that has name, age and hobby properties. Here’s how that looks in TypeScript:

interface Person {
  name: string;
  age: number;
  hobby?: string;
}

In JSDoc, our type would be the following:

/**
 * @typedef Person
 * @property {string} name - The person's name
 * @property {number} age - The person's age
 * @property {string} [hobby] - An optional hobby
 */

We can use the @typedef tag to define our type’s name. Here we define an interface called Person with required name (a string) and age (a number) properties, plus a third, optional property called hobby (a string). To define these properties, we use @property (or the shorthand @prop) inside our comment.

When we choose to apply the Person type to a new object using the @type comment, we get type checking and autocomplete when writing our code. Not only that, we’ll also be told when our object doesn’t adhere to the contract we’ve defined in our file:
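For instance, an object like this (a sketch of what the screenshot shows; the exact values are assumed) would be flagged because the required age property is missing:

/** @type {Person} */
const caleb = {
  name: 'Caleb Williams',
  hobby: 'Running'
};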

Screenshot of an example of TypeScript throwing an error on our vanilla JavaScript object

Now, completing the object will clear the error:

Our object now adheres to the Person interface defined above

Sometimes, however, we don’t want a full-fledged object for a type. For example, we might want to provide a limited set of possible options. In this case, we can take advantage of something called a union type:

/**
 * @typedef {'cat'|'dog'|'fish'} Pet
 */


/**
 * @typedef Person
 * @property {string} name - The person's name
 * @property {number} age - The person's age
 * @property {string} [hobby] - An optional hobby
 * @property {Pet} [pet] - The person's pet
 */

In this example, we have defined a union type called Pet that can be any of the possible options of 'cat', 'dog' or 'fish'. Any other animals in our area are not allowed as pets, so if caleb above tried to adopt a 'kangaroo' into his household, we would get an error:

/** @type {Person} */
const caleb = {
  name: 'Caleb Williams',
  age: 33,
  hobby: 'Running',
  pet: 'kangaroo'
};
Screenshot of an example illustrating that kangaroo is not an allowed pet type

This same technique can be utilized to mix various types in a function:

/**
 * @typedef {'lizard'|'bird'|'spider'} ExoticPet
 */


/**
 * @typedef Person
 * @property {string} name - The person's name
 * @property {number} age - The person's age
 * @property {string} [hobby] - An optional hobby
 * @property {Pet|ExoticPet} [pet] - The person's pet
 */

Now our person type can have either a Pet or an ExoticPet.

Working with generics

There could be times when we don’t want hard and fast types, but a little more flexibility while still writing consistent, strongly-typed code. Enter generic types. The classic example of a generic function is the identity function, which takes an argument and returns it back to the user. In TypeScript, that looks like this:

function identity<T>(target: T): T {
  return target;
}

Here, we are defining a new generic type (T) and telling the computer and our users that the function will return a value that shares a type with whatever the argument target is. This way, we can still pass in a number or a string or an HTMLElement and have the assurance that the returned value is also of that same type.

The same thing is possible with JSDoc notation, using the @template annotation:

/**
 * @template T
 * @param {T} target
 * @return {T}
 */
function identity(target) {
  return target;
}

Generics are a complex topic, but for more detailed documentation on how to utilize them in JSDoc, including examples, you can read the Google Closure Compiler page on the topic.

Type casting

While strong typing is often very helpful, you may find that TypeScript’s built-in expectations don’t quite work for your use case. In that sort of instance, we might need to cast an object to a new type. One instance of when this might be necessary is when working with event listeners.

In TypeScript, all event listeners take a function as a callback where the first argument is an object of type Event, which has a property, target, that is an EventTarget. This is the correct type per the DOM standard, but oftentimes the bit of information we want out of the event’s target doesn’t exist on EventTarget — such as the value property that exists on HTMLInputElement.prototype. That makes the following code invalid:

document.querySelector('input').addEventListener('input', event => {
  console.log(event.target.value);
});

TypeScript will complain that the property value doesn’t exist on EventTarget even though we, as developers, know fully well that an <input> does have a value.

A screenshot showing that value doesn’t exist on type EventTarget

In order for us to tell TypeScript that we know event.target will be an HTMLInputElement, we must cast the object’s type:

document.querySelector('input').addEventListener('input', event => {
  console.log(/** @type {HTMLInputElement} */(event.target).value);
});

Wrapping event.target in parentheses will set it apart from the call to value. Adding the type before the parentheses tells TypeScript that we mean event.target is something different than what it ordinarily expects.

Screenshot of a valid example of type casting in VS Code.

And if a particular object is being problematic, we can always tell TypeScript an object is @type {any} to ignore error messages, although this is generally considered bad practice despite being useful in a pinch.

Wrapping up

TypeScript is an incredibly powerful tool that many developers are using to streamline their workflow around consistent code standards. While most applications will utilize the built-in compiler, some projects might decide that the extra syntax that TypeScript provides gets in the way. Or perhaps they just feel more comfortable sticking to standards rather than being tied to an expanded syntax. In those cases, developers can still get the benefits of utilizing TypeScript’s type system even while writing vanilla JavaScript.




r/webdev - Help with Flask Login (with MongoDB)


Hi everyone,

I think I just need a little guidance on this one.

I have been following along with Pretty Printed’s YouTube tutorial for a Flask MongoDB login setup (https://www.youtube.com/watch?v=vVx1737auSE).

The registration part works fine –

@app.route('/register', methods=['POST', 'GET'])
def register():
    if request.method == 'POST':
        users = mongo.db.users
        existing_user = users.find_one({'name': request.form['username']})

        if existing_user is None:
            hashpass = bcrypt.hashpw(request.form['pass'].encode('utf-8'), bcrypt.gensalt())
            users.insert_one({'name': request.form['username'], 'password': hashpass})
            session['username'] = request.form['username']
            return redirect(url_for('index'))

        return 'That username already exists!'

    return render_template('register.html')

but for some reason, I can’t log back in with the registered user

@app.route('/')
def index():
    if 'username' in session:
        return redirect(url_for('warehouse'))

    return render_template('index.html')


@app.route('/login', methods=['POST', 'GET'])
def log_in():
    if 'username' in session:
        return render_template('warehouse_calendar.html')

    return render_template('login.html')

@app.route('/login-post', methods=['POST'])
def login_post():
    users = mongo.db.users
    login_user = users.find_one({'name': request.form['username']})

    if login_user:
        if bcrypt.hashpw(request.form['pass'].encode('utf-8'), login_user['password'].encode('utf-8')) == login_user['password'].encode('utf-8'):
            session['username'] = request.form['username']
            return redirect(url_for('warehouse'))

    return 'Invalid username/password combo'

When I try to log in, I get the following –

[screenshot of the error]

According to the comments under the YouTube video, if you are using Python 3, you shouldn’t have .encode('utf-8') in this line:

if bcrypt.hashpw(request.form['pass'].encode('utf-8'), login_user['password'].encode('utf-8')) == login_user['password'].encode('utf-8'):

However, if I take it out, I get a Unicode error:

[screenshot of the Unicode error]

I can’t work out where I’ve gone wrong so any advice would be appreciated.

Thank you!




r/graphic_design - halftone process / How to achieve better results?


Can someone guide me in the right direction to achieve better halftone results in Photoshop, much like the reference images below?

I’m currently just running Photoshop’s Pixelate > Color Halftone filter, but am not getting anywhere near the refined results shown below.

Do you guys paint more contrast into the image before converting to halftones, or use gradients to achieve better results? Would love some insight into the process people use to get these better, more refined halftones!

Appreciate any advice,

Cheers.





Chapter 1: Birth | CSS-Tricks


Tim Berners-Lee is fascinated with information. It has been his life’s work. For over four decades, he has sought to understand how it is mapped and stored and transmitted. How it passes from person to person. How the seeds of information become the roots of dramatic change. It is so fundamental to the work that he has done that when he wrote the proposal for what would eventually become the World Wide Web, he called it “Information Management, a Proposal.”

Information is the web’s core function. A series of bytes stream across the world and at the end of it is knowledge. The mechanism for this transfer — what we know as the web — was created by the intersection of two things. The first is the Internet, the technology that makes it all possible. The second is hypertext, the concept that grounds its use. They were brought together by Tim Berners-Lee. And when he was done he did something truly spectacular. He gave it away to everyone to use for free.

When Berners-Lee submitted “Information Management, a Proposal” to his superiors, they returned it with a comment on the top that read simply:

Vague, but exciting…

The web wasn’t a sure thing. Without the hindsight of today it looked far too simple to be effective. In other words, it was a hard sell. Berners-Lee was proficient at many things, but he was never a great salesman. He loved his idea for the web. But he had to convince everybody else to love it too.


Tim Berners-Lee has a mind that races. He has been known — based on interviews and public appearances — to jump from one idea to the next. He is almost always several steps ahead of what he is saying, which is often quite profound. Until recently, he only gave a rare interview here and there, and masked his greatest achievements with humility and a wry British wit.

What is immediately apparent is that Tim Berners-Lee is curious. Curious about everything. It has led him to explore some truly revolutionary ideas before they became truly revolutionary. But it also means that his focus is typically split. It makes it hard for him to hold on to things in his memory. “I’m certainly terrible at names and faces,” he once said in an interview. His original fascination with the elements for the web came from a very personal need to organize his own thoughts and connect them together, disparate and unconnected as they are. It is not at all unusual that when he reached for a metaphor for that organization, he came up with a web.

As a young boy, his curiosity was encouraged. His parents, Conway Berners-Lee and Mary Lee Woods, were mathematicians. They worked on the Ferranti Mark I, the world’s first commercially available computer, in the 1950s. They fondly speak of Berners-Lee as a child, taking things apart, experimenting with amateur engineering projects. There was nothing that he didn’t seek to understand further. Electronics — and computers specifically — were particularly enchanting.

Berners-Lee sometimes tells the story of a conversation he had with his father as a young boy about the limitations of computers making associations between information that was not intrinsically linked. “The idea stayed with me that computers could be much more powerful,” Berners-Lee recalls, “if they could be programmed to link otherwise unconnected information. In an extreme view, the world can be seen as only connections.” He didn’t know it yet, but Berners-Lee had stumbled upon the idea of hypertext at a very early age. It would be several years before he would come back to it.


History is filled with attempts to organize knowledge. An oft-cited example is the Library of Alexandria, a fabled library of Ancient Greece that was thought to have had tens of thousands of meticulously organized texts.


At the turn of the century, Paul Otlet tried something similar in Belgium. His project was called the Répertoire Bibliographique Universel (Universal Bibliography). Otlet and a team of researchers created a library of over 15 million index cards, each with a discrete and small piece of information in topics ranging from science to geography. Otlet devised a sophisticated numbering system that allowed him to link one index card to another. He fielded requests from researchers around the world via mail or telegram, and Otlet’s researchers could follow a trail of linked index cards to find an answer. Once properly linked, information becomes infinitely more useful.

A sudden surge of scientific research in the wake of World War II prompted Vannevar Bush to propose another idea. In his groundbreaking essay in The Atlantic in 1945 entitled “As We May Think,” Bush imagined a mechanical library called a Memex. Like Otlet’s Universal Bibliography, the Memex stored bits of information. But instead of index cards, everything was stored on compact microfilm. Through the process of what he called “associative indexing,” users of the Memex could follow trails of related information through an intricate web of links.

The list of attempts goes on. But it was Ted Nelson who finally gave the concept a name in 1968, two decades after Bush’s article in The Atlantic. He called it hypertext.

Hypertext is, essentially, linked text. Nelson observed that in the real world, we often give meaning to the connections between concepts; it helps us grasp their importance and remember them for later. The proximity of a Post-It to your computer, the orientation of ingredients in your refrigerator, the order of books on your bookshelf. Invisible though they may seem, each of these signifiers holds meaning, whether consciously or subconsciously, and they are only fully realized when taking a step back. Hypertext was a way to bring those same kinds of meaningful connections to the digital world.

Nelson’s primary contribution to hypertext is a number of influential theories and a decades-long project, still in progress, known as Xanadu. Much like the web, Xanadu uses the power of a network to create a global system of links and pages. However, Xanadu puts a far greater emphasis on the ability to trace text to its original author for monetization and attribution purposes. This distinction, known as transclusion, has been a near impossible technological problem to solve.

Nelson’s interest in hypertext stems from the same issue with memory and recall as Berners-Lee. He refers to it is as his hummingbird mind. Nelson finds it hard to hold on to associations he creates in the real world. Hypertext offers a way for him to map associations digitally, so that he can call on them later. Berners-Lee and Nelson met for the first time a couple of years after the web was invented. They exchanged ideas and philosophies, and Berners-Lee was able to thank Nelson for his influential thinking. At the end of the meeting, Berners-Lee asked if he could take a picture. Nelson, in turn, asked for a short video recording. Each was commemorating the moment they knew they would eventually forget. And each turned to technology for a solution.

By the mid-80s, on the wave of innovation in personal computing, there were several hypertext applications out in the wild. The hypertext community — a dedicated group of software engineers that believed in the promise of hypertext — created programs for researchers, academics, and even off-the-shelf personal computers. Every research lab worth their salt had a hypertext project. Together they built entirely new paradigms into their software, processes and concepts that feel wonderfully familiar today but were completely outside the realm of possibilities just a few years earlier.

At Brown University, the very place where Ted Nelson was studying when he coined the term hypertext, Norman Meyrowitz, Nancy Garrett, and Karen Catlin were the first to breathe life into the hyperlink, which was introduced in their program Intermedia. At Symbolics, Janet Walker was toying with the idea of saving links for later, a kind of speed dial for the digital world – something she was calling a bookmark. At the University of Maryland, Ben Shneiderman sought to compile and link the world’s largest source of information with his Interactive Encyclopedia System.

Dame Wendy Hall, at the University of Southampton, sought to extend the life of the link further in her own program, Microcosm. Each link made by the user was stored in a linkbase, a database apart from the main text specifically designed to store metadata about connections. In Microcosm, links could never die, never rot away. If their connection was severed, they could point elsewhere, since links weren’t directly tied to text. You could even write a bit of text alongside links, expanding a bit on why the link was important, or add separate layers of links to a document: one, for instance, a tailored set of carefully curated references for experts on a given topic; the other, a more laid-back set of links for the casual audience.

There were mailing lists and conferences and an entire community that was small, friendly, fiercely competitive and locked in an arms race to find the next big thing. It was impossible not to get swept up in the fervor. Hypertext enabled a new way to store actual, tangible knowledge; with every innovation the digital world became more intricate and expansive and all-encompassing.

Then came the heavy hitters. Under a shroud of mystery, researchers and programmers at the legendary Xerox PARC were building NoteCards. Apple caught wind of the idea and found it so compelling that they shipped their own hypertext application called Hypercard, bundled right into the Mac operating system. If you were a late Apple II user, you likely have fond memories of Hypercard, an interface that allowed you to create a card, and quickly link it to another. Cards could be anything, a recipe maybe, or the prototype of a latest project. And, one by one, you could link those cards up, visually and with no friction, until you had a digital reflection of your ideas.

Towards the end of the 80s, it was clear that hypertext had a bright future. In just a few short years, the software had advanced in leaps and bounds.


After a brief stint studying physics at The Queen’s College, Oxford, Tim Berners-Lee returned to his first love: computers. He eventually found a short-term, six-month contract at the particle physics lab Conseil Européen pour la Recherche Nucléaire (European Council for Nuclear Research), or simply, CERN.

CERN is responsible for a long line of particle physics breakthroughs. Most recently, they built the Large Hadron Collider, which led to the confirmation of the Higgs Boson particle, a.k.a. the “God particle.”

CERN doesn’t operate like most research labs. Its internal staff makes up only a small percentage of the people that use the lab. Any research team from around the world can come and use the CERN facilities, provided that they are able to prove their research fits within the stated goals of the institution. A majority of CERN occupants are from these research teams. CERN is a dynamic, sprawling campus of researchers, ferrying from location to location on bicycles or mine-carts, working on the secrets of the universe. Each team is expected to bring their own equipment and expertise. That includes computers.

Berners-Lee was hired to assist with software on an earlier version of the particle accelerator called the Proton Synchrotron. When he arrived, he was blown away by the amount of pure, unfiltered information that flowed through CERN. It was nearly impossible to keep track of it all and equally impossible to find what you were looking for. Berners-Lee wanted to capture that information and organize it.

His mind flashed back to that conversation with his father all those years ago. What if it were possible to create a computer program that allowed you to make random associations between bits of information? What if you could, in other words, link one thing to another? He began working on a software project on the side for himself. Years later, that would be the same way he built the web. He called this project ENQUIRE, named for a Victorian handbook he had read as a child.

Using a simple prompt, ENQUIRE users could create a block of info, something like Otlet’s index cards all those years ago. And just like the Universal Bibliography, ENQUIRE allowed you to link one block to another. Tools were bundled in to make it easier to zoom back and see the connections between the links. For Berners-Lee this filled a simple need: it replaced the part of his memory that made it impossible for him to remember names and faces with a digital tool.

Compared to the software being actively developed at the University of Southampton or at Xerox or Apple, ENQUIRE was unsophisticated. It lacked a visual interface, and its format was rudimentary. A program like Hypercard supported rich-media and advanced two-way connections. But ENQUIRE was only Berners-Lee’s first experiment with hypertext. He would drop the project when his contract was up at CERN.

Berners-Lee would go and work for himself for several years before returning to CERN. By the time he came back, there would be something much more interesting for him to experiment with. Just around the corner was the Internet.


Packet switching is the single most important invention in the history of the Internet. It is how messages are transmitted over a globally decentralized network. It was discovered almost simultaneously in the late-60s by two different computer scientists, Donald Davies and Paul Baran. Both were interested in the way in which it made networks resilient.

Traditional telecommunications at the time were managed by what is known as circuit switching. With circuit switching, a direct connection is open between the sender and receiver, and the message is sent in its entirety between the two. That connection needs to be persistent and each channel can only carry a single message at a time. That line stays open for the duration of a message and everything is run through a centralized switch. 

If you’re searching for an example of circuit switching, you don’t have to look far. That’s how telephones work (or used to, at least). If you’ve ever seen an old film (or even a TV show like Mad Men) where an operator pulls a plug out of a wall and plugs it back in to connect a telephone call, that’s circuit switching (though that was all eventually automated). Circuit switching works because everything is sent over the wire all at once and through a centralized switch. That’s what the operators are connecting.

Packet switching works differently. Messages are divided into smaller bits, or packets, and sent over the wire a little at a time. They can be sent in any order because each packet has just enough information to know where in the order it belongs. Packets are sent through until the message is complete, and then re-assembled on the other side. There are a few advantages to a packet-switched network. Multiple messages can be sent at the same time over the same connection, split up into little packets. And crucially, the network doesn’t need centralization. Each node in the network can pass around packets to any other node without a central routing system. This made it ideal in a situation that requires extreme adaptability, like in the fallout of an atomic war, Paul Baran’s original reason for devising the concept.

When Davies began shopping around his idea for packet switching to the telecommunications industry, he was shown the door. “I went along to Siemens once and talked to them, and they actually used the words, they accused me of technical — they were really saying that I was being impertinent by suggesting anything like packet switching. I can’t remember the exact words, but it amounted to that, that I was challenging the whole of their authority.” Traditional telephone companies were not at all interested in packet switching. But ARPA was.

ARPA, later known as DARPA, was a research agency embedded in the United States Department of Defense. It was created in the throes of the Cold War — a reaction to the launch of the Sputnik satellite by Russia — but without a core focus. (It was created at the same time as NASA, so launching things into space was already taken.) To adapt to their situation, ARPA recruited research teams from colleges around the country. They acted as a coordinator and mediator between several active university research projects with a military focus.

ARPA’s organization had one surprising and crucial side effect. It was comprised mostly of professors and graduate students who were working at its partner universities. The general attitude was that as long as you could prove some sort of modest relation to a military application, you could pitch your project for funding. As a result, ARPA was filled with lots of ambitious and free-thinking individuals working inside of a buttoned-up government agency, with little oversight, coming up with the craziest and most world-changing ideas they could. “We expected that a professional crew would show up eventually to take over the problems we were dealing with,” recalls Bob Kahn, an ARPA programmer critical to the invention of the Internet. The “professionals” never showed up.

One of those professors was Leonard Kleinrock at UCLA. He was involved in the first stages of ARPANET, the network that would eventually become the Internet. His job was to help implement the most controversial part of the project, the still theoretical concept known as packet switching, which enabled a decentralized and efficient design for the ARPANET network. It is likely that the Internet would not have taken shape without it. Once packet switching was implemented, everything came together quickly. By the early 1980s, it was simply called the Internet. By the end of the 1980s, the Internet went commercial and global, including a node at CERN.


The first applications of the Internet are still in use today. FTP, used for transferring files over the network, was one of the first things built. Email is another one. It had been around for a couple of decades on a closed system already. When the Internet began to spread, email became networked and infinitely more useful.

Other projects were aimed at making the Internet more accessible. They had names like Archie, Gopher, and WAIS, and have largely been forgotten. They were united by a common goal of bringing some order to the chaos of a decentralized system. WAIS and Archie did so by indexing the documents put on the Internet to make them searchable and findable by users. Gopher did so with a structured, hierarchical system. 

Kleinrock was there when the first message was ever sent over the Internet. He was supervising that part of the project, and even then, he knew what a revolutionary moment it was. However, he is quick to note that not everybody shared that feeling in the beginning. He recalls the sentiment held by the titans of the telecommunications industry like the Bell Telephone Company. “They said, ‘Little boy, go away,’ so we went away.” Most felt that the project would go nowhere, nothing more than a technological fad.

In other words, no one was paying much attention to what was going on and no one saw the Internet as much of a threat. So when that group of professors and graduate students tried to convince their higher-ups to let the whole thing be free — to let anyone implement the protocols of the Internet without a need for licenses or license fees — they didn’t get much pushback. The Internet slipped into public use and only the true technocratic dreamers of the late 20th century could have predicted what would happen next.


Berners-Lee returned to CERN in a fellowship position in 1984. It was four years after he had left. A lot had changed. CERN had developed their own network, known as CERNET, but by 1989, it had hooked up to the new, internationally standard Internet. “In 1989, I thought,” he recalls, “look, it would be so much easier if everybody asking me questions all the time could just read my database, and it would be so much nicer if I could find out what these guys are doing by just jumping into a similar database of information for them.” Put another way, he wanted to share his own homepage, and get a link to everyone else’s.

What he needed was a way for researchers to share these “databases” without having to think much about how it all works. His way in with management was operating systems. CERN’s research teams all brought their own equipment, including computers, and there was no way to guarantee they were all running the same OS. Interoperability between operating systems is a difficult problem by design; generally speaking, the goal of an OS is to lock you in. Among its many other uses, a globally networked hypertext system like the web was a wonderful way for researchers to share notes between computers using different operating systems.

However, Berners-Lee had a bit of trouble explaining his idea. He’s never exactly been concise. By 1989, when he wrote “Information Management, a Proposal,” Berners-Lee already had worldwide ambitions. The document is thousands of words, filled with diagrams and charts. It jumps energetically from one idea to the next without fully explaining what’s just been said. Much of what would eventually become the web was included in the document, but it was just too big of an idea. It was met with a lukewarm response — that “Vague, but exciting” comment scrawled across the top.

A year later, in May of 1990, at the encouragement of his boss Mike Sendall (the author of that comment), Berners-Lee circulated the proposal again. This time it was enough to buy him a bit of time internally to work on it. He got lucky. Sendall understood his ambition and aptitude. He wouldn’t always get that kind of chance. The web needed to be marketed internally as an invaluable tool. CERN needed to need it. Taking complex ideas and boiling them down to their most salient, marketable points, however, was not Berners-Lee’s strength. For that, he was going to need a partner. He found one in Robert Cailliau.

Cailliau was a CERN veteran. By 1989, he’d worked there as a programmer for over 15 years. He’d embedded himself in the company culture, proving a useful resource helping teams organize their informational toolset and knowledge-sharing systems. He had helped several teams at CERN do exactly the kind of thing Berners-Lee was proposing, though at a smaller scale.

Temperamentally, Cailliau was about as different from Berners-Lee as you could get. He was hyper-organized and fastidious. He knew how to sell things internally, and he had made plenty of political inroads at CERN. What he shared with Berners-Lee was an almost insatiable curiosity. During his time as a nurse in the Belgian military, he got fidgety. “When there was slack at work, rather than sit in the infirmary twiddling my thumbs, I went and got myself some time on the computer there.” He ended up as a programmer in the military, working on war games and computerized models. He couldn’t help but look for the next big thing.

In the late 80s, Cailliau had a strong interest in hypertext. He was taking a look at Apple’s Hypercard as a potential internal documentation system at CERN when he caught wind of Berners-Lee’s proposal. He immediately recognized its potential.

Working alongside Berners-Lee, Cailliau pieced together a new proposal. Something more concise, more understandable, and more marketable. While Berners-Lee began putting together the technologies that would ultimately become the web, Cailliau began trying to sell the idea to interested parties inside of CERN.

The web, in all of its modern uses and ubiquity, can be difficult to define as just one thing — we have the web on our refrigerators now. In the beginning, however, the web was made up of only a few essential features.

There was the web server, a computer wired to the Internet that can transmit documents and media (webpages) to other computers. Webpages are served via HTTP, a protocol designed by Berners-Lee in the earliest iterations of the web. HTTP is a layer on top of the Internet, and was designed to make things as simple, and resilient, as possible. HTTP is so simple that it forgets a request as soon as it has made it. It has no memory of the webpages it’s served in the past. The only thing HTTP is concerned with is the request it’s currently making. That makes it magnificently easy to use.

These webpages are sent to browsers, the software that you’re using to read this article. Browsers can read documents handed to them by a server because they understand HTML, another early invention of Tim Berners-Lee. HTML is a markup language; it allows programmers to give meaning to their documents so that they can be understood. The “H” in HTML stands for Hypertext. Like HTTP, HTML — all of the building blocks programmers can use to structure a document — wasn’t all that complex, especially when compared to other hypertext applications of the time. HTML comes from a long line of other, similar markup languages, but Berners-Lee expanded it to include the link, in the form of an anchor tag. The <a> tag is the most important piece of HTML because it serves the web’s greatest function: to link together information.

The hyperlink was made possible by the Universal Resource Identifier (URI), later renamed the Uniform Resource Identifier after the IETF found the word “universal” a bit too substantial. But for Berners-Lee, that was exactly the point. “Its universality is essential: the fact that a hypertext link can point to anything, be it personal, local or global, be it draft or highly polished,” he wrote in his personal history of the web. Of all the original technologies that made up the web, Berners-Lee — and several others — have noted that the URL was the most important.

By Christmas of 1990, Tim Berners-Lee had all of that built. A full prototype of the web was ready to go.

Cailliau, meanwhile, had had a bit of success trying to sell the idea to his bosses. He had hoped that his revised proposal would give him a team and some time. Instead he got six months and a single staff member, intern Nicola Pellow. Pellow was new to CERN, on placement for her mathematics degree. But her work on the Line Mode Browser, which enabled people from around the world using any operating system to browse the web, proved a crucial element in the web’s early success. Berners-Lee’s work, combined with the Line Mode Browser, became the web’s first set of tools. It was ready to show to the world.


When the team at CERN submitted a paper on the World Wide Web to the San Antonio Hypertext Conference in 1991, it was soundly rejected. They went anyway, and set up a table with a computer to demo it to conference attendees. One attendee remarked:

They have chutzpah calling that the World Wide Web!

The most striking thing about the web was that it was not at all sophisticated. Its use of hypertext was elementary, allowing only for simple text-based links. And without two-way links, pretty much a given in hypertext applications, links could go dead at any minute. There was no linkbase, or sophisticated metadata assigned to links. There was just the anchor tag. The protocols that ran on top of the Internet were similarly basic. HTTP only allowed for a handful of actions, and alternatives like Gopher or WAIS offered far more options for advanced connections through the Internet network.

It was hard to explain, difficult to demo, and had overly lofty ambition. It was created by a man who didn’t have much interest in marketing his ideas. Even the name was somewhat absurd. “WWW” is one of only a handful of acronyms that actually takes longer to say than the full “World Wide Web.”

We know how this story ends. The web won. It’s used by billions of people and runs through everything we do. It is among the most remarkable technological achievements of the 20th century.

It had a few advantages, of course. It was instantly global and widely accessible thanks to the Internet. And the URL — and its uniqueness — is one of the more clever concepts to come from networked computing.

But if you want to truly understand why the web succeeded we have to come back to information. One of Berners-Lee’s deepest held beliefs is that information is incredibly powerful, and that it deserves to be free. He believed that the Web could deliver on that promise. For it to do that, the web would need to spread.

Berners-Lee looked to his predecessors for inspiration: the Internet. The Internet succeeded, in part, because its creators gave it away to everyone. After considering several licensing options, he lobbied CERN to release the web unlicensed to the general public. CERN, an organization far more interested in particle physics breakthroughs than hypertext, agreed. In 1993, the web officially entered the public domain.

And that was the turning point. They didn’t know it then, but that was the moment the web succeeded. When Berners-Lee was able to make globally available information truly free.

In an interview some years ago, Berners-Lee recalled how it was that the web came to be.

I had the idea for it. I defined how it would work. But it was actually created by people.

That may sound like humility from one of the world’s great thinkers — and it is that a little — but it is also the truth. The web was Berners-Lee’s gift to the world. He gave it to us, and we made it what it was. He and his team fought hard at CERN to make that happen.

Berners-Lee knew that with the resources available to him he would never be able to spread the web sufficiently outside of the hallways of CERN. Instead, he packaged up all the code that was needed to build a browser into a library called libwww and posted it to a Usenet group. That was enough for some people to get interested in browsers. But before browsers would be useful, you needed something to browse.

Enjoy learning about web history with stories just like this? Jay is telling the full story of the web, with new stories every 2 weeks. Sign up for his newsletter to catch up on the latest… of what’s past.



Source link

When to Use Go vs. Java | One Programmer’s Take on Two Top L...
Strategy

When to Use Go vs. Java | One Programmer’s Take on Two Top L…


Go vs. Java

I can honestly say I have enjoyed working with Java for quite some time now. I built up my expertise in software development working with backend technologies like EJB2, DB2, and Oracle, and over my years at Spiral Scout I moved toward natural language processing-based bots, working with Spring Boot, Redis, RabbitMQ, OpenNLP, IBM Watson, and UIMA. For years, Java was my language of choice, and it worked effectively, even being enjoyable to use at times.

Testing Out Go to Start

In early 2017, I took over a very interesting project that revolved around programming automated systems for monitoring and growing hydroponic plants. The original code base for the software incorporated Go gateways for three different systems — Windows, MacOS, and ARM.

Completely new to Go, I found my job quickly evolving into a mix of learning the language and simultaneously implementing it as the project progressed. The challenge was daunting, especially because of the convoluted structure of the existing code base. Supporting a program with platform-specific parts written in CGo for three different operating systems essentially meant deploying, testing, and running maintenance for three different systems. In addition, the code was written with a singleton design pattern, making the systems heavily interdependent, often unpredictable, and rather difficult to understand. In the end, I opted to engineer a new version of the gateway with a more Java-esque approach, and that too ended up being rather ugly and confounding.

When I moved to Spiral Scout, where I currently serve as a tech lead for one of our biggest U.S. clients, I stopped trying to tap into my Java wheelhouse when developing with Go. Instead, I decided to embrace the language by developing in the most Go-like way possible and having fun with it. I found it to be an innovative and comprehensive language, and our team still uses it daily for a variety of projects.

Like with any programming language, however, Go has its weak points and disadvantages, and I won’t lie, there are times where I really miss Java.

If my experience with programming has taught me anything, it’s that there is no silver bullet when it comes to software development. I’ll detail below how one traditional language and one new kid on the block worked in my experience.

Go vs. Java: The Similarities

Go and Java are both C-family languages, which means they share a similar syntax. That’s why Java developers often find reading Go code fairly easy, and vice versa. Go does not use a semicolon (‘;’) at the end of a statement, though, except in occasional cases where it is needed. To me, the line-delimited statements of Go feel clearer and more readable.

Go and Java also both use one of my favorite features, a garbage collector (GC), to help prevent memory leaks. Unlike C++ programmers, who have to worry about memory leaks themselves, Go and Java programmers get a garbage collector that automates memory management and therefore simplifies their job.

Go’s GC does not use the Weak Generational Hypothesis; however, it still performs very well and has a very short Stop-the-World (STW) pause. With version 1.5, the STW pause was reduced even further and became essentially constant, and with version 1.8, it dropped to less than 1 ms.

Go’s GC only has a few settings, though: the sole GOGC variable, which sets the initial garbage collection target percentage. In Java, you have four different garbage collectors with tons of settings for each.
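
As a rough illustration, the same knob can also be set from code via the standard library (a minimal sketch; the 200 here is an arbitrary example value):

    package main

    import (
        "fmt"
        "runtime/debug"
    )

    func main() {
        // Equivalent to setting the GOGC environment variable;
        // returns the previous target percentage (100 by default).
        old := debug.SetGCPercent(200)
        fmt.Println("previous GC target percentage:", old)
    }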

While Java and Go are both considered cross-platform, Java needs the Java Virtual Machine (JVM) to interpret compiled code, while Go simply compiles the code into a binary file for any given platform. I actually consider Java less platform-dependent than Go, because Go requires you to create a binary file every time you compile code for a single platform. Compiling binaries for different platforms separately is quite time-consuming from a testing and DevOps point of view, and cross-platform Go compilation does not work in certain cases, especially when we use CGo parts. Meanwhile, with Java, you can use the same jar anywhere you have the JVM. Go, on the other hand, requires less RAM and no additional considerations around installing and managing a virtual machine.
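
For illustration, producing a binary per platform looks something like this (the output names are hypothetical):

    GOOS=linux   GOARCH=amd64 go build -o app-linux
    GOOS=darwin  GOARCH=amd64 go build -o app-macos
    GOOS=windows GOARCH=amd64 go build -o app.exe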

Reflection. Unlike Java, where reflection is convenient, popular, and commonly used, Go’s reflection seems more complicated and less obvious. Java is an object-oriented language, so everything except primitives is considered an object. If you want to use reflection, you can get the class of an object and pull the desired information from it, like this (a minimal sketch; myObject stands in for any Java object):
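
    import java.lang.reflect.Constructor;
    import java.lang.reflect.Field;
    import java.lang.reflect.Method;

    Object myObject = "any object";  // illustrative stand-in
    Class<?> clazz = myObject.getClass();
    Constructor<?>[] constructors = clazz.getConstructors();
    Method[] methods = clazz.getMethods();
    Field[] fields = clazz.getDeclaredFields();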

This gives you access to constructors, methods, and properties so that you can invoke or set them.

In Go there is no notion of a class, and a structure contains only the declared fields. So we need the “reflect” package in order to get at the desired information (a minimal sketch with a hypothetical User struct):

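    package main

    import (
        "fmt"
        "reflect"
    )

    // User is a hypothetical struct used for illustration.
    type User struct {
        Name string
        Age  int
    }

    func main() {
        t := reflect.TypeOf(User{})
        for i := 0; i < t.NumField(); i++ {
            f := t.Field(i)
            fmt.Println(f.Name, f.Type) // each field's name and type
        }
    }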

I realize this isn’t a huge problem, but since there are no constructors for structures in Go, the result is that many primitive types have to be processed separately, and pointers have to be taken into account. In Go, we can also pass something by pointer or by value. A Go structure can have a function as a field, though not a method. All this makes reflection in Go more complex and less usable.

Accessibility. Java has private, protected, and public modifiers in order to provide scopes with different access levels for data, methods, and objects. Go has exported and unexported identifiers, which are similar to public and private modifiers in Java. There are no modifier keywords: anything that starts with an uppercase letter is exported and will be visible in other packages, while unexported (lowercase) variables and functions will only be visible from the current package.
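
A minimal sketch (the package and its functions are hypothetical):

    package geometry

    // Area starts with an uppercase letter: exported, visible to other packages.
    func Area(w, h float64) float64 { return w * h }

    // perimeter starts with a lowercase letter: visible only inside geometry.
    func perimeter(w, h float64) float64 { return 2 * (w + h) }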

Go vs. Java: The Big Differences

Golang is not an OOP language. At its core, Go lacks Java’s inheritance because it does not implement traditional polymorphism by inheritance. In fact, it has no objects, only structures. It can simulate some object-oriented patterns by providing interfaces and implementing those interfaces for structures. Also, you can embed structures in each other, but embedded structures do not have any access to the hosting structure’s data and methods. Go uses composition instead of inheritance in order to combine desired behavior and data.
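
A minimal sketch of that composition style (Logger and Server are hypothetical types):

    package main

    import "fmt"

    type Logger struct{}

    func (Logger) Log(msg string) { fmt.Println(msg) }

    // Server gains Log by composition; Logger knows nothing about Server.
    type Server struct {
        Logger
        Addr string
    }

    func main() {
        s := Server{Addr: ":8080"}
        s.Log("starting on " + s.Addr) // promoted method from the embedded Logger
    }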

Go is an imperative language, and Java tends to be a declarative one. In Go, we don’t have anything like dependency injection; instead, we have to wrap everything together explicitly. That’s why the recommended approach to programming in Go is to use as little magic as possible. Everything should be really obvious to an external code reviewer, and a programmer should understand all the mechanisms for how the Go code uses memory, the file system, and other resources.

Java, on the other hand, requires more of a developer’s attention toward custom-writing the business logic portion of a program to determine how data is created, filtered, changed, and stored. As far as system infrastructure and database management are concerned, all of that is done by configuration and annotations via generic frameworks like Spring Boot. We concern ourselves less with the boring drudgery of repeating infrastructure code and leave that to the framework. This is convenient but also inverts control and limits our ability to optimize the overall process.

The order of the variable’s definition. In Java you can write something like:
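
    int count = 5; // the type first, then the name (illustrative)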

. . . but in Go you would write:
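
    var count int = 5 // the name first, then the type (illustrative)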

This was obviously confusing when I first started working with Go.

The Pros of Using Go

Simple and elegant concurrency. Go has a powerful model of concurrency called “communicating sequential processes,” or CSP. Go uses an m:n scheduler, which allows m concurrent executions (goroutines) to run on n system threads. A concurrent routine can be started in a very basic way, simply by using a keyword of the language, the same word as the language’s name. For example, a coder can write:
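
    go doMyWork() // the "go" keyword starts doMyWork in a new goroutine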

. . . and the function doMyWork() will start executing concurrently.

Communication between the processes can be done with shared memory (not recommended) or with channels. This allows for very robust and fluid parallelism that uses as many processor cores as we define with the GOMAXPROCS environment variable. By default, the number is equal to the number of cores.
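
A minimal sketch of channel-based communication (doMyWork here is a hypothetical worker):

    package main

    import "fmt"

    func doMyWork(results chan<- string) {
        results <- "done" // send the result back over the channel
    }

    func main() {
        results := make(chan string)
        go doMyWork(results)   // runs concurrently with main
        fmt.Println(<-results) // blocks until doMyWork sends
    }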

Go provides a special mode to run a binary with a check for race conditions. This way you can test and prove that your software is concurrency safe (main.go here is a hypothetical entry point):
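
    go run -race main.go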

This will run the application in race detection mode.

I really appreciate that Go provides very useful, basic functionality right out of the box (https://golang.org/dl/). One go-to example of this is the synchronization package “sync” (https://golang.org/pkg/sync/) for concurrency. For “Once”-type singleton implementations you can write something like this (a minimal sketch with a hypothetical Config type):
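
    package main

    import (
        "fmt"
        "sync"
    )

    // Config is a hypothetical singleton type.
    type Config struct{ Name string }

    var (
        once     sync.Once
        instance *Config
    )

    func GetConfig() *Config {
        once.Do(func() { // runs at most once, even under heavy concurrency
            instance = &Config{Name: "prod"}
        })
        return instance
    }

    func main() { fmt.Println(GetConfig().Name) }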

Package sync also provides a structure for a concurrent map implementation, mutexes, condition variables, and wait groups. The “atomic” package (https://golang.org/pkg/sync/atomic/) additionally allows for concurrency-safe conversions and math operations — essentially everything we need for making concurrency-ready code.

Pointers. With pointers, Go allows more control over how to allocate memory, the garbage collector’s payload, and other interesting performance tweaks that are impossible in Java. Go feels like a lower-level language than Java and favors much easier and faster performance optimizations.
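
A minimal sketch of the kind of control this gives (Big is a hypothetical type):

    package main

    import "fmt"

    type Big struct{ buf [1 << 20]byte }

    // process takes a pointer, so the megabyte-sized value is not copied.
    func process(b *Big) { b.buf[0] = 1 }

    func main() {
        var b Big
        process(&b)
        fmt.Println(b.buf[0]) // 1
    }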

Duck typing. “If it walks like a duck and it quacks like a duck, then it must be a duck.” This saying holds true with Go: there is no need to declare that a certain structure implements a given interface. If the structure has methods with the same signatures as a given interface, then it implements it. This is very helpful. As the client of a library, you can define any interfaces you need for external library structures. In Java, an object has to explicitly declare that it implements the interface.
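
A minimal sketch (Duck and Mallard are hypothetical names):

    package main

    import "fmt"

    type Duck interface {
        Quack() string
    }

    // Mallard never declares that it implements Duck.
    type Mallard struct{}

    func (Mallard) Quack() string { return "quack" }

    func main() {
        var d Duck = Mallard{} // compiles: the method set matches
        fmt.Println(d.Quack())
    }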

The profiler. Go’s profiling tools make analyzing performance issues convenient, quick, and easy. The profiler in Go helps reveal the memory allocations and CPU usage for all parts of a program and can illustrate them in a visual graph, making it extremely easy to take action on performance optimization. Java also has many profilers, starting with Java VisualVM, but they are not as simple as the Go profiler. Their efficacy instead depends on the work of the JVM, so the statistics obtained from them correlate with the work of the garbage collector.
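
For instance, the standard net/http/pprof package exposes the profiler over HTTP (a minimal sketch; the port is arbitrary):

    package main

    import (
        "log"
        "net/http"
        _ "net/http/pprof" // registers the /debug/pprof/ handlers as a side effect
    )

    func main() {
        // Then, e.g.: go tool pprof http://localhost:6060/debug/pprof/profile
        log.Fatal(http.ListenAndServe("localhost:6060", nil))
    }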

CGO. Go allows for a very simple yet powerful integration with C, so you are able to write platform-dependent applications with snippets of C code inside of your Go project. Essentially, CGo enables developers to create Go packages that call C code. There are various builder options to exclude or include C code snippets for a given platform, which allows for multi-platform realizations of applications.
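
A minimal sketch of calling into C (here, libc’s sqrt; building it requires a C toolchain):

    package main

    /*
    #include <math.h>
    */
    import "C"

    import "fmt"

    func main() {
        // The C pseudo-package exposes whatever the comment above includes.
        fmt.Println(C.sqrt(C.double(2)))
    }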

Function as an argument. A Go function may be used as a variable, passed into another function, or serve as a field of a structure. This versatility is refreshing. Starting with version 1.8, Java incorporates lambdas; they are not really functions but single-method objects, which facilitates a similar behavior. In Go, the idea existed from the very beginning.
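
A minimal sketch (Op and Calculator are hypothetical):

    package main

    import "fmt"

    // Op is a function type that can be stored, passed, and called.
    type Op func(a, b int) int

    type Calculator struct {
        Apply Op // a function as a structure field
    }

    func main() {
        add := func(a, b int) int { return a + b }
        c := Calculator{Apply: add}
        fmt.Println(c.Apply(2, 3)) // 5
    }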

Clear guidelines for the code style. The community behind Go is supportive and passionate. There is a ton of information out there about the best Go way to do things, with examples and explanations: https://golang.org/doc/effective_go.html

Functions can return multiple values. This is also quite useful and nice.
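
A minimal sketch (the function is illustrative and assumes fmt is imported):

    // divide returns both a result and an error.
    func divide(a, b int) (int, error) {
        if b == 0 {
            return 0, fmt.Errorf("division by zero")
        }
        return a / b, nil
    }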

The Cons of Using Go

No polymorphism except with interfaces. There is no ad-hoc polymorphism (function overloading) in Go, which means that if you have two functions in the same package with different arguments but the same sense, then you will have to give them different names. For example, with this code . . .
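
    // Hypothetical names: Go forces a distinct name for each argument type.
    func SumInts(a, b int) int           { return a + b }
    func SumFloats(a, b float64) float64 { return a + b }
    func SumStrings(a, b string) string  { return a + b }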

… you wind up with many methods doing the same thing but all with different and ugly names.

Additionally, there is no polymorphism by inheritance. If you embed a structure, then the embedded structure knows only its own methods; it does not know the methods of the “hosting” structure. This is particularly challenging for developers like me who transitioned to Go after working mostly with OOP languages, where one of the most basic concepts is inheritance.

But over time, I started to realize that this approach to polymorphism is just another way of thinking, and it makes sense: in the end, composition is more reliable, more evident, and changeable at run time.

Error handling. It is completely up to you which errors are returned and how they are returned, so as the developer you need to return the error every time and pass it up accordingly. Unsurprisingly, errors may be hidden, which can be a real pain. Remembering to check for errors and pass them up feels annoying and unsafe.
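
The pervasive pattern looks something like this (a sketch; assumes os and fmt are imported, and the file name is hypothetical):

    f, err := os.Open("config.json")
    if err != nil {
        return fmt.Errorf("opening config: %w", err) // pass the error up with context
    }
    defer f.Close()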

Of course, you can use a linter to check for hidden errors, but that is more of a patch than a real solution. In Java, exceptions are much more convenient. You don’t even have to add an exception to a function’s signature if it is a RuntimeException.

No generics. While convenient, generics add complexity and were considered costly by Go’s creators when it came down to the type system and runtime. When building in Go, you essentially have to repeat yourself for different types or use code generation.

No annotations. While compile-time annotations can be partially substituted with code generation, runtime annotations unfortunately cannot be substituted at all. This makes sense, because Go is not declarative and there shouldn’t be any magic in the code. I liked using annotations in Java though, because they made the code much more elegant, simple, and minimalistic.

They would be very useful for providing aspects or metadata for HTTP server endpoints, with swagger file generation later on. In Go, you currently have to make the swagger file manually or provide special comments for the endpoint functions. This is quite a pain every time you change your API. However, annotations in Java are a kind of magic where people often do not care how exactly they work.

Dependency management in Go. I previously wrote about dependency management in Go using vgo and dep. It is a great synopsis of the issues and describes my biggest issue with Go, and honestly, I’m not the only one who feels this way. The Go dependency management landscape has looked rather rocky for some time. Initially, there was no dependency management beyond “Gopkg,” but the “vendor” experiment was eventually released, which was later replaced with “vgo” and then, with version 1.11, “go mod”. Nowadays the go.mod file descriptor can be changed both manually and by various Go commands, like “go get”; this, unfortunately, makes the dependencies unstable.
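
For reference, a minimal go.mod (the module path is hypothetical):

    module github.com/spiralscout/example

    go 1.13

    require github.com/pkg/errors v0.8.1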

There is also no mirroring of the sources provided by the dependency management mechanism out of the box. It’s somewhat of a pity, especially because Java has awesome declarative tools for dependency management like Maven and Gradle, which also serve to build, deploy, and handle other CI/CD purposes. With Go, we effectively have to custom-build the dependency management we want using Makefiles, docker-compose files, and bash scripts, which only complicates the CI/CD process and its stability.

Go microservices often start in containers and end up running locally, in a virtual Linux machine, and on different platforms at the same time. Sometimes this makes the CI/CD work for development and production cycles more complicated than it needs to be.

The names of Go packages include the hosting domain name. For example (the package name here is hypothetical):
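
    import "by.spirascout.public/mylib" // hypothetical package; the path doubles as the hosting URL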

This is really strange and especially inconvenient, since you cannot replace someone’s implementation with your own without changing imports across the entire project’s code base.

In Java, the imports often start with the company name, like:
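
    import com.spiralscout.logging.Logger; // hypothetical; the package name needn't match any host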

The difference is that in Go, go get will go to by.spirascout.public and try to fetch the resource, whereas in Java the package and domain names do not have to correlate.

I do hope that all the problems with dependency management are temporary and will be resolved in the most effective way in the future.

Final Thoughts

One of the most interesting aspects of Go is the code naming guidelines it follows. They were based on the psychology of code readability approach (check out this article).

With Go, you can write very clear and maintainable code with separated approaches and while it is a language of many words, it still remains clear and evident.

Working at a Golang web development company has clearly shown me that Go is fast, robust, and easy to understand which makes it very good for small services and concurrent processing. For large, intricate systems, services with more complex functionality, and single-server systems, Java will hold its own among the top programming languages in the world for the time being.

While Java SE has become a paid product, Go is truly a child of the programming community. Actually, there are a ton of differently branded JVMs, but Go’s toolchain is the same one everywhere.

Bottom line: Different tasks need different tools.

If you are interested in learning more about how we do software development at Spiral Scout, are interested in joining our team, or if you just want to ask me a specific question about this article, please reach out to me at Spiral Scout using the email . I look forward to helping where I can. Best of luck with your own project and don’t forget to keep learning and trying new things!

Go vs Java infographic



Source link