r/graphic_design - CMYK Conundrum! Seeking Guidance


I’m working on finalizing a color palette for a client and I haven’t been able to land on a serviceable CMYK value for most of the colors. Here’s an explanation of what I’m talking about:


On the left, you can see samples from the Adobe Color website and on the right, samples from Adobe Illustrator. 

  • On the top row, the RGB and Hex values are shown: in both spaces, the colors are looking good. 

  • On the bottom row, Illustrator converts the RGB values to those CMYK values. The right side looks good, but plugging those same values into the website looks way off (far too bright).

  • And in the middle row, we’re looking at Adobe Color’s CMYK conversion of the RGB values. It looks good on that website, but when I bring those values into Illustrator, I get a muddier, bluer color.

I have 17 color values that this affects so I’m really unsure of the best path forward. My thinking is:

  1. I could try to determine a CMYK value that works both on that Adobe Color website (and others) and in Illustrator.

  2. I could transition all values to Pantone values.
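For background on why option 1 is so tricky: the naive RGB→CMYK formula is pure arithmetic (a sketch below), but Illustrator and Adobe Color each convert through their own ICC color profiles on top of it, so the same RGB can yield different CMYK numbers, and the same CMYK numbers can preview differently.

```javascript
// Naive, profile-agnostic RGB -> CMYK conversion (inputs 0..255, outputs 0..1).
// Real converters (Illustrator, Adobe Color) apply ICC profiles on top of
// this, which is why their CMYK numbers and previews disagree.
function rgbToCmyk(r, g, b) {
  const rn = r / 255, gn = g / 255, bn = b / 255;
  const k = 1 - Math.max(rn, gn, bn);
  if (k === 1) return { c: 0, m: 0, y: 0, k: 1 }; // pure black
  return {
    c: (1 - rn - k) / (1 - k),
    m: (1 - gn - k) / (1 - k),
    y: (1 - bn - k) / (1 - k),
    k,
  };
}
```

Pure red, for instance, maps to C0 M100 Y100 K0 here, while profile-aware tools will usually land on slightly different numbers for the same color.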

Looking for any help in this department, as I’ve never experienced an issue this drastic. One clue that I think created this issue: while I was developing the palette, I realized [Illustrator > Edit > Assign Profile] was set to “Display” instead of “sRGB IEC61966-2.1”, which it is currently. If it helps, here’s a screenshot of the full palette with Illustrator-generated CMYK values noted:

[screenshot of the full palette]

Thanks in advance!


r/webdev - QUESTION: setting up Less via craco to work with Antd components


Hey everyone,

I am trying to incorporate Ant design components into a project I am working on for school. I want to be able to style these components via less styles.

I used the craco-less package for my installation. I also have a craco.config.js file set up.

[screenshot of craco.config.js]

In my main index.js file I have:

[screenshot of the import in index.js]

So this set up works but when I change the import to:

[screenshot of the changed import]

Then I see the Antd components but I lose their styles.

Any thoughts as to how to remedy this problem? I would like to practice styling with Less.
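For reference, the usual culprit is the craco.config.js wiring rather than the import itself. A sketch of the setup as documented by craco-less (the modifyVars override is just an illustrative example):

```javascript
// craco.config.js — sketch assuming the craco-less package is installed
const CracoLessPlugin = require('craco-less');

module.exports = {
  plugins: [
    {
      plugin: CracoLessPlugin,
      options: {
        lessLoaderOptions: {
          lessOptions: {
            // antd's Less files need JS evaluation enabled
            javascriptEnabled: true,
            // example: override an antd theme variable
            modifyVars: { '@primary-color': '#1DA57A' },
          },
        },
      },
    },
  ],
};
```

With this in place, importing 'antd/dist/antd.less' in index.js should bring the styles back; without the plugin picking up .less files, that import compiles to nothing, which matches losing the component styles.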


What Google’s New Page Experience Update Means for Images on...


It’s easy to forget that, as a search engine, Google doesn’t just compare keywords to generate search results. Google knows that if people don’t enjoy their experience on a web page, they won’t stay on the page long enough to consume the content — no matter how relevant it is.

As a result, Google has been experimenting with ways to analyze the user experience of web pages using quantifiable metrics. By factoring these into its search engine rankings, Google hopes to provide users not only with great, relevant content but with awesome user experiences as well.

Google’s soon-to-be-launched page experience update is a major step in this direction. Website owners with image-heavy websites need to be particularly vigilant to adapt to these changes or risk falling by the wayside. In this article, we’ll talk about everything you need to know regarding this update, and how you can take full advantage of it.

Note: Google introduced its plans for Page Experience in May 2020 and announced in November 2020 that the update would begin rolling out in May 2021. However, Google has since delayed those plans in favor of a gradual rollout starting in mid-June 2021. This was done to give website admins time to deal with the shifting conditions brought about by the COVID-19 pandemic first.

Some Background

Before we get into the latest iteration of changes to how Google factors user experience metrics into search engine rankings, let’s get some context. In April 2020, Google made its most pivotal move in this direction yet by introducing a new initiative: core web vitals.

Core web vitals (CWV) were introduced to help web developers deal with the challenges of trying to optimize for search engine rankings using testable metrics – something that’s difficult to do with a highly subjective thing like user experience.

To do this, Google identified three key metrics (what it calls “user-centric performance metrics”). These are:

  1. LCP (Largest Contentful Paint): The largest element above the fold when a web page initially loads. Typically, this is a large featured image or header. How fast the largest content element loads plays a huge role in how fast the user perceives the overall loading speed of the page.
  2. FID (First Input Delay): The time it takes between when a user first interacts with the page and when the main thread is free for the browser to process the event. This can be clicking/tapping a button, link, or interacting with any other dynamic element. Delays when interacting with a page can obviously be frustrating to users which is why keeping FID low is crucial.
  3. CLS (Cumulative Layout Shift): A measure of the visual stability of a page as it first loads. The algorithm takes into account the size of elements and the distance they move relative to the viewport. Pages that load with high instability can cause mis-clicks by users, also leading to frustrating situations.
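To make CLS concrete, here is a simplified sketch of how layout-shift entries (which a browser reports via PerformanceObserver) are aggregated: shifts occurring within 1 second of each other, inside windows capped at 5 seconds, form “sessions”; CLS is the largest session total; and shifts caused by recent user input are excluded.

```javascript
// Simplified CLS aggregation. Each entry: { value, startTime (ms), hadRecentInput }.
// Shifts within 1s of the previous shift, in a window capped at 5s, share a
// session; CLS is the largest session sum; input-driven shifts don't count.
function computeCLS(entries) {
  let maxSession = 0;
  let session = 0, sessionStart = 0, prevTime = -Infinity;
  for (const e of entries) {
    if (e.hadRecentInput) continue; // exclude shifts right after user input
    if (e.startTime - prevTime > 1000 || e.startTime - sessionStart > 5000) {
      session = 0;                  // start a new session window
      sessionStart = e.startTime;
    }
    session += e.value;
    prevTime = e.startTime;
    maxSession = Math.max(maxSession, session);
  }
  return maxSession;
}
```

Two shifts half a second apart add together; a shift three seconds later starts a fresh session and only counts on its own.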

These metrics have evolved from more rudimentary ones that have been in use for some time, such as SI (Speed Index), FCP (First Contentful Paint), TTI (Time-to-interactive), etc.

The reason this is important is because images can play a significant role in how your website’s CWVs score. For example, the LCP is more often than not an above-the-fold image or, at the very least, will have to compete with an image to be loaded first. Images that aren’t correctly used can also negatively impact CLS. Slow-loading images can also impact the FID by adding further delays to the overall rendering of the page.

What’s more, this came on the back of Google’s renewed focus on mobile-first indexing. So, not only are these metrics important for your website, but you have to ensure that your pages score well on mobile devices as well.

It’s clear that, in general, Google is increasingly prioritizing user experience when it comes to search engine rankings. Which brings us to the latest update – Google now plans to incorporate page experience as a ranking factor, starting with an early rollout in mid-June 2021.

So, what is page experience? In short, it’s a ranking signal that combines data from a number of metrics to try and determine how good or bad the user experience of a web page is. It consists of the following factors:

  • Core Web Vitals: Using the same, unchanged core web vitals. Google has established guidelines and recommended ranges that you can find here. You need an overall “good” CWV rating to qualify for a “good” page experience score.
  • Mobile Usability: A URL must have no mobile usability errors to qualify for a “good” page experience score.
  • Security Issues: Any flagged security issues will disqualify websites.
  • HTTPS: Pages must be served via HTTPS to qualify.
  • Ad Experience: Measures to what degree ads negatively affect the user experience on your web page, for example, by being intrusive, distracting, etc.

As part of this change, Google announced its intention to include a visual indicator, or badge, that highlights web pages that have passed its page experience criteria. This will be similar to previous badges the search engine has used to promote AMP (Accelerated Mobile Pages) or mobile-friendly pages.

This official recognition will give high-performing web pages a massive advantage in the highly competitive arena that is Google’s SERPs. This visual cue will undoubtedly boost CTRs and organic traffic, especially for sites that already rank well. This feature may drop as soon as May if it passes its current trial phase.

Another bit of good news for non-AMP users is that all pages will now become eligible for Top Stories in both the browser and Google News app. Although Google states that pages can qualify for Top Stories “irrespective of its Core Web Vitals score or page experience status,” it’s hard to imagine this not playing a role for eligibility now or down the line.

Key Takeaway: What Does This Mean For Images on Your Website?

Google noted a 70% surge in usage of its Lighthouse and PageSpeed Insights tools, showing that website owners are catching on to the importance of optimizing their pages. This means that standards will only get higher when competing with other websites for search engine rankings.

Google has reaffirmed that, despite these changes, content is still king. However, content is more than just the text on your pages, and truly engaging and user-friendly content also consists of thoughtfully used media, the majority of which will likely be images.

With the proposed page experience badges and Top Stories eligibility up for grabs, the stakes have never been higher to rank highly with the Google Search algorithm. You need every advantage that you can get. And, as I’m about to show, optimizing your image assets can have a tangible effect on scoring well according to these metrics.

What Can You Do To Keep Up?

Before I propose my solution to help you optimize image assets for core web vitals, let’s look at why images are often detrimental to performance:

  • Images bloat the overall size of your website pages, especially if the images are unoptimized (i.e. uncompressed, not properly sized, etc.)
  • Images need to be responsive to different devices. You need much smaller image sizes to maintain the same visual quality on smaller screens.
  • Different contexts (browsers, OSs, etc.) have different formats for optimally rendering images. However, most images are still used in .JPG/.PNG format.
  • Website owners don’t always know about the best practices associated with using images on website pages, such as always explicitly specifying width/height attributes.
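Several of these best practices show up directly in markup. A sketch (filenames hypothetical): explicit width/height so the browser can reserve space (protecting CLS), srcset/sizes for responsive selection, and lazy loading for below-the-fold images:

```html
<!-- width/height let the browser reserve space before the image loads -->
<img
  src="/images/hero-800.jpg"
  srcset="/images/hero-400.jpg 400w,
          /images/hero-800.jpg 800w,
          /images/hero-1600.jpg 1600w"
  sizes="(max-width: 600px) 100vw, 800px"
  width="800" height="450"
  loading="lazy"
  alt="Product hero image" />
```

(Skip loading="lazy" on the LCP image itself so the browser doesn’t deprioritize it.)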

Using conventional methods, it can take a lot of blood, sweat, and tears to tackle these issues. Most solutions, such as manually editing images or hard-coding responsive syntax, scale poorly, are hard to update or adjust when things change, and bloat your development pipeline.

To optimize your image assets, particularly with a focus on improving CWVs, you need to:

  • Reduce image payloads
  • Implement effective caching
  • Speed up delivery
  • Transform images into optimal next-gen formats
  • Ensure images are responsive
  • Implement run-time logic to apply the optimal setting in different contexts

Luckily, there is a class of tools designed specifically to solve these challenges and provide these solutions — image CDNs. Particularly, I want to focus on ImageEngine which has consistently outperformed other CDNs on page performance tests I’ve conducted.

ImageEngine is an intelligent, device-aware image CDN that you can use to serve your website images (including GIFs). ImageEngine uses WURFL device detection to analyze the context your website pages are accessed from (device, screen size, DPI, OS, browser, etc.) and optimize your image assets accordingly. Based on these criteria, it can optimize images by intelligently resizing, reformatting, and compressing them.

It’s a completely automatic, set-it-and-forget-it solution that requires little to no intervention once it’s set up. The CDN has over 20 global PoPs with the optimization logic built into the edge servers for faster delivery across different regions. ImageEngine claims to achieve cache-hit ratios as high as 98%+ as well as to reduce image payloads by 75%+.

Step-by-Step Test + How to Use ImageEngine to Improve Page Experience

To illustrate the difference using an image CDN like ImageEngine can make, I’ll show you a practical test.

First, let’s take a look at how a page with a massive amount of image content scores using PageSpeed Insights. It’s a simple page, but consists of a large number of high-quality images with some interactive elements, such as buttons and links as well as text.

FID is unique because it relies on data from real-world interactions users have with your website. As a result, FID can only be collected “in the field.” If your website has enough traffic, you can get FID data by generating a Page Experience report in Google Search Console.

However, for lab results from tools like Lighthouse or PageSpeed Insights, we can surmise the impact of blocking resources by looking at TTI (Time to Interactive) and TBT (Total Blocking Time).

Oh, yes, and I’ll also be focusing on the results of a mobile audit, for a number of reasons:

  1. Google itself is prioritizing mobile signals and mobile-first indexing
  2. Optimizing web pages and image assets is often most challenging for mobile devices and general responsiveness
  3. It provides the opportunity to show the maximum improvement an image CDN can provide

With that in mind, here are the results for our page:

So, as you can see, we have some issues. Helpfully, PageSpeed Insights flags the two CWVs present, LCP and CLS. As you can see, because of the huge image payload (roughly 35 MB), we have a ridiculous LCP of nearly 1 minute.

Because of the straightforward layout and the fact that I did explicitly give images width and height attributes, our page happened to be stable with a 0 CLS. However, it’s important to realize that slow loading images can also impact the perceived stability of your pages. So, even if you can’t directly improve on CLS, the faster sizable elements such as images load, the better the overall experience for real-world users.

TTI and TBT were also sub-par. It takes at least two seconds from the moment the first element appears on the page until the page can start to become interactive.

As you can see from the opportunities for improvement and diagnostics, almost all issues were image-related:

Setting Up ImageEngine and Testing the Results

Ok, so now it’s time to add ImageEngine into the mix and see how it improves performance metrics on the same page.

Setting up ImageEngine on nearly any website is relatively straightforward. First, go to ImageEngine.io and sign up for a free trial. Just follow the simple three-step signup process, where you will need to:

  1. provide the website you want to optimize,
  2. specify the web location where your images are stored, and then
  3. copy the delivery address ImageEngine assigns to you.

The latter will be in the format of {random string}.cdn.imgeng.in but can be updated from within the ImageEngine dashboard.

To serve images via this domain, simply go back to your website markup and update the <img> src attributes. For example:


<img src="mywebsite.com/images/header-image.jpg"/>

becomes:

<img src="myimages.cdn.imgeng.in/images/header-image.jpg"/>

That’s all you need to do. ImageEngine will now automatically pull your images and dynamically optimize them for best results when visitors view your website pages. You can check the official integration guides in the documentation on how to use ImageEngine with Magento, Shopify, Drupal, and more. There is also an official WordPress plugin.

Here are the results for my test page once ImageEngine is set up:

As you can see, the results are nearly flawless. All metrics were improved, scoring in the green – even Speed Index and LCP which were significantly affected by the large images.

As a result, there were no more opportunities for improvement. And, as you can see, ImageEngine reduced the total page payload to 968 kB, cutting down image content by roughly 90%:


To some extent, it’s more of the same from Google, which has consistently been moving in a mobile direction and employing a growing list of metrics to home in on the best possible “page experience” for its search engine users. Along with reaffirming its move in this direction, Google has stated that it will continue to test and revise its list of signals.

Other metrics that already surface in Google’s tools, such as TTFB, TTI, FCP, and TBT, or possibly entirely new metrics, may play even larger roles in future updates.

Finding solutions that help you score highly for these metrics now and in the future is key to staying ahead in this highly competitive environment. While image optimization is just one facet, it can have major implications, especially for image-heavy sites.

An image CDN like ImageEngine can solve almost all issues related to image content with minimal time and effort, while also future-proofing your website against upcoming updates.



WebRTC Use Cases, Challenges, and Trends


What is WebRTC?

An open-source project released by Google in 2011, WebRTC provides API-based communication between web browsers and mobile applications, including transmission of audio, video, and data. By eliminating the need for native plugins and app installations, it makes these connections user-friendly, and it is supported by all the major browsers and mobile operating systems.

The adoption of WebRTC in the tech community has grown dramatically in the past few years. Facebook, Amazon, and Google are among the significant technology companies that have implemented WebRTC to make their web applications faster, more reliable, and more secure.

WebRTC features are also provided in off-the-shelf solutions that can be easily integrated with other software. A good example is OpenTok, a PaaS for live communications from our business partners at the former TokBox (now Vonage). We have successfully used it in many solutions for our clients, including an advanced authentication service based on biometric techniques.

As was already mentioned in the summary, the key characteristic of WebRTC is that it is a simple yet complex technology. The simplicity comes down to ease of implementation: five to ten lines of code are enough to set up peer-to-peer video communication between two browsers. The complexity is related to the specifics of WebRTC, which must be adapted to different browsers, and to the fact that it is hard to configure when it doesn’t work correctly. Also, to obtain the desired result, you should be aware of STUN, TURN, and NAT.

  • STUN is a standardized set of methods, including a network protocol, for traversal of NAT (network address translator) gateways in real-time voice, video, messaging, and other interactive applications. Why do we need it?
  • STUN is needed when we connect two browsers that don’t have public IP addresses. Each browser connects to a STUN server to discover its external IP address and port, and the browsers then exchange these addresses so they can reach each other.
  • TURN goes a step further: it relays the traffic through itself, without modifying or changing it in any way. This approach lets us connect two peers even over TCP (a more reliable but slower protocol than UDP). Notably, about 15% of calls cannot be made without TURN.
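In application code, STUN and TURN appear as ICE server entries in the configuration passed to the browser’s RTCPeerConnection. A sketch (the STUN URL is Google’s well-known public server; the TURN server and credentials are placeholders):

```javascript
// ICE configuration: STUN to discover our public address,
// TURN as a relay fallback when no direct path is possible.
const rtcConfig = {
  iceServers: [
    { urls: 'stun:stun.l.google.com:19302' },
    {
      urls: 'turn:turn.example.com:3478', // placeholder relay server
      username: 'demo-user',              // placeholder credentials
      credential: 'demo-pass',
    },
  ],
};

// In the browser: const pc = new RTCPeerConnection(rtcConfig);
```

The browser tries direct (STUN-assisted) candidates first and falls back to the TURN relay only when it has to.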

Now that you know what WebRTC is, let’s plunge into history to understand when and how the technology appeared and in which cases it can be used. We’ll also go over the pros and cons of the technology, examples of WebRTC solutions, and in-demand WebRTC apps. By default, these applications are based on peer-to-peer communication; if we need to organize group calls or live streaming, it’s mandatory to use a server that operates as a protocol client.

How Does WebRTC Work?

The primary focus of WebRTC is to provide real-time audio and video communication between participants who use web browsers to start conversations, locate each other, and bypass firewalls.

WebRTC utilizes JavaScript APIs and HTML5, being embedded within a browser. The typical features of a WebRTC application are as follows:

  • Send and receive streaming audio and video.
  • Retrieve network configuration data (e.g., IP addresses, application ports, firewalls, and NATs) needed to send and receive data to another client using the WebRTC API.
  • Open/close connections and report errors.
  • Exchange media metadata, e.g., image resolution and video codecs.


To send and receive streams of data, WebRTC provides the following APIs that can be used in web applications:

  • RTCPeerConnection for audio and video transmissions, encryption, and bandwidth configuration
  • RTCDataChannel for transmission of generic data
  • MediaStream for access to multimedia data streams from such devices as digital cameras, webcams, microphones, or shared desktops

A set of standards for the use of WebRTC in software is currently being developed by the Internet Engineering Task Force and the Web Real-Time Communications Working Group.

WebRTC Under the Hood

WebRTC is primarily just a way to send and receive UDP packets inside browsers. WebRTC also knows about media transfer, both audio and video, and it can connect two clients directly, peer-to-peer. Developers admit that under the hood, WebRTC is a fairly simple thing: open a UDP port, learn the partner’s IP and port, and wrap the traffic in RTP.

Let’s talk about what happens between the capture from the camera and the video playback on the screen. This process consists of 7 basic steps:

1. Capture of Camera

The browser has an API that allows us to ask users for access to the camera or microphone: navigator.getUserMedia => MediaStream. The main difficulty is that we can’t immediately send media streams to the interlocutor, because uncompressed they weigh a lot. For example, one 640×480 image in BMP format weighs about 1.2 MB. At 30 such images per second, one second of video weighs about 36 MB, so the bit rate would be roughly 288 Mbps. Data must be compressed for transfer, so the next step, coding, is mandatory.
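The arithmetic behind those figures, assuming 4 bytes per pixel (which is what puts a 640×480 frame near 1.2 MB):

```javascript
// Uncompressed video bit rate for 640x480 at 30 fps, 4 bytes per pixel
const width = 640, height = 480, bytesPerPixel = 4, fps = 30;

const bytesPerFrame = width * height * bytesPerPixel; // 1,228,800 bytes ≈ 1.2 MB
const bytesPerSecond = bytesPerFrame * fps;           // 36,864,000 bytes ≈ 36.9 MB/s
const megabitsPerSecond = (bytesPerSecond * 8) / 1e6; // ≈ 295 Mbps

console.log(bytesPerFrame, bytesPerSecond, megabitsPerSecond);
```

The article’s 288 Mbps comes from rounding the per-frame size down to 1.2 MB before multiplying.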

2. Coding

In simple terms, codecs compress audio and video streams. There is a broad set of such codecs, and some of them are available in WebRTC. Let’s take VP9 as an example; this codec is used for coding images in WebRTC. It can transmit images at 1280×720 resolution, compressing them so that 30 frames per second fit into about 1.5 Mbps. How does VP9 do this?

Instead of constantly sending full images, VP9 encodes the differences between them. We get a keyframe at the output, while the following interframes represent only the differences from the previous frame. More motion in the frame means more data.

At the base of the chain, a keyframe carries information about all pixels, and each interframe represents the difference from the previous state. If we lose even one interframe in the chain, we cannot draw the interframes that follow.
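A toy sketch of the keyframe/interframe idea: the keyframe stores every pixel, each interframe stores only the pixels that changed, and decoding each interframe depends on the fully reconstructed previous frame, which is why one lost interframe breaks everything after it.

```javascript
// Toy delta coding: frames are arrays of pixel values.
// encode() emits a keyframe plus interframes of [index, newValue] changes.
function encode(frames) {
  const key = frames[0];
  const inters = [];
  let prev = key;
  for (const frame of frames.slice(1)) {
    const delta = [];
    frame.forEach((v, i) => { if (v !== prev[i]) delta.push([i, v]); });
    inters.push(delta);
    prev = frame;
  }
  return { key, inters };
}

// decode() rebuilds frames; every interframe needs the previous decoded state.
function decode({ key, inters }) {
  const frames = [key.slice()];
  let prev = key.slice();
  for (const delta of inters) {
    const frame = prev.slice();
    for (const [i, v] of delta) frame[i] = v;
    frames.push(frame);
    prev = frame;
  }
  return frames;
}
```

Round-tripping a sequence through encode and decode reproduces the original frames exactly, as long as no interframe is lost.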

3. Packing in RTP

Data is packed in RTP (Real-time Transport Protocol), which carries information about the order of the packets. This is a mandatory step because packets can arrive in a different order or even be lost; we need sequence numbers to reproduce them in the correct order. RTP also stores timing information that allows synchronization of audio and video tracks. These extra RTP details add a small overhead of about 5%.

There is an extension of the primary protocol named RTCP. It serves to exchange information about lost packets and reception statistics.

4. Network Transmission Over UDP

Data is sent as formed UDP packets. If we compare UDP and TCP, UDP’s main advantage is the minimal interval between packets. UDP has a few disadvantages: packets get lost, arrive late, or end up in the wrong order.

5. Unpacking RTP

The order of packets is restored at this stage. The video traffic is received and passed to the decoder.

6. Decoding

Data arrives in the correct order, and at the output we get a pure video stream: MediaStream.

7. Drawing on the Screen

We attach the stream to a video element and get the image. During peer-to-peer communication between two browsers, you will sometimes notice that the video breaks up into squares or freezes. The reason is packet loss, caused by various problems:

  •  Random loss or a lossy network (in simple terms, some packets get lost in the walls of the house).
  •  Packets dropped by mistake (bugs in the OS or network equipment).
  •  Network congestion.


To achieve stable video communication, we need to work around packet loss. Four main techniques help implement this:

1. Jitter Buffer

We render one RTT later, which leaves time to request a missing packet. In the case of a massive loss, the freeze is shorter because there is more time to request a keyframe. The main downside of this approach is the additional constant delay.
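A toy sketch of the reordering side of a jitter buffer: sort buffered packets by sequence number and report the gaps that would trigger a retransmission request (real jitter buffers also delay playback and handle sequence-number wraparound):

```javascript
// Reorder received packets by RTP sequence number and find missing ones.
function reorder(packets) {
  const ordered = [...packets].sort((a, b) => a.seq - b.seq);
  const missing = [];
  for (let i = 1; i < ordered.length; i++) {
    // any sequence numbers skipped between neighbors are lost packets
    for (let s = ordered[i - 1].seq + 1; s < ordered[i].seq; s++) missing.push(s);
  }
  return { ordered, missing };
}
```

Given packets 2, 5, 3 arriving out of order, this yields playback order 2, 3, 5 and flags packet 4 as missing.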

2. Decrease the Bitrate

Bitrate = FPS * quality * resolution

We can manipulate bitrate by changing any of these parameters.    

3. Forward Error Correction

The codec duplicates some data, so when the data is sent to the client, it includes certain duplicates. This can exacerbate network congestion, but it gives us a higher chance of delivering content on the first attempt.
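The simplest flavor of forward error correction is XOR parity: send one parity packet per group, and any single lost packet in the group can be rebuilt from the parity plus the surviving packets, with no retransmission. A toy sketch over byte arrays:

```javascript
// XOR parity FEC over a group of equal-length packets (arrays of bytes).
function makeParity(packets) {
  const parity = new Array(packets[0].length).fill(0);
  for (const p of packets) p.forEach((byte, i) => { parity[i] ^= byte; });
  return parity;
}

// Recover one lost packet by XORing the parity with the surviving packets.
function recover(survivors, parity) {
  const lost = parity.slice();
  for (const p of survivors) p.forEach((byte, i) => { lost[i] ^= byte; });
  return lost;
}
```

This trades extra bandwidth (the parity packet) for the ability to survive one loss per group, which is exactly the congestion-versus-delivery trade-off described above.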

4. Network Tuning

  • The best network routes (we can design networks to make the routes optimal, with the media server selected according to the lowest ping).
  • Setting up servers and routers.

Pros and Cons of WebRTC Technology

The main advantages of WebRTC are:

  1. There are implementations for all platforms.
  2. Using modern audio and video codecs promotes high-quality communication.   
  3. Secure and encrypted DTLS and SRTP connections.
  4. There is a built-in mechanism of content grabbing (desktop sharing).
  5. P2P = End-to-end encryption.
  6. Browsers negotiate with each other directly.
  7. The flexibility of implementation of management interface based on HTML5 and JavaScript.
  8. Open-source.
  9. Versatility: a standard-based application works well on any OS as long as the browser supports WebRTC.

The conditional disadvantage of WebRTC is the high cost of maintenance, which is tied to the need for powerful servers.

Business Use Cases and Examples of WebRTC 

As was already mentioned in the article, the basis for Web Real-Time Communication is video chat. Services with audio and video calls and data sharing are the primary types of applications involving WebRTC technologies; the most famous examples are WhatsApp, Google Hangouts, and Facebook Messenger. But if we piece all the business cases and examples of WebRTC together, we find that there are many areas of use.

The technology is in high demand in telehealth, surveillance and remote monitoring, online education, the Internet of Things, virtual reality gaming, streaming, online games with voice communications, betting, emergency response, and more.

MobiDev has repeatedly faced the need to apply WebRTC in different niches. One of the most notable use cases is remote assistance via shared AR and WebRTC. The two-way connection is organized here thanks to WebRTC, which is used for peer-to-peer communication and helps to avoid server overload. The essence of the case boils down to the fact that real-time two-way communication combined with AR helps to solve assistance tasks in many areas.

The simplest example is the repair and maintenance of any equipment. In this case, WebRTC app development is combined with our experience working with Augmented Reality.

The Future of WebRTC: Trends and Predictions

According to Market Study Report, the global WebRTC market’s size is predicted to reach $16,570.5 million in 2026. Let us recall that in 2016 the worldwide market value of products using WebRTC was $10.7 billion. The turning point for WebRTC came in 2017 when Microsoft Edge and iOS Safari 11 began supporting it.     

In terms of global coverage, the WebRTC market spans North America, Europe, Asia, the Middle East, South America, and Africa. North America is expected to remain the dominant region, owing to easy access to high-speed internet and a massive number of mobile device owners.

Nowadays, Google puts great effort into the development of Web Real-Time Communication, so the future of WebRTC looks bright. It is easy to verify this by evaluating Google’s investments in the technology: all of them are directed at code optimization and at expanding or improving the feature set.

The main trends related to WebRTC in 2021-2022 are:

  1. WebRTC, which is a W3C standard, will develop rapidly.
  2. The meeting sizes provided by WebRTC will grow, which increases the complexity of solutions. Notably, 1,000 users in a meeting is a real challenge that needs a new architecture.
  3. Additional tools like background blur and noise suppression have already been developed and will be improved in the future; these tools are tied to the implementation of WebRTC in Chrome. The pandemic triggered their boom.
  4. A great deal of activity connected to user privacy and application security will take place.
  5. The VP9 and AV1 codecs will be modernized.

The future of WebRTC is associated with the emergence of technology in new markets. Furthermore, as long as WebRTC is a W3C standard, anybody can influence its development, which implies great prospects.


r/web_design - Grid, Flexbox, or Floats?


Suppose I have the following page design:

[image of the page design]

Basically, they’re all vertical cards arranged next to each other, with animations (sliding from left to right) when a card opens.

Which CSS technique is best for achieving this layout? Are there any examples of layouts similar to this? I’m having a hard time planning how to do the layout.

Keeping in mind I don’t have to support older browsers.
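For what it’s worth, one flexbox-based sketch of this kind of layout (class names hypothetical): cards share a row, and animating flex-grow slides the opening card wider while its siblings shrink. Grid would also work for placement, but flex makes the sliding animation a one-liner.

```css
.cards { display: flex; gap: 8px; }
.card  { flex: 1 1 0; overflow: hidden; transition: flex-grow 0.3s ease; }
.card.open { flex-grow: 4; } /* the open card expands; siblings slide aside */
```

Toggling the open class (e.g. on click) produces the left-to-right sliding effect without any manual position math.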


r/web_design - UX Question: How to not stack cards

I’m a full-stack dev with 0 design experience or education. I’m trying to design some pages out for my startup in figma.

There are probably a lot of things wrong with this page, and I suppose you can feel free to pick it apart and let me know why this is a bad workflow, but I have a specific question around the ‘Scheduled Actions’ block.

I like the gray block around all of the business logic on the page. I feel like that’s a thing; I’d love to be educated on how to separate ‘this is where you’re working’ from ‘this is navigating you’ if it is not a thing. But with the colored background, I feel like I have to put every piece into a card. With Scheduled Actions, I feel like each action should be its own card, and hence I’m getting into stacked cards. Maybe these could be grouped without a card and just a flat separator line, but I think it’s better for each to be a card.

Anyways, I don’t think stacked cards are right, so I’d love to know a better approach to this need. Trying to stick to solutions that are available in default Material packages (Vuetify, to be specific)

Also, I couldn’t figure out how to google questions like this (I searched things like ‘how to structure input-heavy pages’ and got firstname/lastname fields with a 2008 look and feel from every link), so if you can point me toward resources that would answer these questions on a more general level, that’d be much appreciated.

PS: Any example websites that have a similar feel to this would be much appreciated (i.e. stepper to take you through a lot of input/setup). I couldn’t think of any spots on my normal consumer-facing apps that I use where this exists, but it seems pretty standard for B2B SaaS stuff.


Source link

Geospatial Technology Cover

It’s About Location: Developers Draw on Geospatial Tech One …

Apps that give users a good reason to keep coming back tend to operate with up-to-the-minute data to do everything from guiding a drone to tracking a global health pandemic’s path. That data is increasingly location-based, be it maps, demographics, routing, or geocoding. A developer might only need one or two of these location services to give users what they want, and that’s where pay-as-you-go location services have entered the market.


As location data is increasingly necessary for in-demand apps, developers at some of the most innovative businesses are already using PaaS to take advantage of location data.

Here are 6 new ways:

1. Basemaps

Cartographers and engineers have created and curated a vast library of ready-made maps with rich, authoritative data. Developers can integrate this data into applications with just a couple of lines of code. Maps with a neutral background and a rich foreground emphasize human geography such as streets, as well as topography and blended elevation data. Developers can also customize maps with colors, patterns, and labels.
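The article doesn’t name a specific basemap API, but the “couple of lines of code” claim rests on the fact that most web basemaps share the standard XYZ (Web Mercator “slippy map”) tile scheme. As an illustration of the math under the hood, not any particular vendor’s API, here is the well-known conversion from a coordinate to a tile index:

```python
import math

def latlon_to_tile(lat: float, lon: float, zoom: int) -> tuple[int, int]:
    """Convert a WGS84 coordinate to XYZ (slippy map) tile indices at a zoom level."""
    n = 2 ** zoom  # tiles per axis at this zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    # Web Mercator y: 0 at the north edge, n at the south edge
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# Tile covering central Paris at zoom 12
print(latlon_to_tile(48.8566, 2.3522, 12))  # → (2074, 1409)
```

A tile server then serves the image at a URL built from `zoom/x/y`, which is why embedding a basemap is typically only a map-object constructor plus a tile-layer call.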

2. Volumes of Data

Developers can integrate demographic and statistical data such as income, spending, market segmentation, and psychographic data into their apps. A GeoEnrichment service helps analyze user-defined study areas and sites around the world for additional location-based context. This includes data describing people, places, and businesses.
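The article names a GeoEnrichment service without showing its interface. As a sketch of the underlying idea, aggregating demographic records that fall inside a circular study area around a site, here is a minimal version; the function names and the population figures are illustrative assumptions, not the actual service:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS84 points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def enrich_site(site, records, radius_km):
    """Sum a demographic attribute over records inside a circular study area."""
    lat, lon = site
    return sum(rec["population"] for rec in records
               if haversine_km(lat, lon, rec["lat"], rec["lon"]) <= radius_km)

# Toy block-level population records (made up for illustration)
blocks = [
    {"lat": 40.71, "lon": -74.00, "population": 1200},
    {"lat": 40.72, "lon": -74.01, "population": 800},
    {"lat": 41.50, "lon": -73.50, "population": 5000},  # well outside the study area
]
print(enrich_site((40.715, -74.005), blocks, radius_km=2.0))  # → 2000
```

A production service runs the same kind of spatial filter and aggregation against authoritative datasets on the server side.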

For adding map layers built from global data, developers connect their apps to rich data collections with thousands of options, such as imagery, demographics, political boundaries, deep-learning models, and planetary indicators.

Real-time live feeds such as traffic, current weather, and information about recent events (flooding, wildfires, and other natural calamities) are also available, as are high-resolution images, including historical imagery, for visualizing change over time and performing analysis.

3. Data Visualization

Apps that include 2D and 3D data-driven maps help users discover unique patterns and relationships. Developers can build apps that include models of buildings, landscapes, cities, or the entire globe. They can also apply smart mapping to both 2D and 3D data with just a few lines of code to cut down development time.
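“Smart mapping” ultimately rests on classification: assigning each data value to a styling class that drives color or size. As a minimal sketch of one common scheme, equal-interval class breaks (the income values below are hypothetical):

```python
def equal_interval_breaks(values, classes):
    """Compute equal-interval upper bounds for choropleth-style classification."""
    lo, hi = min(values), max(values)
    step = (hi - lo) / classes
    return [lo + step * i for i in range(1, classes + 1)]

def classify(value, breaks):
    """Return the index of the first class whose upper bound contains value."""
    for i, upper in enumerate(breaks):
        if value <= upper:
            return i
    return len(breaks) - 1  # guard against float rounding at the top end

incomes = [12, 18, 25, 31, 44, 52, 60]           # e.g. median income in $1000s
breaks = equal_interval_breaks(incomes, 4)        # [24.0, 36.0, 48.0, 60.0]
print([classify(v, breaks) for v in incomes])     # → [0, 0, 1, 1, 2, 3, 3]
```

A renderer then maps each class index to a color ramp entry, which is why data-driven styling can be expressed in a few lines.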

4. Geocoding and Search

Often an app needs to search for a location by name, or to look up multiple addresses at once. Location services can accurately display the results on a map, and can also work in reverse, returning textual descriptions such as the nearest address, intersection, or place-name for a set of coordinates.
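The coordinates-to-place-name direction can be sketched as a nearest-neighbour search over a gazetteer. The tiny in-memory table below is a hypothetical stand-in for a real service’s index of millions of records, and the distance uses an equirectangular approximation that is adequate at city scale:

```python
import math

# Hypothetical place gazetteer: (name, latitude, longitude)
PLACES = [
    ("Eiffel Tower", 48.8584, 2.2945),
    ("Louvre", 48.8606, 2.3376),
    ("Notre-Dame", 48.8530, 2.3499),
]

def reverse_geocode(lat, lon):
    """Return the nearest known place-name for a coordinate pair."""
    def dist(place):
        _, plat, plon = place
        # Equirectangular approximation: fine over short distances
        x = math.radians(plon - lon) * math.cos(math.radians((plat + lat) / 2))
        y = math.radians(plat - lat)
        return math.hypot(x, y)
    return min(PLACES, key=dist)[0]

print(reverse_geocode(48.853, 2.35))  # → Notre-Dame
```

Real services replace the linear scan with a spatial index, but the contract is the same: coordinates in, nearest named feature out.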

5. Routing and Directions

Developers call on location services to build apps that can find routes and generate turn-by-turn directions. More advanced apps must be able to perform intelligent network analysis while applying real-world constraints such as traffic, U-turns, road barriers, incidents, and maximum permitted vehicle height.

In addition to building in point-to-point routing, a developer can make an app that routes multiple vehicles, determining which stops should be serviced by each route and in what sequence the stops should be visited. The same app could also map service areas to determine which locations can be reached within a given time or distance.

This type of location service can also help a company find the best places to do business, identify the closest facilities to minimize travel costs, and create an origin-destination cost matrix to determine the least costly paths.
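Under the hood, both point-to-point routing and an origin-destination cost matrix reduce to shortest-path searches over a road graph. A minimal sketch using Dijkstra’s algorithm on a toy network (the node names and travel times are made up, and real networks add the traffic and restriction constraints described above):

```python
import heapq

def dijkstra(graph, source):
    """Least travel cost from source to every reachable node."""
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def od_cost_matrix(graph, origins, destinations):
    """Origin-destination cost matrix: one shortest-path tree per origin."""
    return {o: {d: dijkstra(graph, o).get(d, float("inf")) for d in destinations}
            for o in origins}

# Toy road network: node -> [(neighbour, travel minutes)]
roads = {
    "depot": [("A", 4), ("B", 7)],
    "A": [("B", 2), ("C", 5)],
    "B": [("C", 3)],
    "C": [],
}
print(od_cost_matrix(roads, ["depot"], ["C"]))  # → {'depot': {'C': 9.0}}
```

Service areas fall out of the same computation: keep every node whose distance from the facility is under the time or distance cutoff.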

6. Spatial Analytics

Spatial analytics tools support people who need to see patterns, trends, and other relationships in their data. They provide a highly interactive experience for the user and help scale massive amounts of data for the app developer. This includes big data, real-time analytics, advanced spatial tools, machine learning, and deep learning capabilities.
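A basic building block behind many of these spatial tools is the point-in-polygon test, for example counting events that fall inside a user-defined study area. Here is the classic ray-casting version on hypothetical projected coordinates (real analytics engines index and parallelize this, but the geometry is the same):

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: is (x, y) inside the polygon (list of (x, y) vertices)?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does the edge cross the horizontal ray extending right from (x, y)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical rectangular study area and event locations (projected units)
area = [(0, 0), (10, 0), (10, 10), (0, 10)]
events = [(2, 3), (5, 5), (12, 1), (-1, 4)]
print(sum(point_in_polygon(x, y, area) for x, y in events))  # → 2
```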

Businesses Developing a Location Edge

Two startups have recently showcased the value of location services and the flexibility they offer to developers who want to take advantage of location-based data.

Drone pilots, for one, face a fragmented set of strict rules about the locations they can fly because of federal and local air restrictions inside the United States. Developers at startup Airspace Link set out to make flying a more seamless process for its users. With routing and direction data to accurately pinpoint a pilot’s location, flight path, and related limits, the Airspace Link app helps drone operators send a final flight plan to the US Federal Aviation Administration for authorization.

Developers at Geospark Analytics haven’t needed to trace individual travel paths but are trying to help users observe events in real-time and assess risks. Customers have come to rely on the company to make decisions based on what is happening globally. Early in the pandemic, it tracked a developing health crisis in China based on what were reported to be pneumonia cases. The app harvests a trove of publicly available data from social media, news outlets, weather, and economic sources, adding geotags along the way, as its machine learning models help make sense of it all. The company’s use of spatial analytics services allows its users — many in government, defense, intelligence, and Fortune 100 companies — to visualize the risks in the app.

Both Airspace Link and Geospark Analytics built their apps using PaaS, which allows them, and all developers, to integrate only what they need using their APIs of choice.

Source link