Live Streaming (High-Level Design)

Live Streaming Service Primer – DZone Web Dev


Streaming services are gaining popularity day by day. Some of the early pioneers in video streaming that come to mind are Netflix and YouTube (both gaining popularity in the mid-2000s). At present, there are plenty of streaming services across the world, and in the next decade or so, cable network television could become passé. 

Though Netflix doesn’t support live streaming, there are other platforms like YouTube and Facebook that support this feature. This article covers a brief overview of a Live Streaming Service, the various protocols that support streaming, and a high-level system design of the service itself.

Typically, live streaming involves a combination of sophisticated hardware and software components, which would be impossible to cover in this single article. But due consideration is given to them in the high-level design so that we can get a feel for the different components/services that are required to build a streaming service.

Note: there are differences between live streaming and a streaming service. Streaming of sporting events, public functions, online gaming sessions, etc. falls under live streaming services. Pure content delivery platforms like Netflix, YouTube, Amazon Prime, etc. are streaming services. This article covers live streaming services.

Protocol Glossary

Before getting into the components that make up the live streaming design, we need to know some widely used protocol abbreviations in streaming. For more details, a Wikipedia link is tagged in case you want to understand any specific protocol more in-depth.

  • RTMP — Real-Time Messaging Protocol is based on TCP and was developed by Macromedia (now owned by Adobe). This protocol was intended to stream audio, video, and data over the internet to a Flash player from a server. With HTML5 providing native support for playing video/audio data, Flash player lost its popularity, and most web browsers are deprecating its support. Further reading about RTMP on wiki. (Most mid-1990s to 2000s internet users would recall the prompts or the pop-ups to install “Adobe Flash Player” to view content!)
  • RTSP — Real-Time Streaming Protocol is based on TCP and was developed by RealNetworks, Netscape, and Columbia University. This protocol internally uses other protocols to transfer media content from the server to the client and vice-versa. Further reading on wiki.
  • SDP — Session Description Protocol was a proposal by the IETF in the late 1990s. SDP does not deliver any media by itself but is used between endpoints for negotiation of media type, format, and all associated properties. Further reading on wiki.
  • WebRTC — This is an open source project that provides web-based, real-time communication (RTC). It allows browsers to capture audio and video data from devices and transfer it to a server using SDP. It provides web browsers and mobile applications with real-time communication using APIs. It eliminates the need to install plugins or download native apps. Further reading on wiki.
  • HLS — HTTP Live Streaming was developed by Apple and is an open standard. It is an HTTP-based adaptive bit-rate streaming protocol. HLS is universal and supports most devices. HLS creates a playlist (in .m3u8 format), which is an index to chunks of video files (~10-second chunks) in various formats.

    Based on the network quality and bandwidth, the native player automatically requests a different chunk. HLS is widely popular, as it can stream to mobile devices and HTML5-based video players. Further reading on wiki. This protocol has a notable history tied to the iPhone gaining popularity in the mobile market, as Apple didn't want to rely on Flash or QuickTime players to play content on its phones.

  • DASH — Dynamic Adaptive Streaming over HTTP. This protocol is very similar to HLS and works by breaking content into a sequence of small, HTTP-based file segments. Each segment contains a short interval of playback time of content (such as a movie or any live broadcast of an event). Further reading on wiki.
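
The playlist mentioned in the HLS entry above is a plain-text file. A minimal illustrative media playlist is shown below; the segment names and durations are invented for the example:

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10.0,
segment0.ts
#EXTINF:10.0,
segment1.ts
#EXTINF:9.5,
segment2.ts
#EXT-X-ENDLIST
```

The player downloads this index first, then fetches each listed segment over plain HTTP.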

You may also like:
Real-Time Analytics in the World of Virtual Reality and Live Streaming.

High-Level Design

The high-level design covers various components that build a live streaming service and is shown in the diagram below. Please note, this is not an industry standard. Rather, it gives you an idea of the design aspects of various software and hardware components that help in building a live streaming service.

Live Streaming (High-Level Design)

Publishers form the very first input that generates the raw audio and video of the streaming service. Conventionally, the main audio, video, and graphics are generated using mics and video cameras. This data is mainly consumed by encoders (software- or hardware-based), which form the heart of the Publisher.

The primary role of an encoder is to consume the audio, video, and graphics that have to be streamed and convert them into data that can be sent across a network. Hardware encoders are dedicated physical equipment that encode and stream data with high reliability. These encoders can be attached to multiple audio and video devices. An example of an open source software encoder is OBS (Open Broadcaster Software).

The protocols used for sending encoded data by the publishers are RTMP, RTSP, and SDP (may not be limited to these in reality).

Streaming Components

A Streaming Server receives encoded data from the software/hardware encoders. It creates multiple formats of the stream and can save it locally or re-stream to another service. It also supports multiple protocols. 

The main feature of the Streaming Server is Adaptive Bit-rate Streaming. It is a method of video streaming over HTTP where the source content is encoded at multiple bit rates. Each of the different bit rate streams is segmented into smaller parts.

The segment size can vary between 2 and 10 seconds. First, the client downloads a manifest file that describes the available stream segments and their respective bit rates. During stream start-up, the client usually requests the segments from the lowest bit rate stream. If the client finds that the network throughput is greater than the bit rate of the downloaded segment, then it will request a higher bit rate segment.
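
The start-up and switch-up behavior described above can be sketched in plain Java. This is a minimal sketch, not any real player's algorithm; the bitrate ladder and the 20% safety margin are assumptions for illustration:

```java
import java.util.List;

public class BitrateSelector {
    // Renditions advertised in the manifest, in kbps (illustrative values)
    static final List<Integer> BITRATES = List.of(400, 800, 1500, 3000, 6000);

    /** Pick the highest rendition that fits the measured throughput,
     *  keeping a safety margin so playback does not stall. */
    static int selectBitrate(int measuredKbps) {
        int budget = (int) (measuredKbps * 0.8); // 20% headroom (assumed heuristic)
        int chosen = BITRATES.get(0);            // start-up default: lowest rendition
        for (int rate : BITRATES) {
            if (rate <= budget) {
                chosen = rate;
            }
        }
        return chosen;
    }

    public static void main(String[] args) {
        System.out.println(selectBitrate(1000));  // 1 Mbps throughput -> 800 kbps rendition
        System.out.println(selectBitrate(300));   // poor network -> lowest rendition (400)
        System.out.println(selectBitrate(10000)); // fast network -> highest rendition (6000)
    }
}
```

Real players also smooth the throughput estimate over several segments to avoid oscillating between renditions.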

The Streaming Server can be directed to create various video formats. For example, if the incoming data has a resolution of 1080p, then the server can be directed to generate different resolutions of the same data, like 720p, 480p, 360p, etc. This allows for adaptive bit-rate streaming.

The Streaming Server can re-stream data to other streaming services like Facebook, YouTube, etc. Typically, when the traffic grows, the server cannot handle all the requests, and it might end up being very slow, resulting in buffering issues at the client end. 

In order to mitigate this and to make the service highly scalable, the server can re-stream the content to edge servers or CDN providers, like AWS CloudFront, Akamai, etc. The server can also strategically push content to CDN providers that are geographically located where the majority of the usage is. The protocols supported in this case are HLS and DASH only, as they are based on HTTP; other streaming protocols are not supported by edge servers or CDNs.

Clients (Viewer)

The Clients, or Viewers, are usually media players embedded in the service provider's web pages or device-specific applications (e.g., the YouTube/Facebook apps for Android or iOS). To reach every client device, the most popular protocol used is HLS. HLS is universally supported and can play on all modern devices. Some devices still support RTMP (with embedded Flash players), but this has almost reached (or has already reached) end-of-life.


Each component mentioned in the Publisher and Streaming Components sections is a very large topic in itself and cannot be covered within the scope of this article. It is advisable to research the components in detail based on your needs. This article is a primer that gives an overview of live streaming, its components, and the various protocols used, along with Wikipedia references and a high-level system design. I hope you found this helpful!


Development of Reactive Applications With Quarkus

In the context of cloud-native applications, the topic “reactive” becomes more and more important, since more efficient applications can be built and user experiences can be improved. If you want to learn more about reactive functionality in Java applications, read on and try out the code.

Challenges When Getting Started With Reactive Applications

While the topic ‘reactive’ has been around for quite some time, for some developers, it’s not straightforward to get started with reactive applications. One reason is that the term is overloaded and describes different aspects, for example, reactive programming, reactive systems, reactive manifesto, and reactive streams. 

Another reason is that there are several different frameworks that support different functionality and use different terminologies. For example, with my JavaScript background, it wasn't obvious how to find the Java counterparts of JavaScript callbacks, promises, or observables. Yet another reason why it can be challenging for some developers to get started is that reactive programming requires a different type of thinking compared to writing imperative code.

There are a lot of resources available to start learning various reactive concepts. When learning new technologies, simple tutorials, samples, and guides help to understand specific functions. In order to understand how to use the functions together, it helps me to look at more complete applications with use cases that come closer to what developers need when building enterprise applications. Because of this, I’ve implemented a sample application that shows various aspects of reactive programming and reactive systems and which can be easily deployed on Kubernetes platforms.

You might also be interested in:
How to Write Reactive Applications With MicroProfile

Architecture of the Sample Application

Rather than reinventing the wheel, I reused the scenario from the cloud-native-starter project, which shows how to develop and operate synchronous microservices that use imperative programming.

The sample comes with a web application that displays links to articles with author information. The web application invokes the web-api service, which implements a backend-for-frontend pattern and invokes the articles and authors services. The articles service stores data in a PostgreSQL database. Messages are sent between the microservices via Kafka. This diagram describes the high-level architecture:

Technologies and Functionality

The sample leverages Quarkus heavily, which is “a Kubernetes Native Java stack […] crafted from the best of breed Java libraries and standards.” Additionally, Eclipse MicroProfile, Eclipse Vert.x, Apache Kafka, PostgreSQL, Eclipse OpenJ9, and Kubernetes are used.

Over the next few days, I'll try to blog about the following functionality:

  • Sending in-memory messages via MicroProfile
  • Sending in-memory messages via the Vert.x event bus
  • Sending and receiving Kafka messages via MicroProfile
  • Sending Kafka messages via Kafka API
  • Reactive REST endpoints via CompletionStage
  • Reactive REST invocations via the Vert.x Axle Web Client
  • Reactive REST invocations via MicroProfile REST Client
  • Exception handling in chained reactive invocations
  • Timeouts via CompletableFuture
  • Resiliency of reactive microservices
  • Reactive CRUD operations for Postgres
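
Some of these topics, such as timeouts via CompletableFuture, need nothing beyond the JDK. The following is a minimal illustrative sketch (not the project's actual code; the service name and durations are invented):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class TimeoutDemo {
    // Simulate a downstream call that takes the given number of milliseconds
    static CompletableFuture<String> slowCall(long millis) {
        return CompletableFuture.supplyAsync(() -> {
            try {
                Thread.sleep(millis);
            } catch (InterruptedException e) {
                throw new IllegalStateException(e);
            }
            return "articles";
        });
    }

    // Fail the stage if it takes longer than 500 ms, then degrade gracefully
    static String fetchWithFallback(long delayMillis) {
        return slowCall(delayMillis)
                .orTimeout(500, TimeUnit.MILLISECONDS)
                .exceptionally(t -> "fallback")
                .join();
    }

    public static void main(String[] args) {
        System.out.println(fetchWithFallback(50));   // fast enough -> "articles"
        System.out.println(fetchWithFallback(2000)); // too slow -> "fallback"
    }
}
```
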

The sample application demonstrates several scenarios and benefits of reactive applications.

Scenario 1: Reactive Messaging

One benefit of reactive models is the ability to update web applications by sending messages rather than polling for updates. This is more efficient and improves the user experience.

Articles can be created via a REST API. The web application receives a notification and adds the new article to the page.

This diagram explains the flow:

Scenario 2 – Reactive REST Endpoints for Higher Efficiency

Another benefit of reactive systems and reactive REST endpoints is efficiency. This scenario describes how to use reactive systems and reactive programming to achieve faster response times. Especially in public clouds, where costs depend on CPU, RAM, and compute duration, this model saves money.

The project contains the '/articles' endpoint of the web-api service in two different versions: one uses imperative code, the other reactive code.

The reactive stack of this sample delivers response times of less than half those of the imperative stack: Reactive: 793 ms, Imperative: 1956 ms.
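
The gap comes largely from how the downstream calls are composed: the imperative version invokes the two services one after the other, while the reactive version lets the calls overlap. This can be sketched in plain Java with CompletableFuture (an illustrative toy, not the project's code; real reactive endpoints would also avoid blocking worker threads):

```java
import java.util.concurrent.CompletableFuture;

public class CompositionDemo {
    // Simulated downstream service call with a fixed 200 ms latency
    static String call(String name) {
        try {
            Thread.sleep(200);
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        }
        return name;
    }

    // Imperative style: the two calls run sequentially (~400 ms total)
    static String fetchSequential() {
        return call("articles") + "+" + call("authors");
    }

    // Reactive style: compose the calls so they can run concurrently (~200 ms total)
    static String fetchConcurrent() {
        CompletableFuture<String> articles = CompletableFuture.supplyAsync(() -> call("articles"));
        CompletableFuture<String> authors  = CompletableFuture.supplyAsync(() -> call("authors"));
        return articles.thenCombine(authors, (a, b) -> a + "+" + b).join();
    }

    public static void main(String[] args) {
        System.out.println(fetchSequential());
        System.out.println(fetchConcurrent());
    }
}
```

Both variants return the same combined result; only the latency profile differs.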

Read the documentation for details.

This diagram explains the flow:

This is the result of the imperative version after 30000 invocations:

This is the result of the reactive version after 30000 invocations:

Supported Kubernetes Environments

We have put a lot of effort into making the setup of the sample as easy as possible. For all components and services, there are scripts to deploy and configure everything. For example, if you have Minikube installed, the setup shouldn't take longer than 10 minutes.

Closing Thoughts

A big thank you goes to Harald Uebele and Thomas Südbröcker for their ongoing support. I especially want to thank Harald for writing the deployment scripts for CodeReady Containers and Thomas for writing the deployment scripts for IBM Cloud Kubernetes Service. Additionally, I want to thank Sebastian Daschner for providing feedback and sending pull requests.

Try out the code yourself!

Further Reading

Reactive Elasticsearch With Quarkus

Reactive Messaging Examples for Quarkus

An album art concept for my Album Art Vol. 1. I’m fairly new…

(explanation below)

The Album Art Vol. 1 series is my attempt to create album cover designs that are VERY DIFFERENT from each other, both to practice and to attract a client (for my Album Art Vol. 2), because I also love music so much.

Lukewarm Coffee – Album Art Vol. 1 (3/12)

Concept: An album concept built on an analogy about the feeling of uncertain love. There's no such thing as lukewarm coffee; there's only cold coffee or hot coffee. There's no "maybe" in love, just a "yes" or a "no". It's a cafe that serves doubtful minds.

Used: Maya (Modelling), Cinema 4D (Textures + Rendering), Photoshop (Compositing & Fixes)

I wanted to try Octane since it's faster and more realistic, I guess, but I tried the free version in Blender and it had a steep learning curve (for me personally, though I already knew how to use 3ds Max and Maya). The Octane license for Cinema 4D is also super expensive, so I ended up using the Physical renderer.

You can check out the first two album artworks on my Instagram.

Introductory Guide to Create Your First Android App With And…

In this article, we’ll learn how we can begin creating our first Android Application. Without any delay, let us begin with the creation of our first application.


Before getting started with Android, you need to be well-versed in a few important concepts:

  1. You must have a clear understanding of Object-Oriented Programming.
  2. You must know the basics of Java.
  3. You should know the basics of XML.

You may also like:
Introduction to Android Programming in Android Studio 3.0.

Getting Started – Your First Android App

Now, we’ll begin with our first application.

Step 1: Download and Install Android Studio

  1. First of all, make sure you have Android Studio installed and ready on your computer.
  2. If you do not have Android Studio, you can follow these Steps to Install Android Studio.
  3. Once you are ready with Android Studio, you can proceed with the steps.

Step 2: Create a New Project

  1. Go to File > New > New Project. After this, the following window should open on your desktop:

Creating a new Android project

   Here, we’ll select Empty Activity and proceed further by clicking on Next.

   2. Now, we’ll name our application and choose the language we prefer. Then, press Finish.

Configuring Android project

Step 3

At this point, we should have a screen in front of us that looks something like this: 

Landing screen

There are two areas where we're going to write the code for our application: the Java activity file and activity_main.xml. The two files come together to make our application function.

Step 4: Configuring activity_main.xml

Initially, we’ll have the following code in the layout file:

Initial layout of activity_main.xml

We can change the message that is written in the line android:text="Hello World". We'll change it to "Hello World, I'm here to take you to the URL to enter".

We’ll also add a Button and an EditText component where users can enter a URL. To add this, we’ll add the following code in the activity_main.xml file:

Step 5

Now, we’ll write the following code to implement the onClick() action of the button in the file. The application that we are developing will take us to the respective URL. To implement this we will use Intents – Android Intent and Intent Filters.

Step 6

After writing the code in both XML and Java files, we will now run the application.

The following should be generated:

Final output

Now, we’ll enter a URL in the EditText, as shown in the following:


As soon as you click on Take Me There, Google should be opened in your browser, as shown below:

URL application success


Yes, making an Android application in Android Studio is this easy! You just need to have Android Studio installed. The next step would be to create a new project and name it. The rest of the steps are explained in the article. Now, it’s your turn to implement it and learn how interesting it is to create an application.



Help with Bootstrap 4 positioning and scaling header element…

I’m looking to create a header as shown in the image below. The elements within the header must scale with the size of the browser window. Note that the element on the left is sticking outside the header. When I resize the window the element moves down or up depending on the size but I don’t want that. I need it to stay on the line of the header bottom. Thanks!

To clarify: if the browser window is reduced by 40%, then the elements must shrink by 40%, but the left element must stay in the same position relative to the header’s bottom line.

Post image
