Sunday, 18 June 2017

All about Transmission Technology - Part 1


Why is technology important for transmission?

Consider the network of a large service provider offering services such as voice, data and video. An operator in India like xyz, for example, has extensive fibre connectivity for transmitting data (for the sake of discussion, let me treat all the services as data). The question is: over which technology should this data be carried?
The moment we give this question some thought, another question arises - why should we worry about the technology at all? Is it really needed just to carry the data?

Sending the data as 0s and 1s is one thing, but more importantly we have to ask whether a frame format is required for sending the data. If yes, what must that frame contain? How many such 0s and 1s must there be in a frame? How many such frames should be transmitted in one second? On what factors does this number depend?


Should we dedicate any space in the frame for protection of the data? Should we dedicate any space for error monitoring, detection, correction and so on?

Absolutely yes. We have to consider all these factors because the reliability of the information is of utmost concern to us. We definitely don't want 'Hello' to be delivered as 'Hell' :)
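
To make the idea of frame overhead concrete, here is a minimal sketch in Python of a toy frame format - purely illustrative and not tied to SDH, OTN or any real standard - that reserves a couple of overhead bytes, including a simple parity byte for error monitoring:

    # Illustrative only: a toy frame with a 2-byte overhead (sequence + parity),
    # not a real SDH/OTN frame. The parity byte mimics the idea of error monitoring.

    def build_frame(payload: bytes, sequence: int) -> bytes:
        """Prepend overhead (sequence number and XOR parity) to the payload."""
        parity = 0
        for b in payload:
            parity ^= b                    # simple XOR parity over the payload
        return bytes([sequence & 0xFF, parity]) + payload

    def check_frame(frame: bytes) -> bool:
        """Recompute parity at the receiver and compare with the overhead byte."""
        parity, payload = frame[1], frame[2:]
        recomputed = 0
        for b in payload:
            recomputed ^= b
        return recomputed == parity

    frame = build_frame(b"Hello", sequence=1)
    print(check_frame(frame))        # True: 'Hello' arrived intact
    print(check_frame(frame[:-1]))   # False: 'Hello' got truncated to 'Hell'

Real technologies, of course, define much richer overhead (pointers, trace bytes, parity/FEC fields and so on); the point here is only that the frame must budget space for such functions.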

Now, who defines all these things? The answer, of course, is a protocol or a technology.

SDH (Synchronous Digital Hierarchy), DWDM (Dense Wavelength Division Multiplexing), PTN (Packet Transport Network), OTN (Optical Transport Network), MPLS (Multi Protocol Label Switching) and the like are such transmission technologies.

Standards bodies like the IETF, IEEE and ITU-T define the norms for these technologies and protocols. Once a protocol is tested to the satisfaction of the purpose it has to serve, it is released as a standard.

These technologies evolve further over time based on real-world necessity, thereby giving rise to new technologies or standards.

A good example in this context is the evolution of OTN from DWDM and of PTN from SDH.

SDH has been the most widely used transmission technology so far. Operators across the world use SDH/SONET as their transport technology. It is best suited for 2G services, where voice is the predominant service.

Compared to its predecessor PDH, SDH has many advantages such as protection, OAM capability, synchronization, multi-vendor interoperability and a well-defined hierarchy. It was the choice of every operator of those times.

But SDH is costly to implement, and its data transport is not as efficient as asynchronous packet transmission. So transport networks are gradually migrating from SDH to packet-based transmission, popularly known as PTN. PTN is based on MPLS-TP (Multi Protocol Label Switching - Transport Profile).


The huge increase in demand for data-centric services like high speed internet, video and live TV is another reason for the popularity of packet-based transmission networks.

Evolution of OTN from DWDM 

To increase the efficiency of fibre utilization, a wavelength multiplexing technology called DWDM is used. The DWDM layer lies below the SDH/PDH/PTN/IP layers.

In DWDM, a single fibre can carry hundreds of gigabits per second on different wavelengths (up to 1600 Gbps in the case of a 160-wavelength system). Transmission of several terabits per second has been achieved with DWDM under lab conditions.

Multiple wavelengths carry different data services simultaneously. For instance, in a 40-wavelength system, services of different data rates are carried over these 40 wavelengths at the same time.
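
As a rough back-of-the-envelope illustration (assuming 10 Gbps per wavelength, a common line rate - real systems mix 2.5G/10G/40G/100G wavelengths), the aggregate capacity of one fibre scales with the wavelength count:

    # Back-of-the-envelope DWDM capacity, assuming 10 Gbps per wavelength.
    # Real deployments mix line rates, so treat these figures as illustrative only.
    per_wavelength_gbps = 10

    for wavelengths in (40, 80, 160):
        total = wavelengths * per_wavelength_gbps
        print(f"{wavelengths} wavelengths x {per_wavelength_gbps} Gbps = {total} Gbps per fibre")
    # 160 wavelengths x 10 Gbps = 1600 Gbps, the figure quoted above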

There is a lot of scope for improvement in DWDM - especially in defining a frame structure, a hierarchy and an effective management system. This improvement is brought in by OTN, the Optical Transport Network.

Having set this platform, we shall discuss OTN in detail in the upcoming posts. Till then have a good time!!

Saturday, 13 May 2017

Network Function Virtualization Part 3 - Architecture of NFV

Hello Everyone,

Hope you are doing well. 

Because I had to work on a few important tasks over the last 2-3 weeks, I regret the delay in posting this. But I am sure this one will connect well with the last two articles and give a clear picture of what NFV is all about.

In the previous articles we discussed hypervisors and the need for NFV. In this article we discuss the NFV architecture.

What is required to set up NFV? Or, what does an NFV architecture consist of?

Let me list the components and briefly describe their roles in an NFV environment.

Hardware Resources 
Basic hardware resources like computing hardware, storage hardware and network hardware are required for setting up NFV. These are general purpose hardware with the flexibility to scale on a need basis.

- Computing hardware is assumed to be COTS (Commercial Off-The-Shelf) as opposed to purpose-specific hardware
- Storage can be Network Attached Storage (NAS) or storage that resides on the server itself
- Network resources comprise switching functions

Virtualization Layer or Virtualization Resources
The virtualization layer decouples the VNFs (Virtual Network Functions - explained below) from the hardware resources and thus ensures the realization of hardware-independent network functions.
The most important role of the virtualization layer is to provide virtual resources to the VNFs. Typically, hypervisors are used in the virtualization layer to provide these services (refer to article 1 for details about hypervisors).

Hardware resources along with the virtualization layer are called the NFVI or NFV Infrastructure (refer to figure 2, NFV Architecture). In general, the NFVI is the totality of the hardware and software.
Fig.2- NFV Architecture block diagram

This NFVI can be in one location or spread across different locations. It can belong to one infrastructure provider or to multiple providers.

In my small lab, I used GNS3 as the virtualization software, installed on a high configuration Windows system. On GNS3, router IOS images were emulated and a virtual topology was built for testing network management features. This was for one of our clients. I will share my experience on this as a separate article.

VNF
A VNF is a virtualization of a network function. Let us consider some network functions, for example DHCP servers. DHCP is the Dynamic Host Configuration Protocol; DHCP servers dynamically assign IP addresses to network hosts based on some predefined rules/conditions. Usually, routers support this DHCP function.
In this context, the DHCP function can be realized virtually without the need for dedicated proprietary router hardware.
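
To see why such a function does not need proprietary hardware, here is a deliberately simplified sketch (my own toy code, not a real DHCP implementation - real DHCP involves DISCOVER/OFFER/REQUEST/ACK messages, leases, options and so on) showing that the heart of the function is just address-pool logic that can run on any general purpose machine:

    # A toy address-pool allocator, only to illustrate that a "network function"
    # like DHCP is, at its core, software logic.
    from ipaddress import ip_network

    class ToyAddressPool:
        def __init__(self, subnet: str):
            self.free = [str(host) for host in ip_network(subnet).hosts()]
            self.leases = {}                      # MAC address -> assigned IP

        def assign(self, mac: str) -> str:
            if mac not in self.leases:            # known hosts keep their address
                self.leases[mac] = self.free.pop(0)
            return self.leases[mac]

    pool = ToyAddressPool("192.168.1.0/29")
    print(pool.assign("aa:bb:cc:dd:ee:01"))       # 192.168.1.1
    print(pool.assign("aa:bb:cc:dd:ee:02"))       # 192.168.1.2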

Let me take another example of a network function, the MME (Mobility Management Entity). The MME is an important network element in the Evolved Packet Core (EPC) of 3GPP's System Architecture Evolution (SAE).
The MME is the main signalling node in the EPC. It is responsible for initiating paging, authentication of the mobile device, and location updates (tracking area updates).
With the development of network virtualization, the function of the MME can be realized virtually on general purpose hardware.

Like this, there are many examples of network functions - the S-GW and P-GW of the EPC, firewalls, etc.

EMS
Management software, typically an EMS (Element Management System), is needed to manage the network functions, i.e. the VNFs here.

VIM
Another important element of the NFV architecture is the Virtual Infrastructure Manager (VIM).
Its role is to control and manage the interaction of a VNF with the computing, storage and network resources.
The VIM performs resource management and is in charge of the inventory of software and hardware resources.
Allocation of virtualization enablers, and increasing or decreasing the hardware resources based on need, are done by the VIM.
It also performs operations like management of the NFV infrastructure, root cause analysis of NFVI performance issues, collection of information for capacity planning, monitoring and optimization.
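
As a very rough sketch of the VIM's bookkeeping role (the names and numbers are mine, not taken from any real VIM product), it keeps an inventory of hardware and tracks what each VNF has been allocated:

    # A toy resource inventory, only to illustrate the VIM's bookkeeping role.
    class ToyVIM:
        def __init__(self, total_vcpus: int, total_ram_gb: int):
            self.capacity = {"vcpus": total_vcpus, "ram_gb": total_ram_gb}
            self.allocations = {}                            # VNF name -> resources

        def allocate(self, vnf: str, vcpus: int, ram_gb: int) -> bool:
            if vcpus <= self.capacity["vcpus"] and ram_gb <= self.capacity["ram_gb"]:
                self.capacity["vcpus"] -= vcpus
                self.capacity["ram_gb"] -= ram_gb
                self.allocations[vnf] = {"vcpus": vcpus, "ram_gb": ram_gb}
                return True
            return False                                     # not enough headroom

        def release(self, vnf: str) -> None:
            freed = self.allocations.pop(vnf)
            self.capacity["vcpus"] += freed["vcpus"]
            self.capacity["ram_gb"] += freed["ram_gb"]

    vim = ToyVIM(total_vcpus=32, total_ram_gb=128)
    print(vim.allocate("virtual-firewall", vcpus=4, ram_gb=8))   # True
    print(vim.capacity)                                          # 28 vCPUs and 120 GB left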

VNF Manager 
The VNF Manager is responsible for VNF lifecycle management (instantiation, update, query, scaling and termination). A separate VNF Manager may be deployed for each VNF, or a single VNF Manager can serve multiple VNFs.
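
A minimal sketch of that lifecycle (toy code of my own, far simpler than what the ETSI NFV MANO specifications describe) might look like this:

    # A toy VNF lifecycle manager mirroring the stages listed above.
    class ToyVNFManager:
        def __init__(self):
            self.vnfs = {}                        # VNF name -> lifecycle state

        def instantiate(self, name): self.vnfs[name] = "instantiated"
        def scale(self, name):       self.vnfs[name] = "scaled"
        def query(self, name):       return self.vnfs.get(name, "unknown")
        def terminate(self, name):   self.vnfs[name] = "terminated"

    mgr = ToyVNFManager()
    mgr.instantiate("virtual-MME")
    mgr.scale("virtual-MME")
    print(mgr.query("virtual-MME"))               # "scaled"
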
Orchestrator 
Its literal meaning is to arrange or control the elements.
The Orchestrator is responsible for creating a network service, scaling a network service and terminating a network service.
NFV Management and Orchestration together is called NFV-MANO, which as a whole takes care of NFV management.


Hope the content was informative and adds some value in understanding NFV. Suggestions and feedback are welcome. I will take up some case studies about NFV in the next article.

Friday, 14 April 2017

Network Function Virtualization Part 2


Hello everyone,

After discussing hypervisors and their functions in the last post, I am shifting your focus back to NFV. I am sure you will come to appreciate the hypervisors' role in NFV over the next few posts.

In this post, we discuss the need for and the objectives of network virtualization.

Why is Network Virtualization required?

In a network environment, different network elements are configured or installed to serve specific purposes. For example, a router's job is to interconnect two different networks and route packets to their destinations based on the destination IP addresses. A switch's job is to forward frames/data to different destinations within a network. A firewall blocks unwanted packets from entering a network and prevents threats from hackers. Servers store customer information/data securely - and the list continues.

Much of the time, all the above-mentioned network elements serve the same purpose throughout their life :) Also, their configurations change very rarely unless there is a major restructuring of business functions.

If we give a little thought to the functions of these network elements, we realize that we are not using these devices to their full potential :) i.e. why should a router do only routing? Why should a switch only forward data? Why should a firewall be used only for security?, etc.

Strange thoughts, right?

Yes! These strange thoughts led to the development of virtualization.

Let me add a few more to the list.

Why does everyone need dedicated infrastructure? Can there be a model where a high-capacity infrastructure is set up and everyone shares it depending on their requirements?

Can there be a provision for scaling it up on a need basis? Will it not reduce their CAPEX and OPEX? And, more importantly, give hassle-free operations?

Absolutely Yes!!

Basically, NFV aims at sharing a common network infrastructure and need-based utilisation of network services.

Another way of looking at NFV is as follows:

What does router or switch hardware basically consist of? The same things as any computer: a CPU, memory/RAM, ROM and an operating system.

The network functionalities like routing, switching, etc. are achieved through either ASICs (Application Specific Integrated Circuits) or software.

If an ASIC is used to realize the functionality, then it is hardware-specific, which means a router hardware can only do routing and not switching.

If the functionality is realized through software, then by changing the software we can change the functionality of the hardware. This means a router hardware can also function as a switch, if the respective software (the switch software) is made to run on top of it.

So decoupling the network functions from function-specific hardware devices is another objective of NFV.

By achieving this objective, a general purpose hardware can perform multiple network functions, thereby eliminating the need for function-specific proprietary hardware.
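
A small sketch (purely illustrative, with made-up function names) shows the idea: the same generic box behaves as a router or a switch depending only on which software function is loaded onto it.

    # Illustrative only: the role of the "box" is decided by software, not hardware.
    def routing_function(packet):
        return f"route {packet} by destination IP"

    def switching_function(packet):
        return f"forward {packet} by destination MAC"

    class GenericHardware:
        def __init__(self, network_function):
            self.network_function = network_function

        def handle(self, packet):
            return self.network_function(packet)

    box = GenericHardware(routing_function)
    print(box.handle("packet-1"))                 # acts as a router
    box.network_function = switching_function     # load different software
    print(box.handle("packet-1"))                 # now acts as a switch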

Will brief about NFV Framework and Architecture in the next post. Till then have a great time!!

Saturday, 25 March 2017

Network Functions Virtualization(NFV) - Part 1

Network Virtualization is getting popular these days and is a trending technology in the networking space. Before getting into the concept of NFV, let me say something about hypervisors, which play an important role in NFV.

What are these hypervisors ?
A hypervisor is computer software or firmware on which several virtual machines run.

Hypervisors create an environment for guest operating systems and make them believe that the hardware resources are dedicated to them. But in reality there is only one set of hardware resources, and all the guest operating systems (the so-called virtual machines) share that hardware with the help of the hypervisor.

History :
The term hypervisor was first coined by IBM way back in the mid-1960s. IBM wanted to run an additional operating system on mainframe computers, and there were limitations in the supervisor code when running multiple instances of the operating system. IBM tackled the problem by adding another layer in the architecture on top of the supervisor, and that became the hypervisor.

The hypervisor installed on the server hardware (the host) controls the guest operating systems running on the host machine. Its main job is to cater to the needs of the guest operating systems with respect to hardware resources, and to manage them effectively so that the multiple operating system instances do not interrupt one another.

There are 2 types of hypervisors:
Type 1 hypervisors run directly on the system hardware. This type of hypervisor is also called a bare metal hypervisor.

Type 2 hypervisors run on the operating system of the host. The guest operating systems run on top of the hypervisor.

Friday, 3 March 2017

GPON

What is GPON?
GPON is an optical network technology used to realise the FTTH concept.
Before going to GPON, let me quickly brief you on FTTH.


What is FTTH?
FTTH (Fibre to the Home) is a way of delivering the communication signal over optical fibre from the operator's switching equipment to the customer premises (home or business office). In the past this was done over copper infrastructure such as coaxial or twisted pair cables; FTTH replaces the conventional copper with fibre.
FTTH offers all services such as voice, video (cable TV) and high speed internet from a single CPE device.
GPON is a very popular technology used across the world to realize FTTH solutions.



So what is GPON all about?
GPON stands for Gigabit Passive Optical Network.
ITU-T G.984 defines the standards for GPON.

GPON is a successor of the ATM-based Passive Optical Network (APON) and the Broadband Passive Optical Network (BPON), which offered 622 Mbps of downstream bandwidth and 155 Mbps of upstream bandwidth.

GPON provides much higher bandwidth compared to its predecessors, i.e. 2.488 Gbps downstream and 1.244 Gbps upstream.


Where is GPON used?
It is used in the access network.


Elements of a GPON Network
A GPON network consists of a central office node, the OLT (Optical Line Terminal), an ONT (Optical Network Terminal) present at the customer premises, and a splitter between the OLT and the ONT.

The OLT acts as an interface between the service provider network and the customer equipment.

The GPON architecture eliminates the need for active elements between the OLT and the ONT, thereby greatly reducing cost and making the network easier to maintain.


Features of GPON
A single optical fibre can carry multiple customers' traffic between the OLT and the ONTs in a TDM fashion.

The GPON protocol permits split ratios of up to 1:128. However, in practice PONs are usually deployed with a 1:32 split ratio, i.e. 32 customers' traffic is carried on a single fibre.
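
A quick back-of-the-envelope calculation (worst case, assuming every customer is active at the same time and ignoring protocol overheads) shows how the shared downstream bandwidth divides across common split ratios:

    # Rough arithmetic: shared GPON downstream capacity per customer.
    downstream_gbps = 2.488

    for split in (32, 64, 128):
        per_customer_mbps = downstream_gbps * 1000 / split
        print(f"1:{split} split -> about {per_customer_mbps:.0f} Mbps per customer")
    # 1:32 split -> about 78 Mbps per customer (before overheads)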

Both downstream and upstream traffic are carried on the same fibre on 2 wavelengths, i.e. 1490 nm for downstream traffic and 1310 nm for upstream traffic.

The maximum distance between the OLT and the ONT is 60 km.



Why is GPON a passive network technology?

Before getting into this, let us understand the concept of passive and active networks.

Active networks consist of elements that require electrical power to work. Some active network elements are switches, routers, multiplexers, etc. The customer traffic is managed and directed to its specific destination with the help of these active network elements.

In the case of passive networks, passive elements like optical splitters are used to separate the customer traffic.

Since GPON uses optical splitters, which are passive elements, the technology is called a Passive Optical Network.


What is the advantage?
GPON offers a low cost solution for high data rate customer applications with minimal maintenance.

Quick Snapshot

GPON - Gigabit Passive Optical Network
ITU-T Standard - G.984
Predecessors - APON (ATM-based Passive Optical Network) / BPON (Broadband Passive Optical Network), with 622 Mbps downstream and 155 Mbps upstream bandwidth
GPON bandwidth - 2.488 Gbps downstream, 1.244 Gbps upstream
Advancements in GPON - XG-PON (ITU-T G.987) with 10 Gbps downstream and 2.5 Gbps upstream; WDM-PON



Monday, 9 March 2015

Networking for All - Technology Simplified.

Routing  

What is Routing? 
Routing is the process of selecting a path in a network over which a packet shall be sent to a destination. Or:
The term routing refers to the process of taking a packet from one device and sending it through the network to another device.

This routing process happens at Layer 3 of the OSI model, i.e. the Network layer.

Is routing really required in networks?

Routing is analogous to locating a house. We can imagine how important it is for a courier company to locate a destination (house) in order to deliver a shipment.

To locate a house, the minimum details required are the state, district/city, locality, street address and house number. Once the house is located, the next task is to deliver the shipment.
In the same way, routing is very much required to locate a host in a network and deliver the information.

Just as a courier company needs some minimum details to deliver a shipment, a router also makes use of an address (the IP address) to locate a destination host in a network.

Basically, routing involves 2 important processes - one is to locate a host, and the other is to find the best path to reach it.

Routers really don't care about hosts. They only care about networks and the best path to reach each network. The logical address of the destination host is used to locate that host within a network.

Once the destination host is located, the hardware address of the destination host is used to deliver the packet or information.

Here the hardware address is the MAC address and the logical address is the IP address. We will discuss MAC and IP addresses in detail in upcoming posts.

Routing also helps in efficient management of the network layer and its resources. It also helps in congestion management.

If your network has no router, then it clearly means that you are not routing.
 
Now there might be a question running in your mind. Some of you may relate this to your working environment as well.
"I have no router in my network, but I can still send information from a source host to a destination host correctly. How is that possible?"

Yes. In your case, the source and destination are in the same network. The primary function of a router is to locate a network first; locating the host follows. If the network is already known, then the router has no role in delivering the information.

So, a router helps us in sending data from one network to another network.

By now, I think we have tuned our understanding of the function of a router and the routing process.

To route a packet, a router must know
1. The destination IP address.
2. The neighbor routers from which it can learn about remote networks.
3. The possible routes to all remote networks.
4. The best route to each remote network.

A router can route a packet only when it knows the information about its neighboring connected routers.
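
To give a feel for how a router picks the best route, here is a small sketch of a routing table with longest-prefix matching (the prefixes and next hops are made up for illustration; real routing tables are built by routing protocols, which we will cover later):

    # A toy routing table with longest-prefix matching.
    from ipaddress import ip_address, ip_network

    routing_table = {
        "0.0.0.0/0":   "default gateway 203.0.113.1",
        "10.0.0.0/8":  "next hop 10.255.0.1",
        "10.1.2.0/24": "next hop 10.1.2.254",
    }

    def lookup(destination: str) -> str:
        """Return the next hop of the most specific (longest) matching prefix."""
        matches = [ip_network(prefix) for prefix in routing_table
                   if ip_address(destination) in ip_network(prefix)]
        best = max(matches, key=lambda net: net.prefixlen)   # longest prefix wins
        return routing_table[str(best)]

    print(lookup("10.1.2.7"))    # next hop 10.1.2.254 (most specific route)
    print(lookup("8.8.8.8"))     # default gateway 203.0.113.1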

How does a router learn the information about its neighboring connected routers? Does the administrator have to do some manual configuration, or do protocols take care of it? We will discuss all these things in the next post.

Hope you have enjoyed the reading. Your queries and feedback are welcome.

Have a nice time


Sunday, 8 March 2015

Networking for All - Beginning with Basics

Introduction

What is Network?
A network is a group of two or more computer systems linked together. In general, networking means connectivity.


Why Networking?
Computers are interconnected for serving different purposes like data sharing, resource sharing, remote operations, etc.

Data Transmission
To transmit data from one computer to another, we may make use of an external storage device like a USB pen drive, a DVD/CD-ROM or an external hard disk. This is fine when the data transfer is infrequent and not dynamic. But if the data transfer is frequent, it is very difficult to rely on these external storage devices.

Wouldn't it be great if this data transfer could be done at the click of a mouse, without any movement of external storage devices?

Yes, with the help of networking it can be done.
Connecting these two computers via switches, hubs or modems helps us transmit the data from one computer to another very easily. This is data sharing.

The question that might be running in your mind now is: what are these switches, hubs and modems?

Do not worry much about these things right now. Let me explain them in detail later.

For the time being, just understand that these are devices which help in connecting one computer to another.

Now let us continue our discussion of Data transmission or Data sharing.

Data Sharing

Data resides on a server (a server is a computer with a high configuration), and all the other computers access the data from the server, process it and store it back on the server.

Resource Sharing

Suppose there are 10 computers in an office and only one printer, with the computers spread over different corners of the office. If the employees working on these 10 computers need to use the printer, they have to connect their computer to the printer via a printer cable - and they have to do this every time they need a print. Organizations cannot provide a printer for every computer, and it would not be a wise thing to do either. But when these computers are interconnected and also connected to the printer, the employees can access the printer and print their documents at will, irrespective of where their computer sits.

In this case, the printer is a resource, and with the help of networking it can be shared among all the users. This is resource sharing. With networking, resources can be shared and used efficiently.

Similarly, high configuration servers are shared by low configuration computers for running some applications. This is also a case of resource sharing. Here the CPU, RAM, etc. of the server are the resources.

To summarize, data transmission, data sharing and resource sharing are important reasons for networking.

What are the applications of Networking?
E-commerce (online shopping, online ticket booking, online banking)
Remote operations (Telnet, remote login, NMS/EMS)
Powerful social media (Facebook, Twitter, videoconferencing, email, etc.)
Mobile communication


What next?
Having understood the need for networking, the next question is how to connect these computers - is there any pattern or way to connect them?

Yes, the way of connection determines the topology of the network. We will discuss the different types of topology, their features, advantages, limitations, applications, etc. in detail in the next post.

Have a nice time.