Thursday 27 December 2012

ATM and Gigabit Ethernet


By Raul Bernardino
Introduction:
Nowadays, network technologies are growing very fast, and with them come new applications such as videoconferencing, collaborative work tools, and video on demand.

At the same time, not every country has the minimum network infrastructure required to implement videoconferencing, so it is important to know what is available in each country.

The rapid growth of applications and bandwidth consumption, together with the growth of commercial networks, directly and indirectly affects the cost of upgrading bandwidth and also triggers new ideas for national network infrastructures. For instance, TEIN3, which stands for Trans-Eurasia Information Network, provides high-capacity, dedicated Internet connectivity for the education and research communities, linking Europe with the Asia-Pacific. GEANT2 is the European academic Internet, serving researchers across multiple domains in 34 European countries. GDLN stands for Global Development Learning Network, initiated by the World Bank in mid-2000; the GDLN facilitates interactive learning through high-speed videoconferencing.

The ATM:
ATM is short for Asynchronous Transfer Mode. ATM networks offer the high performance and speed needed to transport on-demand video and images across local and wide area networks. ATM is both a multiplexing and a switching technology, and a platform for distance learning, e-commerce, and e-government. ATM is flexible enough to accommodate a wide array of topologies, applications, and services. The platform transmits multimedia in fixed-size 53-byte packets, called cells, and scales from the desktop to global deployments. Each 53-byte cell is divided into two parts, a 5-byte ATM cell header and a 48-byte payload, as shown in the diagram below.
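As a minimal sketch of the cell format described above, the snippet below segments a byte stream into fixed-size 53-byte cells (5-byte header plus 48-byte payload). The header here is a simplified placeholder, not a bit-accurate ATM header with VPI/VCI fields:

```python
# Sketch: segmenting data into fixed-size 53-byte ATM cells
# (5-byte header + 48-byte payload). The header content is a
# placeholder assumption, not a real ATM header layout.

CELL_SIZE = 53
HEADER_SIZE = 5
PAYLOAD_SIZE = CELL_SIZE - HEADER_SIZE  # 48 bytes

def segment_into_cells(data: bytes, header: bytes = b"\x00" * HEADER_SIZE):
    """Split data into 53-byte cells, zero-padding the last payload."""
    cells = []
    for i in range(0, len(data), PAYLOAD_SIZE):
        payload = data[i:i + PAYLOAD_SIZE].ljust(PAYLOAD_SIZE, b"\x00")
        cells.append(header + payload)
    return cells

cells = segment_into_cells(b"A" * 100)
print(len(cells), len(cells[0]))  # 3 cells, each 53 bytes
```

Fixed-size cells are what let ATM switches do fast, predictable hardware switching, in contrast to variable-length IP packets.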

In the 1990s the standard ATM speeds were 155 Mbps to 622 Mbps. The goals of ATM were integrated, end-to-end transport of voice, video, and data while meeting quality-of-service requirements, packet switching, and next-generation telephony.
The ATM architecture has an adaptation layer, an ATM layer, and a physical layer, as shown in the diagram below.

The ATM vision was end-to-end transport. In reality, however, ATM is used as the link layer beneath an IP routed backbone, as shown in the diagram below.


Gigabit Ethernet:
The Ethernet protocols refer to a family of LAN standards. Gigabit Ethernet is based on the IEEE 802.3 standard. There are two versions of Gigabit Ethernet, as follows:
  • IEEE 802.3z, or 1000BASE-X, defines Gigabit Ethernet over fiber and shielded cable. There are two main types: 1000BASE-SX, which uses short-wavelength optics for distances up to about 500 meters, and 1000BASE-LX, which uses long-wavelength optics for distances up to 5 km.
  • IEEE 802.3ab, or 1000BASE-T, defines Gigabit Ethernet over UTP copper wiring, with a reach of up to 100 meters.


Gigabit interface converters allow network administrators to configure each port for a short-wave, long-wave, long-haul, or copper interface. Long haul uses single-mode fiber for distances of 5 km to 10 km.
The Gigabit Ethernet architecture is shown in the diagram below.

  
Gigabit Ethernet preserves CSMA/CD, and it supports both half and full duplex. The minimum frame size is 416 bytes for 1000BASE-X, while the minimum frame size for 1000BASE-T is 520 bytes.
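A small sketch of the length rule above, using the minimum frame sizes stated in this section: frames shorter than the minimum are extended with padding so that CSMA/CD collision detection still works at gigabit speed. The real 802.3z carrier-extension mechanism is more involved; this only illustrates the size constraint:

```python
# Sketch: extending short frames to the minimum sizes stated above
# (416 bytes for 1000BASE-X, 520 bytes for 1000BASE-T). The padding
# byte value is an illustrative assumption.

MIN_FRAME = {"1000BASE-X": 416, "1000BASE-T": 520}

def extend_frame(frame: bytes, phy: str) -> bytes:
    """Pad a frame up to the minimum size for the given PHY type."""
    minimum = MIN_FRAME[phy]
    if len(frame) < minimum:
        frame = frame + b"\x00" * (minimum - len(frame))
    return frame

print(len(extend_frame(b"\x01" * 64, "1000BASE-X")))  # 416
```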


References list:
1. Kurose, J.F. & Ross, K.W. (2010) Computer Networking: A Top-Down Approach. 5th ed. Boston: Addison Wesley
2. University of Liverpool/Laureate Online Education (2011) Lecture notes from Computer Networking Module Seminar 5 [Online]. Available from: University of Liverpool/Laureate Online Education VLE (Accessed: 2 September 2011)
3. Dep. Elettronica Informatica e Sistemistica (DEIS), University of Bologna: QoS and Multiprotocol Label Switching Experiments for the Design of an ATM-based National Network [Online]. Available from: http://www.cnaf.infn.it/~ferrari/inet98/index.htm (Accessed: 31 August 2011)
4. Asynchronous Transfer Mode (ATM) Networks [Online]. Available from: http://forums.techarena.in/guides-tutorials/5186.htm (Accessed: 31 August 2011)




Network and Internet Technology Quality of the Services


By Raul Bernardino
Introduction:
Nowadays, Internet and communication technology is a driver of globalization, enabling us to communicate, share information, and search for the information we need from anywhere, at any time, with few limits. The development of networking technology has carried Internet communication throughout the world, reaching even very remote and isolated areas. The Internet exists to share knowledge and best practices around the world in order to improve the life of communities and society, through text files, email, audio and video streaming, and real-time interaction via audio conferencing and videoconferencing. We therefore have to manage resources such as the network, its traffic, and Internet bandwidth in order to deliver service with quality.

For transmission over the network, data packets are placed into frames, and the Internet protocols make a best effort to send them through communication devices such as switches and routers toward the destination. Best effort, which is the routers' default, still does not guarantee that deliveries reach the proper destination within a given time frame. We therefore have to overcome the limitations of best effort: packet loss, jitter, and end-to-end delay.
Packet loss occurs when a datagram is lost in transit from source to destination because the network is overloaded. Lost packets are retransmitted after the destination checks which packets arrived, but retransmission is not an option for multimedia.

Packet jitter occurs when transmitted datagrams take varying amounts of time to reach the destination, producing variation in delay. Again, this is unacceptable for multimedia streaming, which must deliver within a fixed time frame.

End-to-end delay is the accumulation of propagation delays on the links, queuing delays, and end-system processing delays. Voice over Internet Protocol (VoIP) tolerates only a certain fixed delay before the conversation becomes disturbed.

The diagram below shows how data and voice are sent over the network:


One reason we have to overcome the best-effort limitations mentioned above is that multimedia traffic requires packets to be delivered within fixed time frames.

For real-time conversation with multiple interactions, delays between 150 and 400 milliseconds are acceptable: delays below 150 milliseconds are not perceived by a human listener, while delays above 400 milliseconds are frustrating (Kurose & Ross, 2010, p. 627).

This is where QoS comes into play. QoS stands for Quality of Service, an architectural component that can be added to the current IP infrastructure. Its architectural components are as follows:

Packet classification: Each application transmitted over the network needs a certain type of service, so we can classify applications by their traffic and the service they need. With this classification, packets can be marked, allowing the router to distinguish the priority of delay-sensitive services. For instance, the router can be configured with a new policy to treat packets accordingly, as shown in the diagram below:
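A hypothetical sketch of the classification idea above: each application class maps to a DSCP-style priority mark that routers can use to prioritize delay-sensitive traffic. The class names here are illustrative assumptions; the mark values follow common DiffServ conventions (46 for expedited forwarding, 0 for best effort):

```python
# Sketch of packet classification and marking. The application names
# are hypothetical; the marks follow common DSCP conventions.

CLASS_MARKS = {
    "voip": 46,   # expedited forwarding: most delay-sensitive
    "video": 34,  # assured forwarding: streaming video
    "web": 0,     # best effort
}

def classify(packet: dict) -> int:
    """Return the priority mark for a packet based on its application."""
    return CLASS_MARKS.get(packet.get("app"), 0)  # default: best effort

pkt = {"app": "voip", "payload": b"..."}
print(classify(pkt))  # 46
```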



Scheduling, policing, and isolation: Based on each packet's class marking, we isolate, or distinguish, the packets from one another. A scheduler also determines the order in which queued packets leave the node, for instance FIFO, priority, or round-robin scheduling, as shown in the diagrams below. FIFO schedule:

 Priority Schedule:


Round-Robin Schedule:

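The three disciplines named above can be sketched as follows; each queue is a simple list of packet labels, and each function returns the order in which packets would be transmitted:

```python
# Minimal sketches of FIFO, priority, and round-robin scheduling.
from collections import deque

def fifo(arrivals):
    """Serve packets strictly in arrival order."""
    return list(arrivals)

def priority(arrivals):
    """arrivals: (priority, label) pairs; lower number = higher priority.
    sorted() is stable, so ties keep arrival order."""
    return [label for _, label in sorted(arrivals, key=lambda x: x[0])]

def round_robin(queues):
    """Visit the per-class queues in turn, one packet at a time."""
    queues = [deque(q) for q in queues]
    out = []
    while any(queues):
        for q in queues:
            if q:
                out.append(q.popleft())
    return out

print(round_robin([["a1", "a2"], ["b1"]]))  # ['a1', 'b1', 'a2']
```

Round robin gives each class a fair share even when one class (here, class "a") has more packets queued.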
High utilization of resources: the router is configured to reuse bandwidth allocated to classes that are not currently using it. This maximizes efficient and effective use of bandwidth, as shown in the diagram below, where 1 Mbps is allocated for voice and video while the data bandwidth allocation is 0.5 Mbps.





Call admission: A call has to declare its needs before it can use the network, because if it would exceed the available bandwidth, the call must be blocked and given a busy signal. A flow therefore needs to be declared before using the network, as shown in the diagram below, which admits one call at a time.
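A minimal sketch of the admission decision above, assuming a single link with a fixed capacity: a new call declares its required rate and is admitted only if it fits, otherwise it gets a busy signal. The capacity and rates are illustrative assumptions:

```python
# Sketch of call admission control over one link of assumed capacity.

class AdmissionController:
    def __init__(self, capacity_mbps: float):
        self.capacity = capacity_mbps
        self.allocated = 0.0

    def request(self, rate_mbps: float) -> str:
        """Admit the declared flow if it fits; otherwise signal busy."""
        if self.allocated + rate_mbps <= self.capacity:
            self.allocated += rate_mbps
            return "admitted"
        return "busy"

ctrl = AdmissionController(1.0)   # assumed 1 Mbps link
print(ctrl.request(1.0))          # admitted
print(ctrl.request(1.0))          # busy: capacity exhausted
```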




The four components above are called the four pillars.

In my own opinion, scheduling, policing, and call admission can be combined in a QoS configuration. However, it depends on how we prioritize them, on a FIFO or round-robin basis.
  
References list:
1.     Kurose, J.F. & Ross, K.W. (2010) Computer Networking: A Top-Down Approach. 5th ed. Boston: Addison Wesley
2.     University of Liverpool/Laureate Online Education (2011) Lecture notes from Computer Networking Module Seminar 6 [Online]. Available from: University of Liverpool/Laureate Online Education VLE (Accessed: 9 September 2011)
3. Bharadwaj, P. (March 2005) Quality of Service in the Internet [Online]. Available from: http://www.ias.ac.in/resonance/Mar2005/pdf/Mar2005p57-70.pdf (Accessed: 9 September 2011)

Tuesday 25 December 2012

The Internet Multicast


By Raul Bernardino
Introduction:
With today's technology we have two options for sending the same packets to many receivers over the Internet simultaneously: broadcasting and repeated transmission. Broadcasting has a higher cost because we have to invest in bandwidth; it is real-time, and compared with traditional broadcasting its cost depends on how many recipients there are and on their geographical locations. Repeated transmission has a lower cost because data is transmitted to specific targets, only as needed.

Broadcast routing transmits a packet from one source node to all other nodes in the network, while multicast routing sends a copy of the packet from one node to a subset of the other nodes (Kurose & Ross, 2010, p. 433).

A naive broadcast routing algorithm transmits N copies of the packet for N destinations, duplicating the packet once per destination. This is inefficient, because not all nodes are willing to receive the packets, and it amounts to uncontrolled network flooding.

Multicast routing transmits only to a subset of the network's nodes: one or more senders send packets to a destination group of receivers. Examples include developers pushing software updates to users, and streaming multimedia such as audio and video. Here we face two issues: how to identify the receivers, and how to address the packets sent to them. In unicast, a datagram identifies the endpoints with the source and destination IP addresses it carries. In broadcast, a single point transmits to every node in the network, so no destination address is needed.
How does multicast work? Multicast packets use address indirection: a single address identifies a group of recipients, and all packets sent to that address reach the group's members. In Internet addressing, this single identifier is a class D multicast IP address, and the set of receivers associated with one such address is called a multicast group. Below is a diagram of how a single source opens a multicast session to send the same data to different locations or nodes.
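Class D (multicast) IPv4 addresses span 224.0.0.0 through 239.255.255.255. Python's standard `ipaddress` module can check whether a given group address falls in this range:

```python
# Checking whether an IPv4 address is a class D multicast group
# address, using the standard-library ipaddress module.
import ipaddress

def is_multicast_group(addr: str) -> bool:
    return ipaddress.ip_address(addr).is_multicast

print(is_multicast_group("224.0.0.1"))    # True  (class D)
print(is_multicast_group("192.168.1.1"))  # False (unicast)
```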



The multicast distribution can also be shared among different nodes in the group: two or more multicast sources use one rendezvous point (RP) to distribute the same information to the rest of the group. The diagram below shows how distribution flows from the RP to the rest of the group.



The following questions then arise: how do we join a session, and when does it start and terminate? Is group membership restricted? And so on. The answer to these questions is IGMP, which stands for Internet Group Management Protocol. IGMPv3 operates between a host and its directly attached router. The router sends an IGMP membership_query message to the hosts attached to it, and each host responds by sending an IGMP membership report back to the router. In this way the router keeps its host membership information up to date. A host that registered earlier and no longer wants the group's traffic either stops responding to the membership_query message or sends an IGMP leave_group message.
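The query/report exchange above can be sketched as simple bookkeeping: the router periodically queries, hosts that still belong to a group report, and silent hosts drop out of the router's membership table. This models only the state, not the wire protocol or its timers:

```python
# Simplified model of the IGMP membership query/report bookkeeping.
# Hypothetical class and method names; not a wire-level implementation.

class Host:
    def __init__(self, name, groups):
        self.name, self.groups = name, set(groups)

    def report(self):
        """Answer a membership query with the groups still joined."""
        return self.groups

    def leave_group(self, group):
        self.groups.discard(group)

class Router:
    def __init__(self):
        self.members = {}  # group address -> set of member host names

    def membership_query(self, hosts):
        """Query attached hosts and rebuild membership from reports."""
        reported = {}
        for host in hosts:
            for group in host.report():
                reported.setdefault(group, set()).add(host.name)
        self.members = reported  # hosts that stay silent implicitly leave

r = Router()
h1, h2 = Host("h1", {"224.0.1.1"}), Host("h2", {"224.0.1.1"})
r.membership_query([h1, h2])
h2.leave_group("224.0.1.1")
r.membership_query([h1, h2])
print(r.members)  # {'224.0.1.1': {'h1'}}
```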


Thursday 13 December 2012

Adapter to the Nodes


By Raul Bernardino
Introduction:
The path that data follows from a source host, through routers, to the destination host is the communication path. As a datagram passes from the network layer down through the link layer, it crosses the individual links that together make up that end-to-end communication path.
Link-layer channels come in two types, as follows:
  1. Broadcast channel: LANs, wireless networks, satellite networks, and fiber access networks typically use broadcast channels, in which multiple hosts are connected to the same communication channel. Coordination, or control, of transmissions is therefore needed to avoid collisions among transmitted frames.
  2. Point-to-point channel: this communication link establishes an end-to-end, point-to-point path, for instance between two routers, or between a home dial-up modem and a router. Coordinating access to a point-to-point link is trivial; however, framing, flow control, and error detection remain important.

The diagram below shows the link layer implemented in every host, inside the network interface card, or adapter.

The adapters communicate as follows:
  1. Sender: the sender encapsulates the datagram into a frame and adds error-check bits, flow control information, and so on.
  2. Receiver: the receiver checks for errors, handles flow control, and so on, then extracts the datagram and forwards it to the upper layer.
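The sender and receiver roles above can be sketched as follows: the sender encapsulates a datagram into a frame with a checksum trailer, and the receiver verifies the checksum before passing the datagram up. A real adapter uses CRC-32 for error checking; a simple additive checksum stands in here:

```python
# Sketch of adapter-side framing with a 1-byte additive checksum
# (a stand-in for the CRC a real adapter computes).

def checksum(data: bytes) -> int:
    return sum(data) % 256

def encapsulate(datagram: bytes) -> bytes:
    """Sender: append a 1-byte checksum trailer to the datagram."""
    return datagram + bytes([checksum(datagram)])

def extract(frame: bytes):
    """Receiver: verify the checksum, then hand the datagram upward."""
    datagram, received = frame[:-1], frame[-1]
    if checksum(datagram) != received:
        return None  # drop the frame; do not forward to the upper layer
    return datagram

frame = encapsulate(b"hello")
print(extract(frame))                 # b'hello'
print(extract(frame[:-1] + b"\x00"))  # None (corrupted trailer)
```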

The diagram below shows how the adapters communicate.



How do we differentiate the link layer from the network layer? The difference can be illustrated with a travel agent and transportation analogy (Kurose & Ross, 2010). Consider a trip from Timor-Leste to Canberra. The travel agent plans for the traveller to take a local taxi to Dili airport, fly Airnorth from Dili to Darwin, and fly Qantas from Darwin to Sydney and on to Canberra. The travel agent corresponds to the routing protocol, while each transportation system on the way from Dili to Canberra corresponds to a link-layer protocol.
Several link-layer services are as follows:
  1. Framing: every link layer encapsulates the network-layer datagram in a frame before transmitting it over the link.
  2. Link access: the MAC protocol determines the rules for transmitting frames onto the link.
  3. Reliable delivery: guarantees that datagrams move from sender to receiver across the link without error.
  4. Flow control: each node has limited buffering capacity, so flow control is important for pacing the data flow from sender to receiver without loss.
  5. Error detection: the link layer at the receiving node can detect errors by determining whether bits in the frame were flipped between sender and receiver. If there is an error, there is no need to forward the datagram to the upper layer.
  6. Error correction: similar to error detection, but the receiver also corrects the errors; correction may apply only to the packet header rather than the entire frame.
  7. Half duplex and full duplex: in full-duplex mode both nodes can transmit frames at the same time, while in half-duplex mode a node can transmit or receive, but not both at once.

Some link-layer services resemble transport-layer services, flow control for instance. Both layers offer flow control, but with a difference: at the transport layer, flow control operates on an end-to-end basis, while at the link layer it operates node to node over a single link.
What about moving the adapter's functionality into software on the CPU? With today's software applications this is possible; firewalls, gateways or routers, proxies, and so on can all be implemented in software. The advantages and disadvantages are as follows:
Advantage:
  • No additional cost for hardware
  • One single PC can provide several services
  • Fits a VMware-style virtualized infrastructure
  • Optimizes resource use

Disadvantage:
  • Decreased PC performance, because the extra load consumes a large amount of processor time and memory
  • Achieving reliability may add extra cost
  • Standard 10/100 Ethernet interfaces do not provide enough throughput for iSCSI; iSCSI needs Gigabit Ethernet


References list:
  1. Kurose, J.F. & Ross, K.W. (2010) Computer Networking: A Top-Down Approach. 5th ed. Boston: Addison Wesley
  2. University of Liverpool/Laureate Online Education (2011) Lecture notes from Computer Networking Module Seminar 5 [Online]. Available from: University of Liverpool/Laureate Online Education VLE (Accessed: 2 September 2011)
  3. VMware Infrastructure 3: iSCSI Design Considerations and Deployment Guide [Online]. Available from: http://www.vmware.com/files/pdf/iSCSI_design_deploy.pdf (Accessed: 2 September 2011)


TCP Friendly


By Raul Bernardino
Introduction:
Bandwidth use over networks and the Internet is a general concern. For instance, when network users stream a lot of video or audio over the Internet, UDP can occupy almost all of the network resources, and the network becomes congested.
To overcome network congestion, TCP-friendly rate control plays an important role in managing congestion in the network.
The TCP segment structure is shown in the picture below:

How TCP works is shown in the picture below:


The sequence number (seq. #) is the byte-stream number of the first byte of data in a segment.
ACKs are cumulative: an ACK carries the sequence number of the next byte expected from the other end.
What is congestion? Congestion is an overloaded network: too much data, sent too fast by senders for the network to handle. The consequences are packet loss and long transfer delays.
TCP congestion control works by bandwidth probing: the sender increases its transmission rate as ACKs are received until packet loss occurs, then slows the transmission rate down, as the picture below shows:
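The probing behaviour above is additive-increase, multiplicative-decrease (AIMD): the congestion window grows by one segment per round trip until a loss, then halves. In the sketch below the loss points are fixed assumptions, just to show the sawtooth shape:

```python
# Sketch of AIMD bandwidth probing: cwnd in segments per round.
# The rounds at which loss occurs are assumed inputs, not a real
# network measurement.

def aimd(rounds, loss_at):
    cwnd, trace = 1, []
    for t in range(rounds):
        trace.append(cwnd)
        if t in loss_at:
            cwnd = max(1, cwnd // 2)  # multiplicative decrease on loss
        else:
            cwnd += 1                 # additive increase per ACKed round
    return trace

print(aimd(8, loss_at={4}))  # [1, 2, 3, 4, 5, 2, 3, 4]
```

The rising ramps followed by the halving at round 4 are exactly the sawtooth that the picture above depicts.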


There have been several studies and research efforts on overcoming network congestion. For instance:
a) Sisalem, D., Emanuel, F. and Schulzrinne, H. used a Direct Adjustment Algorithm (DAA) to adapt the transmission rate under network congestion, especially for multimedia applications; the control relies on RTP.
b) Padhye, J., Kurose, J., Towsley, D. and Koodli, R. used a congestion-control algorithm for unicast traffic based on a revised version of the TCP-friendly equation. Their algorithm focuses on the round-trip time: if packets are lost within a round-trip time, the sender reduces its rate; otherwise the sender doubles its initial sending rate.
c) Tan, W. and Zakhor, A. (Oct. 1999) of the University of California, Berkeley, allow users to subscribe to hierarchically encoded, FEC-protected data; a formula-based approach determines the available bandwidth.
d) Floyd, S., Handley, M., Padhye, J. and Widmer, J. (Mar. 2000) proposed equation-based congestion control for unicast traffic. Their mechanism is designed to respond to persistent congestion, to avoid unnecessary rate fluctuations, to avoid reacting to transient noise, and to remain robust over a wide range of timescales.
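The equation-based approaches above all build on a model of TCP throughput. A widely quoted simplified form (given in Kurose & Ross) puts average TCP throughput at 1.22 × MSS / (RTT × √L), where L is the loss rate; a TCP-friendly sender caps its sending rate at this value so it competes fairly with TCP flows:

```python
# Simplified TCP-friendly rate: 1.22 * MSS / (RTT * sqrt(loss_rate)).
# The example MSS, RTT, and loss values are illustrative assumptions.
import math

def tcp_friendly_rate(mss_bytes, rtt_s, loss_rate):
    """Approximate fair sending rate in bytes per second."""
    return 1.22 * mss_bytes / (rtt_s * math.sqrt(loss_rate))

# e.g. a 1500-byte MSS, 100 ms RTT, and 1% loss:
print(round(tcp_friendly_rate(1500, 0.1, 0.01)))  # 183000 bytes/s
```

Note how the rate falls as either the RTT or the loss rate grows, which is why equation-based senders slow down smoothly under persistent congestion instead of oscillating like raw AIMD.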


Conclusion: All of these researchers aim to optimize bandwidth with equations based on network congestion control. How the Internet's architecture and bandwidth will be developed and managed in the future is still unclear. If bandwidth were cheap, every agency would simply provision more bandwidth than its own demand requires, without having to manage it.
References list:
1. Floyd, S. (2008) TCP Friendly Rate Control (TFRC), University College London [Online]. Available from: http://www.icir.org/floyd/papers/rfc5348.pdf (Accessed: 20 August 2011)
2. Wang, Q. et al. (n.d.) TCP-Friendly Congestion Control Schemes in the Internet [Online]. Available from: http://www.sics.se/~runtong/11.pdf (Accessed: 20 August 2011)
3. Hierarchical FEC (HFEC) [Online]. Available from: http://www-video.eecs.berkeley.edu/~dtan/icip99slide.pdf (Accessed: 20 August 2011)
4. Equation-Based Congestion Control for Unicast Applications: The Key TFRC Documents [Online]. Available from: http://www.icir.org/tfrc/ (Accessed: 20 August 2011)