Network Management


Task 1


A. Network Fault and Network Management


            Network management refers to the activities, methods, procedures, and tools used to operate, administer, maintain, and provision a networked system (Clemm 2006).


1. Playback Attacks


            A playback attack is one in which a thief observes, records, and later replays a previously transmitted message (Jurgen 1999, p. 12). It is called a playback attack because the attacker records an interaction or communication and plays it back afterwards (Zwicky & Cooper 2000, p. 321). Authentication alone is therefore not enough to protect the network; the authentication scheme must prevent the reuse of any interaction or transmitted message. Nonces, numbers or bit strings that a protocol uses only once in a lifetime, help guard the network against playback attacks.


            Playback attacks can be prevented with SNMPv3, which adopts a nonce-like approach: the sender must include in each message a value based on a counter maintained at the receiver.
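To illustrate the idea, below is a minimal sketch of nonce-based replay protection in general, not the actual SNMPv3 message format or its engine-counter mechanism:

```python
import secrets

class Receiver:
    """Tracks issued nonces so each one can authenticate exactly one message."""

    def __init__(self):
        self.outstanding = set()

    def new_nonce(self):
        n = secrets.token_hex(16)        # unpredictable value, used once in a lifetime
        self.outstanding.add(n)
        return n

    def accept(self, message, nonce):
        if nonce not in self.outstanding:
            return False                 # unknown or already-consumed nonce: reject
        self.outstanding.remove(nonce)   # consume it, so a replay will fail
        return True

rx = Receiver()
n = rx.new_nonce()
print(rx.accept("set sysName=router1", n))   # True: first, legitimate use
print(rx.accept("set sysName=router1", n))   # False: recorded-and-replayed copy
```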


2. Unauthorized Access


            Unauthorized access to a network must be prevented, because much of the information and data a company or organization runs on is confidential. Unauthorized access to data can cause alteration or, even worse, loss of data. The SNMP protocol offers a way to block unauthorized users: using privilege levels and access control lists (ACLs), the network manager can detect unauthorized access and prevent it from reaching the network.


            SNMPv3 offers view-based access control, which governs which network management information a given level of user is allowed to query or set.
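As a rough illustration of this idea, the sketch below maps each user to the OID subtrees it may read or write (the table and function names are hypothetical; the real SNMPv3 view-based access control model is considerably richer):

```python
# Hypothetical view table: user -> OID prefixes permitted per operation.
VIEWS = {
    "operator": {"read": ["1.3.6.1.2.1"], "write": []},              # read-only MIB-2
    "admin":    {"read": ["1.3.6.1"],     "write": ["1.3.6.1.2.1"]},
}

def allowed(user, op, oid):
    """Return True if `user` may perform `op` ('read' or 'write') on `oid`."""
    prefixes = VIEWS.get(user, {}).get(op, [])
    return any(oid.startswith(p) for p in prefixes)

print(allowed("operator", "read",  "1.3.6.1.2.1.1.5.0"))   # True
print(allowed("operator", "write", "1.3.6.1.2.1.1.5.0"))   # False: blocked
```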


Task 2


A. AES Key


1. Assumption


10,000 up-to-date PCs, each with:


• Intel Pentium 4, 2.6 GHz processor


• No hyper-threading


• 1 key tested per 10 cycles


Key size: 128 bits


2. Solution


            1 gigahertz (GHz) is equal to 1,000,000,000 cycles per second (Ganssle & Barr 2003, p. 115). The Advanced Encryption Standard (AES) supports key sizes of 128, 192, and 256 bits.


            A brute-force attack on a 128-bit key must try up to 2^128, or about 3.4 × 10^38, possible keys.


            Each PC can process 2,600,000,000 cycles per second. This is divided by 10 because testing each key takes 10 cycles:


Each PC: 2,600,000,000 / 10 = 260,000,000 keys per second


10,000 PCs: 260,000,000 × 10,000 = 2.6 × 10^12 keys per second


To derive the AES key: (3.4 × 10^38) / (2.6 × 10^12) ≈ 1.3 × 10^26 seconds


                                                                                    ≈ 3.6 × 10^22 hours
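The arithmetic can be checked with a short script; the figures are the assumptions listed above, and 2^128 is the exact size of the 128-bit key space:

```python
# Worked check of the brute-force estimate.
keys_total     = 2**128                        # ~3.4 x 10^38 possible 128-bit keys
cycles_per_sec = 2_600_000_000                 # one 2.6 GHz Pentium 4
keys_per_sec   = cycles_per_sec // 10          # 10 cycles per key tried
fleet_rate     = keys_per_sec * 10_000         # 10,000 PCs working in parallel

seconds = keys_total / fleet_rate
print(f"{seconds:.1e} seconds")                    # ~1.3e+26 s
print(f"{seconds / 3600:.1e} hours")               # ~3.6e+22 h
print(f"{seconds / (3600 * 24 * 365):.1e} years")  # ~4.1e+18 years
```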


B. Advantages and Disadvantages of Public and Private Key Cryptography


            Cryptography has a long and interesting history. It was first used by the Egyptians some 4,000 years ago, was employed during the world wars, and has served as a tool for protecting national secrets and strategies (Menezes & Van Oorschot 1997, p. 1). Cryptology is concerned with the conceptualization, definition, and construction of computing systems that address security concerns (Goldreich 2001, p. i).


  • Private Key Cryptography – the only type of cryptography available until the mid-1970s. In this type of cryptography, the key used for encryption is the same key used for decryption (Hirschfeld 1997, p. 262). It is also called a symmetric cipher, because the sender and the receiver must both know the key in order to communicate (p. 263).
  • Advantages


i. Speed – the major advantage of private key cryptography over public key cryptography is speed: private key ciphers are faster (Hirschfeld 1997, p. 265), because the keys they use are short (Menezes & Van Oorschot 1997, p. 31).


ii. Security – private key cryptography offers a high level of security as long as the key strings used are kept truly secret (Nielsen & Chuang 2000, p. 583).


iii. Can be composed to produce stronger ciphers – constructions that are simple to analyze, but weak on their own, can be combined to build strong product ciphers (Menezes & Van Oorschot 1997, p. 31).


  • Disadvantages

i. Need for a secure channel – private key cryptography requires a secure channel through which the two parties can agree on and transport their keys (Hirschfeld 1997, p. 264).


ii. Bulky number of keys – every pair of people communicating with private key cryptography needs its own unique key, which does not scale to a large number of users because the total number of keys grows rapidly (Hirschfeld 1997, p. 264).


iii. Limited – because private key cryptography depends on a secure channel between sender and receiver, it cannot provide authenticity in an open network (Hirschfeld 1997, p. 264).


     


  • Common Cipher

i. International Data Encryption Algorithm (IDEA) – invented by James Massey and Xuejia Lai in Zurich, Switzerland. It uses a 128-bit key and is a building block of PGP (Hirschfeld 1997, p. 263).
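To make the symmetric idea concrete, the sketch below uses the Fernet recipe from the Python cryptography package (an AES-based symmetric cipher standing in for IDEA, which modern libraries rarely ship). Note how one shared key performs both encryption and decryption:

```python
from cryptography.fernet import Fernet  # pip install cryptography

# One shared secret key: whoever holds it can both encrypt and decrypt.
key = Fernet.generate_key()
cipher = Fernet(key)

token = cipher.encrypt(b"meet at dawn")   # sender side
plain = cipher.decrypt(token)             # receiver side, using the same key
print(plain)                              # b'meet at dawn'
```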


  • Public Key Cryptography – invented in 1976, it overcame the three disadvantages of private key cryptography. The main difference between the two is that public key cryptography uses two keys: a public key that is disseminated openly, and a related private key that is known only to the user and never intentionally shared (Hirschfeld 1997, p. 264).
  • Advantages


i. Security – administering keys on a network requires only a functionally trusted third party (TTP), as opposed to an unconditionally trusted one. Depending on the mode of usage, the TTP may be needed only off-line rather than in real time (Menezes & Van Oorschot 1997, p. 31).


ii. Prevents accumulation of keys – this answers the private key weakness of accumulating a bulky number of keys in the network: a key pair may remain unchanged for considerable periods of time (Menezes & Van Oorschot 1997, p. 31).


iii. Efficient digital signature mechanism – the key that describes the public verification function is typically much smaller than its private key counterpart (Menezes & Van Oorschot 1997, p. 31).


  • Disadvantages

i. Slow – the most significant weakness of public key cryptography is that it is slower than private key cryptography.


ii. Larger key size – public key cryptography uses larger keys than private key cryptography, and public key signatures are larger than the tags that provide data-origin authentication in private key techniques (Menezes & Van Oorschot 1997, p. 32).


iii. History – public key cryptography lacks the extensive history of private key cryptography, having been discovered only in the mid-1970s (Menezes & Van Oorschot 1997, p. 32).


  • Common Cipher

i. RSA – named after its developers, the mathematicians Ronald Rivest, Adi Shamir and Leonard Adleman (Kessler 1998). It is used by most software products for key-exchange encryption and for digital signatures over small blocks of data, and it supports variable encryption-block sizes and variable key sizes (Kessler 1998).
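A minimal RSA sketch using the Python cryptography package is shown below (the key size and padding are illustrative choices). Anyone holding the public key can encrypt, but only the holder of the private key can decrypt:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# The key pair: the public half may be published; the private half stays secret.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ciphertext = public_key.encrypt(b"session key", oaep)   # anyone can do this
plaintext = private_key.decrypt(ciphertext, oaep)       # only the owner can
print(plaintext)                                        # b'session key'
```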


C. Digital Signature


  • Creation, Transmission and Verification of Digital Signature – digital signatures provide message-by-message authentication. A digital signature is a block of bits attached to each outgoing message to verify the sender's identity. It helps minimize attacker-in-the-middle threats and provides nonrepudiation on a message-by-message basis (Bidgoli 2004, p. 527).

  • Creation of Digital Signature – the process starts when the sender creates the message to be sent, the original plaintext. The sender runs the original long message through a hash function to create a message digest, and then encrypts the digest with his or her private key (Kurose & Ross 2005, p. 681). Hashing is a mathematical process applied to a string of bits of any length to produce a fixed-size result, the hash (Bidgoli 2004, p. 527). The encrypted, digitally signed message digest is then attached to the original message in plaintext form and sent to the receiver (Kurose & Ross 2005, p. 681).
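The creation step can be sketched with the Python cryptography package (RSA with PSS padding is one common, illustrative choice); sign() internally hashes the message and encrypts the digest with the private key:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

message = b"Pay Bob 100 dollars"
# sign() hashes the message (SHA-256 here) and encrypts the digest with
# the private key, producing the digital signature to attach to the message.
signature = private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
composite = (message, signature)   # plaintext plus signed digest, ready to send
```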

Figure 1 Digital Signature Creation (diagram omitted). Adapted from Bidgoli (2004, p. 527).


  • Transmission and Confidentiality – after creating the digital signature, the sender forms a composite message by concatenating the bits of the digital signature to the bits of the original plaintext message. For confidentiality, the composite message is encrypted with the symmetric key the sender shares with the receiver, so the transmitted message cannot be read en route by an attacker-in-the-middle. The receiver then decrypts the transmitted message with the shared symmetric key, restoring the composite message (Bidgoli 2004, p. 527).

  • Verification – the receiver decrypts the digital signature with the sender's public key, recovering the message digest, and applies the same hash function to the received plaintext to compute a second digest. If the two digests are the same, the receiver can be sure that the sender is the author of the message, and the match also demonstrates the integrity of the message (Kurose & Ross 2005, p. 681).
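Verification can be sketched in the same setting (self-contained, so the signing step from above is repeated); verify() recomputes the digest of the received message and compares it with the digest recovered from the signature, raising InvalidSignature on any mismatch:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Key pair, message, and signature as produced in the creation sketch.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"Pay Bob 100 dollars"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(message, pss, hashes.SHA256())

public_key = private_key.public_key()    # distributed openly to receivers
try:
    # Recompute the digest of `message` and compare it against the digest
    # recovered from `signature` with the sender's public key.
    public_key.verify(signature, message, pss, hashes.SHA256())
    print("valid: sender authenticated, message intact")
except InvalidSignature:
    print("invalid: message altered or signed by someone else")
```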

Figure 2 Digital Signature Transmission (diagram omitted). Adapted from Bidgoli (2004, p. 527).


     


     


Figure 3 Digital Signature Verification (diagram omitted). Adapted from Bidgoli (2004, p. 527).


    D. Message Authentication Code (MAC) – uses a secret key to generate a small block of data. The technique assumes that the two communicating parties, say A and B, share a common secret key KAB. When A has a message M to send to B, A computes the MAC as a function of the key and the message, MAC = F(KAB, M). The message and the code are then transmitted to the intended recipient. On receiving them, the recipient performs the same computation on the received message, using the same secret key, to produce a new MAC. The received code is compared with the calculated one: if they match, the receiver can be sure that the message has not been altered (Stallings 2007, p. 715).
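A concrete instance of this scheme is HMAC, sketched below with Python's standard hmac module, with SHA-256 standing in for Stallings's generic function F:

```python
import hashlib
import hmac

key = b"shared secret KAB"            # known only to A and B

def mac(message: bytes) -> bytes:
    # F(KAB, M): a keyed hash over the message.
    return hmac.new(key, message, hashlib.sha256).digest()

# Sender A transmits the message together with its code.
message = b"transfer 100 to account 42"
code = mac(message)

# Receiver B recomputes the MAC over what arrived and compares the codes.
print(hmac.compare_digest(code, mac(message)))                       # True: unaltered
print(hmac.compare_digest(code, mac(b"transfer 900 to account 7")))  # False: tampered
```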


Figure 4 MAC Authentication (diagram omitted). Adapted from Stallings (2007, p. 715).


E. Digital Signature vs. MAC


Table 1 Difference between Digital Signature and MAC

                        Digital Signature                      MAC
1. Encryption           Public key cryptography                Private key cryptography
2. Authentication       All parties to the conversation        Individual
3. Size of signature    Long and not very quick to verify      Short (Lu Ma & Tsai 2006, p. 21)
                        (Lu Ma & Tsai 2006, p. 21)
4. Document             Transferable                           Not transferable
5. Third parties        Required (nonrepudiation)              Not required


    Both digital signatures and MACs are used to provide security and authenticity for data and messages transferred over a network. They differ on five factors, of which encryption is the most significant: digital signatures use public key cryptography, while MACs use the opposite, private key cryptography.


                Because a digital signature is transferable, it can be used to share receipts and other important documents; a MAC cannot.


                Another difference is that the message authentication setting does not require a third party to verify the validity of authentication tags produced by the designated users, whereas the digital signature setting does involve third parties to validate the authenticity of signatures produced by other users (Goldreich 2001, p. 5).


    Task 3 TCP Congestion Control Algorithms


                Transmission Control Protocol (TCP) is the protocol that handles the transport layer over the Internet Protocol (IP); it provides reliable, connection-oriented transport with assured delivery of data and information (Clark 2003, p. 283). Given the ubiquity of the protocol and the wide use of the Internet, congestion is inevitable. Congestion occurs when there is too much traffic in the network (Disanayake & Wickramage n.d.).


  • TCP Congestion Control Algorithm Elements
  • TCP-CC Tahoe – developed by Van Jacobson and Karels in response to the series of Internet congestion collapses beginning in October 1986 (cited in Mo & La 1998, p. 1). It uses slow start, congestion avoidance, and fast retransmit mechanisms (p. 61). After any type of loss event, TCP Tahoe unconditionally cuts its congestion window to one maximum segment size and then enters the slow-start phase (Kurose & Ross 2005, p. 268).


  • Slow start is followed by fast retransmission. Tahoe works well for a single loss within a congestion window, but it reacts to congestion by falling back to slow start (Janevski 2003, p. 61). During slow start, the congestion window grows exponentially, increasing by one for each acknowledgement received, and stops only when it reaches the slow-start threshold. During congestion avoidance, the window then grows linearly, by one per round-trip time (RTT) (Guizani 2004, p. 249).


    The main significance of this algorithm is its focus on conservation of packets: outgoing packets are clocked by acknowledgements, so a new packet enters the wire only as the receiver takes one off. The congestion-avoidance feature uses the principle of additive increase, multiplicative decrease. The algorithm treats packet loss as a sign of congestion, so on a loss it saves half of the current window as the new threshold, sets the congestion window (CWD) to one, and runs slow start again until the threshold is reached; it then increases linearly until it encounters another packet loss. The result is a window that grows slowly while gradually approaching the capacity of the bandwidth (University of California, Berkeley n.d., p. 1).
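This window behaviour can be sketched with a toy per-RTT simulation (window in segments, illustrative threshold; Reno, discussed next, would instead resume from half the old window rather than from one):

```python
# A toy per-RTT simulation of Tahoe-style window growth (not a real TCP stack).
def tahoe(rtts, loss_at, ssthresh=16):
    cwnd, trace = 1, []
    for rtt in range(rtts):
        if rtt in loss_at:                  # packet loss is read as congestion
            ssthresh = max(cwnd // 2, 2)    # save half the current window
            cwnd = 1                        # Tahoe: back to one segment, slow start
        elif cwnd < ssthresh:
            cwnd = min(cwnd * 2, ssthresh)  # slow start: exponential growth per RTT
        else:
            cwnd += 1                       # congestion avoidance: linear growth
        trace.append(cwnd)
    return trace

print(tahoe(rtts=14, loss_at={8}))
# [2, 4, 8, 16, 17, 18, 19, 20, 1, 2, 4, 8, 10, 11]
```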


  • TCP-CC Reno – introduced the fast recovery algorithm, which sets the congestion window to half of the current window and resumes congestion avoidance from there, improving the performance of a TCP stream with a single loss per window (Shafazand & Tjoa 2002, p. 168). Problems occur when multiple packets are dropped from one window of data (Janevski 2003, p. 61). Reno provokes losses in order to estimate the available bandwidth in the network. It was first implemented and used in 1990 (Mo & La 1998, p. 2). It skips the slow-start phase after encountering three duplicate acknowledgements: whatever packet was lost, three duplicate acknowledgements indicate that later segments have already been received, so the network can still deliver data and a full return to slow start is unnecessary (Kurose & Ross 2005, pp. 256-269).

  • While there are no packet losses in the network, Reno continues to increase the window size by one every round trip. When it encounters a packet loss, it reduces the window to one half of its current size, the process of additive increase, multiplicative decrease (Mo & La 1998, p. 2).


  • TCP-CC Vegas – uses a more sophisticated bandwidth estimation technique, based on the difference between the expected flow rate and the actual flow rate achieved in the network. When the network is congested, the actual flow rate falls below the expected flow rate, and Vegas adjusts its sending rate to bring the two close together (Mo & La 1998, p. 2). It takes a proactive approach to congestion control, focusing on detecting congestion before it can happen (Kahng & Goto 2004, p. 322).

  • It attempts to avoid congestion while maintaining good throughput by detecting incipient congestion in the routers between source and destination before packet loss occurs; it then lowers its rate linearly when it predicts a coming packet loss, based on its observation of the round-trip time (Kurose & Ross 2005, pp. 269-270).
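The expected-versus-actual comparison can be sketched as follows (alpha and beta are illustrative thresholds; the real Vegas parameters and timing differ):

```python
# A toy sketch of the Vegas idea: compare the rate the window *should* achieve
# (at the minimum RTT ever observed) with the rate it actually achieves now.
def vegas_update(cwnd, base_rtt, current_rtt, alpha=2, beta=4):
    expected = cwnd / base_rtt             # rate if no queuing were happening
    actual = cwnd / current_rtt            # rate achieved this round trip
    diff = (expected - actual) * base_rtt  # estimated packets queued in routers
    if diff < alpha:
        return cwnd + 1                    # little queuing: probe for more bandwidth
    if diff > beta:
        return cwnd - 1                    # queues building: back off before loss
    return cwnd                            # in between: hold steady

cwnd = 20
print(vegas_update(cwnd, base_rtt=0.100, current_rtt=0.101))  # 21 (grow)
print(vegas_update(cwnd, base_rtt=0.100, current_rtt=0.140))  # 19 (back off)
```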


  • Difference Between the TCP Congestion Control Algorithm Elements

  • A study conducted by Kenji Kurata, Go Hasegawa and Masayuki Murata of Osaka University compares TCP Reno and TCP Vegas. Using their network model (see Appendix A), they ran a series of tests on how the two algorithms react after detecting packet loss (see Appendix B). TCP Vegas controls the window size according to the observed RTTs of sent packets, so as to limit the number of queued packets in the router buffers: as the RTT grows, TCP Vegas keeps decreasing the window size. TCP Reno, by contrast, continues to increase the window size in spite of the increased RTT (Kurata & Hasegawa n.d.).


     


     


     


Table 2 Difference between the Three TCP-CC Algorithms

Measure                                TCP-CC Tahoe    TCP-CC Reno    TCP-CC Vegas
RTT Variance Estimation                Yes             Yes            Yes
Exponential RTO Backoff                Yes             Yes            Yes
Karn's Algorithm                       Yes             Yes            Yes
Slow Start                             Yes             Yes            Yes
Dynamic Window Sizing on Congestion    Yes             Yes            Yes
Fast Retransmit                        Yes             Yes            Yes
Fast Recovery                                          Yes            Yes
Modified Fast Recovery                                                Yes

Source: (Stallings 2006, p. 683)


    The main difference between TCP Tahoe and TCP Reno is that TCP Reno uses the fast recovery mechanism while Tahoe does not. TCP Tahoe handles a single drop in a flight efficiently, but it cannot handle multiple packet drops in a single flight very well. TCP Tahoe also does not remember outstanding data when it switches to slow start, whereas TCP Reno can handle such an event (Guizani 2004, p. 253). TCP Vegas, for its part, has an edge over both: it can switch to fast retransmit earlier, which improves its performance, and it combats losses caused by sporadic wireless channel errors more efficiently, because it can reduce the congestion window to 3/4 of its size, unlike the first two algorithms, which can only cut it to 1/2 (Guizani 2004, p. 253).


     


    References


     


    A Comparative Analysis of TCP Tahoe, Reno, New Reno, SACK and Vegas n.d., University of California, Berkeley, viewed 14 November 2007, <http://inst.eecs.berkeley.edu/~ee122/fa05/projects/Project2/SACKRENEVEGAS.pdf>


     


    Bidgoli, H 2004, The Internet Encyclopedia, John Wiley and Sons


     


    Clark, M 2003, Data Networks, IP and the Internet: Protocols, Design and Operation, John Wiley and Sons


     


    Clemm, A 2006, Network Management Fundamentals, Cisco Press


     


    Disanayake, C & Wickramage, N n.d., Congestion Control Algorithms in TCP, Department of Computer Science & Engineering, Faculty of Engineering, University of Moratuwa, viewed 24 January 2008, <http://www.cse.mrt.ac.lk/research/NRG/slides/20030530.pps#256,1,Congestion Control Algorithms in TCP>


     


    Focardi, R (ed.) & Gorrieri, R (ed.) 2004, Foundation of Security Analysis and Design II: FOSAD 2001/2002 Tutorial Lectures, Springer


     


    Ganssle, J & Barr, M 2003, Embedded Systems Dictionary, Focal Press


     


    Goldreich, O 2001, Foundations of Cryptography, Cambridge University Press


     


    Guizani, M 2004, Wireless Communications Systems and Networks, Springer


     


    Hirschfeld, R (ed.) 1997, Financial Cryptography: First International Conference, FC '97, Anguilla, British West Indies, February 1997, Proceedings, Springer


     




    Janevski, T 2003, Traffic Analysis and Design of Wireless IP Networks, Artech House


     


    Jurgen, R 1999, Automotive Electronics Handbook, McGraw-Hill Professional


     


    Kahng, H (ed.) & Goto, S (ed.) 2004, Information networking: Networking for Broadband and Mobile Networks, International Conference, ICOIN 2004, Busan, Korea, February 2004, Revised Selected Papers, Springer


    Kessler, G 1998, An Overview of Cryptography, Gary Kessler, viewed 16 November 2007, <http://www.garykessler.net/library/crypto.html>


     


    Kurata, K, Hasegawa, G & Murata, M n.d., Fairness Comparison Between TCP Reno and TCP Vegas for Future Deployment of TCP Vegas, Osaka University, viewed 24 January 2008, <http://www.isoc.org/inet2000/cdproceedings/2d/2d_2.htm>


     


    Kurose, J & Ross, K 2005, Computer Networking: A Top-Down Approach Featuring the Internet, Addison-Wesley


     


    Lu Ma, J & Tsai, JP 2006, Security Modeling and Analysis of Mobile Agent Systems, Imperial College Press


     


    Menezes, A, Van Oorschot, P & Vanstone, S 1997, Handbook of Applied Cryptography, CRC Press


     


    Mo, J, La, R, Anantharam, V & Walrand, J 1998, Analysis and Comparison of TCP Reno and Vegas, Department of Electrical Engineering and Computer Sciences, University of California at Berkeley, viewed 14 November 2007, <http://netlab.caltech.edu/FAST/references/Mo_comparisonwithTCPReno.pdf>


     


    Nielsen, M & Chuang, I 2000, Quantum Computation and Quantum Information, Cambridge University Press


     


    Peltier, T & Peltier J 2006, Complete Guide to CISM Certification, CRC Press


     


    Shafazand, H & Tjoa, M 2002, EurAsia-ICT 2002: First EurAsian Conference, Shiraz, Iran, October 2002, Springer


     


    Stallings, W 2007, Data and Computer Communications, Pearson Prentice Hall


     


    Zwicky, E, Cooper, S & Chapman, B 2000, Building Internet Firewalls, O’Reilly

