Computer Networks Questions and Answers
Q1. A packet switch receives a packet and determines the outbound link to which the packet should be forwarded. When the packet arrives, one other packet is halfway done being transmitted on this outbound link and five other packets are waiting to be transmitted. Packets are transmitted in order of arrival. Suppose all packets are 2000 bytes and the link rate is 5 Mbps. What is the queuing delay for the packet?
Ans- The queuing delay is the time the arriving packet must wait for all the bits ahead of it to finish transmitting:
Queuing delay = (bits ahead of the packet) / R
where R is the transmission rate of the outbound link. Ahead of the arriving packet are:
- half of one packet still being transmitted: 0.5 × 2000 bytes = 1000 bytes, and
- five full packets waiting in the queue: 5 × 2000 bytes = 10000 bytes.
Converting to bits (1 byte = 8 bits), that is 11000 bytes × 8 = 88000 bits. With R = 5 Mbps = 5 × 10^6 bps:
Queuing delay = 88000 bits / (5 × 10^6 bps) = 0.0176 s = 17.6 ms
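As a quick check, the same arithmetic can be reproduced with a short Python sketch (the variable names are purely illustrative):

```python
# Queuing delay = bits ahead of the arriving packet / link rate.
PACKET_BYTES = 2000
LINK_RATE_BPS = 5e6  # 5 Mbps

# Half a packet still in transmission plus five full packets in the queue.
bits_ahead = (0.5 + 5) * PACKET_BYTES * 8
queuing_delay = bits_ahead / LINK_RATE_BPS

print(f"Queuing delay = {queuing_delay * 1000:.1f} ms")  # 17.6 ms
```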
Q2. Suppose users share a 5 Mbps link. Also suppose each user transmits continuously at 512 Kbps when transmitting, but each user transmits only 20 percent of the time.
a) When circuit switching is used, how many users can be supported?
b) Suppose now there are five users. Find the probability that at any given time, three users are transmitting simultaneously. Find the fraction of time during which the queue grows.
Ans-
a) In circuit switching, each user is allocated a dedicated 512 Kbps circuit for the entire session, whether or not the user is actively transmitting. So the number of users that can be supported on a 5 Mbps link is the link capacity divided by the capacity reserved for each user:
Number of users=Total link capacity/Capacity per user
Number of users=5 Mbps/0.512 Mbps
Number of users≈9.77
Since we can’t have a fraction of a user, the maximum number of users supported is 9.
b) The probability that exactly three of the five users are transmitting simultaneously follows the binomial distribution:
P(X = k) = C(n, k) × p^k × (1 − p)^(n − k)
where:
- n is the number of users,
- k is the number of users transmitting, and
- p is the probability that a given user is transmitting at any instant.
Here n = 5, k = 3, and p = 0.2:
P(X = 3) = C(5, 3) × 0.2^3 × 0.8^2 = 10 × 0.008 × 0.64 = 0.0512
So, the probability that at any given time exactly three users are transmitting simultaneously is approximately 0.0512.
For the fraction of time during which the queue grows: the queue grows only when the aggregate transmission rate exceeds the link rate, i.e. when more than 5 Mbps / 512 Kbps ≈ 9.77 (that is, 10 or more) users transmit at once. With only five users, the aggregate demand is at most 5 × 512 Kbps = 2.56 Mbps, which never exceeds the 5 Mbps link capacity, so the queue never grows and the fraction of time is 0.
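The binomial calculation can be reproduced with a few lines of Python (a sketch using the standard-library math.comb):

```python
from math import comb

n, p = 5, 0.2  # five users, each transmitting 20% of the time

def p_exactly(k):
    """Probability that exactly k of the n users are transmitting at once."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

print(f"P(exactly 3 transmitting) = {p_exactly(3):.4f}")  # 0.0512

# The 5 Mbps link can carry floor(5 Mbps / 512 Kbps) = 9 simultaneous users,
# so the queue grows only when 10 or more users transmit -- impossible with n = 5.
p_queue_grows = sum(p_exactly(k) for k in range(10, n + 1))
print(f"P(queue grows) = {p_queue_grows}")  # 0
```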
Q3. Suppose a process in Host C has a UDP socket with port number 6789. Suppose both Host A and Host B each send a UDP segment to Host C with destination port number 6789. Will both of these segments be directed to the same socket at Host C? If so, how will the process at Host C know that these two segments originated from two different hosts ?
Ans- Yes, both UDP segments with destination port number 6789 from Host A and Host B will be directed to the same socket at Host C: a UDP socket is identified only by its destination IP address and destination port number, and that two-tuple is all that is used to demultiplex incoming segments.
To distinguish segments originating from different hosts, the process at Host C examines the source IP address (carried in the IP header) and the source port number (carried in the UDP header) of each arriving segment; the socket API hands these to the application along with the data (e.g., via recvfrom). So the process can tell which host, and which process on that host, sent each segment, even though both segments arrive on the same socket.
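A minimal sketch of how Host C's process might see this, using Python's standard socket API (the port 6789 comes from the question; everything else is illustrative):

```python
import socket

# One UDP socket bound to port 6789 receives segments from every sender.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 6789))

while True:
    data, (src_ip, src_port) = sock.recvfrom(2048)
    # The source address returned by recvfrom identifies which host sent the segment.
    print(f"Received {len(data)} bytes from {src_ip}:{src_port}")
```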
Q4. If the sender has precise knowledge of a consistent round-trip delay between itself and the receiver, does the protocol rdt 3.0 still require a timer, considering the possibility of packet loss? Please provide an explanation.
Ans- Yes, even if there is precise knowledge of round trip delay, the protocol RDT 3.0 would still need to consider the possibility of packet loss. RDT (Reliable Data Transfer) protocols, including RDT 3.0, are designed to ensure reliable and ordered delivery of data despite potential issues like packet loss, duplication, and reordering.
Even with a precisely known round-trip delay, packets can still be lost due to network congestion, router buffer overflow, bit errors, or hardware failures. RDT 3.0 relies on acknowledgments, a countdown timer, and retransmissions to recover from such losses. Knowing the RTT exactly only tells the sender how long to wait before concluding that a packet or its ACK has been lost, i.e. it lets the timeout be set to exactly one RTT; the timer itself is still required, because a timeout is the only way the sender can detect the loss and trigger a retransmission.
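A minimal stop-and-wait sender sketch in Python illustrates the point; the socket setup, the KNOWN_RTT value, and the packet format are assumptions for illustration, not part of rdt 3.0 itself:

```python
import socket

KNOWN_RTT = 0.050  # precisely known round-trip time (assumed value), in seconds

def send_reliably(sock, dest, packets):
    """Stop-and-wait sender: even with a known RTT, a timer is the only way
    to notice that a packet (or its ACK) was lost."""
    sock.settimeout(KNOWN_RTT)                # timer set to exactly one RTT
    for seq, payload in enumerate(packets):
        pkt = bytes([seq % 2]) + payload      # 1-byte alternating sequence number
        while True:
            sock.sendto(pkt, dest)
            try:
                ack, _ = sock.recvfrom(16)
                if ack and ack[0] == seq % 2:
                    break                     # correct ACK received; send next packet
            except socket.timeout:
                pass                          # no ACK within one RTT: assume loss, retransmit
```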
Q5. One of the popular OTT service providers wants to serve a large number of web servers in their web server cluster with modularization, so they planned to use a multithreaded web server (mws) to minimize latency. The mws takes 500 usec to accept a request and check the cache. Half the time the file is found in the cache and returned immediately. The other half of the time the module has to block for 9 msec while its disk request is queued and processed.
a) What is the CPU utilization of the webserver before modularization?
b) How many modules should the server have to keep the CPU busy all the time (assuming the disk is not a bottleneck)?
Ans- a) Before modularization the server handles one request at a time, so the CPU sits idle whenever a request is blocked waiting for the disk. Every request costs 500 usec (0.5 msec) of CPU time to accept it and check the cache; half of the requests additionally block for 9 msec on the disk.
Average time per request = Time to check cache + 0.5 × Time blocked on disk
Average time per request = 0.5 msec + 0.5 × 9 msec = 0.5 msec + 4.5 msec = 5 msec
CPU utilization = CPU time per request / Average time per request
CPU utilization = 0.5 msec / 5 msec = 0.1, or 10%
b) Each module uses the CPU for 0.5 msec out of an average cycle of 5 msec, i.e. 10% of the time. To keep the CPU busy all the time, there must be enough modules so that whenever one is blocked on the disk, another has CPU work to do:
Number of modules needed = Average time per request / CPU time per request
Number of modules needed = 5 msec / 0.5 msec = 10
Therefore, the server should have 10 modules to keep the CPU busy all the time, assuming the disk is not a bottleneck.
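The same arithmetic as a short Python check (all values taken from the question):

```python
CACHE_CHECK = 0.5e-3   # 500 usec of CPU work per request
DISK_BLOCK = 9e-3      # 9 msec disk wait, incurred by half of the requests
P_MISS = 0.5           # fraction of requests that miss the cache

avg_request_time = CACHE_CHECK + P_MISS * DISK_BLOCK    # 5 msec
cpu_utilization = CACHE_CHECK / avg_request_time        # 0.10
modules_needed = avg_request_time / CACHE_CHECK         # 10

print(f"CPU utilization (single module): {cpu_utilization:.0%}")
print(f"Modules needed to keep the CPU busy: {modules_needed:.0f}")
```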
Q6. Suppose Host A sends two TCP segments back to back to Host B over a TCP connection. The first segment has sequence number 110; the second has sequence number 150.
a. How much data is in the first segment?
b. Suppose that the first segment is lost but the second segment arrives at B. In the acknowledgment that Host B sends to Host A, what will be the acknowledgment number?
Ans- a. The amount of data in the first segment is the difference between the two consecutive sequence numbers: 150 − 110 = 40. Therefore, there are 40 bytes of data in the first segment.
b. TCP uses cumulative acknowledgments: the acknowledgment number is the sequence number of the next byte Host B expects to receive from Host A. If the first segment (carrying bytes 110 through 149) is lost and only the second segment arrives, Host B is still waiting for byte 110, so the acknowledgment it sends to Host A will carry acknowledgment number 110.
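A small sketch of the cumulative-ACK rule (a simplification of real TCP for illustration; the 10-byte length assumed for the second segment is arbitrary and does not affect the result):

```python
def ack_number(next_expected, received_seq, received_len):
    """Return the cumulative ACK a receiver sends after one segment arrives.
    Out-of-order data is buffered but not acknowledged (no SACK)."""
    if received_seq == next_expected:
        return next_expected + received_len   # in-order: advance past the new bytes
    return next_expected                      # gap: keep asking for the missing byte

print(ack_number(110, 150, 10))  # first segment lost, second arrives -> ACK 110
print(ack_number(110, 110, 40))  # first segment arrives in order     -> ACK 150
```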
Q7. Suppose within your Web browser you click on a link to obtain a Web page. The IP address for the associated URL is not cached in your local host, so a DNS lookup is necessary to obtain the IP address. Suppose that 5 DNS servers are visited before your host receives the IP address from DNS; the successive visits incur RTTs of RTT1, ..., RTT5. Further suppose that the Web page associated with the link contains a small amount of HTML text that references five small objects. Let RTT0 denote the RTT between the local host and the server containing the objects. Assuming zero transmission time for the objects, how much time elapses with non-persistent HTTP without parallel TCP connections, and with persistent HTTP?
Ans- In both cases the DNS lookup must complete before the first HTTP request can be sent, and it takes RTT1 + RTT2 + RTT3 + RTT4 + RTT5. The two schemes then differ in how many RTT0s are needed for the base HTML page and the five objects.
- Non-persistent HTTP without parallel TCP connections:
- RTT1 + … + RTT5 for the DNS lookup.
- 2·RTT0 for the base HTML page: one RTT0 to set up the TCP connection and one RTT0 for the HTTP request and response.
- 2·RTT0 for each of the five objects, since every object requires a new TCP connection.
- Total time = RTT1 + … + RTT5 + 2·RTT0 + 5 × 2·RTT0 = RTT1 + … + RTT5 + 12·RTT0.
- Persistent HTTP (without pipelining):
- RTT1 + … + RTT5 for the DNS lookup.
- 2·RTT0 for the base HTML page; the TCP connection then stays open.
- 1·RTT0 for each of the five objects, requested over the already-open connection.
- Total time = RTT1 + … + RTT5 + 2·RTT0 + 5·RTT0 = RTT1 + … + RTT5 + 7·RTT0.
Because the persistent connection is reused for all the objects, persistent HTTP saves one connection-setup RTT0 per object (5·RTT0 here), making it more efficient in terms of latency than non-persistent HTTP without parallel connections.
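A short Python sketch of the same bookkeeping (the RTT values are arbitrary placeholders, in milliseconds, chosen only to make the formulas concrete):

```python
# Example RTT values -- assumptions for illustration only.
dns_rtts = [10, 20, 30, 40, 50]   # RTT1 .. RTT5, in ms
rtt0 = 25                          # RTT between local host and the web server, in ms
num_objects = 5

dns_time = sum(dns_rtts)

# Non-persistent, no parallel connections: 2*RTT0 for the HTML page,
# plus 2*RTT0 (TCP setup + request/response) for every object.
non_persistent = dns_time + 2 * rtt0 + num_objects * 2 * rtt0

# Persistent (no pipelining): 2*RTT0 for the HTML page, then 1*RTT0 per object.
persistent = dns_time + 2 * rtt0 + num_objects * rtt0

print(f"Non-persistent: {non_persistent} ms (= sum of RTTi + 12*RTT0)")
print(f"Persistent:     {persistent} ms (= sum of RTTi + 7*RTT0)")
```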