🤔prompts chat🧠
question:NewReno vs CUBIC? Pros and cons?

answer:NewReno and CUBIC are both TCP congestion control algorithms, but they grow the congestion window differently and have different performance characteristics. Here's a comparison between the two:

**NewReno**

Pros:
- Good for networks with low to moderate bandwidth and latency.
- Works well in small-scale or legacy systems where packet loss is relatively common and reliably indicates congestion.
- It is a conservative algorithm, and its slow-start and congestion-avoidance phases are robust across a wide range of network conditions.
- Reacts quickly to packet loss (treated as a sign of congestion) by halving its congestion window (cwnd) and then growing it gradually as acknowledgments arrive.

Cons:
- Not optimized for high-speed, long-distance networks with large bandwidth-delay products (BDPs); its additive window growth tends to underutilize available bandwidth on such paths.
- Lower throughput and slower recovery from packet loss on high-bandwidth or high-latency paths, compared to more aggressive algorithms like CUBIC.
- Its slow window growth hurts overall throughput when packet loss is random (e.g., radio interference) rather than caused by congestion.

**CUBIC**

Pros:
- Specifically designed for high-speed, long-distance networks (high-BDP environments); it scales the congestion window aggressively to reach large window sizes without overreacting to individual packet losses.
- Improves on NewReno's performance in networks where loss does not necessarily signal congestion (such as networks with high error rates or wireless links).
- Its window growth is concave as it approaches the window size at which the last loss occurred, giving smoother increments in cwnd and more fair, stable sharing of capacity with competing flows.
- CUBIC grows cwnd as a cubic function of the time elapsed since the last loss event, rather than per acknowledgment as in TCP Reno/NewReno. Growth is cautious near the previous loss point and faster away from it, and because it is time-based rather than ACK-clocked, it is largely independent of RTT, which improves fairness between flows with different round-trip times.

Cons:
- May not respond quickly to changes in available bandwidth or sudden increases in latency, potentially leading to temporary under- or over-use of network resources.
- Can be unfair to flows using traditional algorithms like Reno or NewReno in mixed environments, because its growth is aggressive when the window is large.
- In networks with low BDP, or where packet loss is a reliable congestion signal, its aggressiveness can cause more pronounced oscillations in the sending rate, affecting network stability.

Ultimately, the best choice between NewReno and CUBIC depends on the characteristics of the network. For modern, high-speed networks, CUBIC is usually the better choice and is the default on most Linux systems. In networks where bandwidth is limited, latency is low, and packet loss reliably indicates congestion, NewReno or similar algorithms may perform just as well. As always, real-world testing in the target environment is essential to determine the best algorithm for a given scenario.
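The time-based cubic growth described above can be written down directly. The sketch below follows the notation of RFC 8312 (window W(t) = C·(t − K)³ + W_max, with scaling constant C = 0.4 and decrease factor β = 0.7); it illustrates the curve only and is not kernel code:

```python
# Sketch of CUBIC's window growth per RFC 8312; constants follow the RFC,
# but this is an illustration of the curve, not the kernel implementation.
C = 0.4      # scaling constant
BETA = 0.7   # window reduction factor applied on a loss event

def cubic_window(t, w_max):
    """cwnd (in segments) t seconds after a loss that occurred at window w_max."""
    # K: the time needed to climb back to w_max after the multiplicative decrease
    k = ((w_max * (1 - BETA)) / C) ** (1 / 3)
    return C * (t - k) ** 3 + w_max

w_max = 100.0
print(round(cubic_window(0.0, w_max), 1))  # right after the loss: BETA * w_max = 70.0
k = ((w_max * (1 - BETA)) / C) ** (1 / 3)
print(round(cubic_window(k, w_max), 1))    # at t = K the window is back at 100.0
```

The plateau around t = K is the concave-then-convex shape mentioned above: growth slows as the window approaches the previous loss point and accelerates once safely past it.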

question:BBR vs CUBIC? Pros and cons?

answer:BBR (Bottleneck Bandwidth and Round-trip propagation time) and CUBIC are both TCP congestion control algorithms, but they take fundamentally different approaches to congestion control. Each has its own set of advantages and disadvantages.

**BBR**

Pros:
- **Model-based rate control**: BBR estimates the bottleneck bandwidth and round-trip propagation time of the path and paces data at the resulting rate, so it can use the available bandwidth effectively without depending on packet loss as a congestion signal.
- **Minimized latency**: by sending at roughly the path's measured delivery rate instead of filling router buffers, BBR reduces bufferbloat, which can mean lower latency and more consistent throughput.
- **Performance in high-BDP networks**: BBR performs well in high bandwidth-delay product (BDP) environments, often outperforming CUBIC and other loss-based algorithms there.
- **Continuous path measurement**: BBR refreshes its bandwidth and RTT estimates over multiple rounds of measurement, keeping its model of the network path up to date.

Cons:
- **Fairness issues**: BBR (particularly version 1) may not coexist fairly with loss-based algorithms like CUBIC on shared links and can take a disproportionate share of the bandwidth.
- **Behavior under packet loss**: because BBRv1 largely ignores loss, it keeps sending at its modeled rate even while the link is dropping packets, which can cause high retransmission rates on very lossy or shallow-buffered paths.
- **Oscillations**: BBR's periodic probing for extra bandwidth and draining of queues can produce throughput oscillations.

**CUBIC**

Pros:
- **Scalability**: CUBIC scales the congestion window using a cubic function, making it effective on networks with high bandwidth and high latency (high BDP).
- **Stability and fairness**: CUBIC is designed to be fair to multiple flows sharing the same network path and generally avoids starving other connections of bandwidth.
- **Conventional congestion signals**: CUBIC reacts to packet drops and, where enabled, ECN (Explicit Congestion Notification) marks, the same signals most other algorithms respond to, which makes its behavior predictable in mixed environments.
- **Widely deployed**: as the default congestion control algorithm in Linux and many other systems, CUBIC is well-studied, well-tested, and continuously maintained.

Cons:
- **Reliance on packet loss**: CUBIC uses packet loss as its primary congestion indicator, which can lead to suboptimal performance where loss occurs independently of congestion (e.g., wireless networks).
- **Bandwidth utilization**: it may not use the path capacity as efficiently as BBR in some scenarios, especially when competing directly with BBR flows.
- **Bufferbloat**: CUBIC keeps growing the congestion window until buffers overflow, so on paths with deep buffers it fills them and drives up latency.

Ultimately, the choice between BBR and CUBIC depends on the specific network characteristics and the type of traffic. BBR may offer lower latency and better bandwidth utilization on high-speed paths but warrants monitoring for fairness and stability. CUBIC is the more traditional, well-established choice, especially where fairness across different traffic flows is a priority. Real-world testing is essential, as network performance can vary widely with the traffic mix, infrastructure, and current conditions.
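The quantities BBR's model tracks can be sketched in a few lines. The names (`btl_bw`, `rt_prop`) follow the terminology of the BBR paper; the gain values below are illustrative assumptions, not the real implementation's state machine:

```python
# Toy sketch of BBR's core arithmetic: the bandwidth-delay product sets
# how much data the pipe can hold, and gains scale the pacing rate and
# inflight cap around it. Gains here are illustrative assumptions.
def bdp_bytes(btl_bw_bps, rt_prop_s):
    """Bandwidth-delay product: bytes 'in flight' that exactly fill the pipe."""
    return btl_bw_bps / 8 * rt_prop_s

def bbr_targets(btl_bw_bps, rt_prop_s, pacing_gain=1.0, cwnd_gain=2.0):
    bdp = bdp_bytes(btl_bw_bps, rt_prop_s)
    pacing_rate = btl_bw_bps * pacing_gain   # bits/sec to pace sends at
    cwnd = cwnd_gain * bdp                   # cap on bytes in flight
    return pacing_rate, cwnd

# Example: 100 Mbit/s bottleneck, 40 ms round-trip propagation delay.
rate, cwnd = bbr_targets(100e6, 0.040)
print(int(rate), int(cwnd))   # 100000000 1000000  (pace at line rate, allow 2 BDPs inflight)
```

A loss-based algorithm only discovers this operating point indirectly, by growing cwnd until a buffer overflows; BBR computes it from measurements, which is why it can keep queues short.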

question:What algorithm is the default on Windows, MacOS, Linux, OpenBSD, NetBSD and FreeBSD?

answer:Each operating system uses its own default TCP congestion control algorithm, and the defaults can change with software updates. Generally, the defaults as of my knowledge cutoff in 2023 are:

**Windows**:
- Older versions used Microsoft's **Compound TCP (CTCP)** as the default, which aims for high performance on networks with a high bandwidth-delay product while remaining fair to other traffic.
- Windows 10 (since the Fall Creators Update) and Windows 11 use **CUBIC** by default, with alternative algorithms selectable depending on the version and network configuration.

**macOS**:
- macOS traditionally used **NewReno** as the default; more recent releases default to **CUBIC**.
- Alternatives can be selected through system parameters (sysctl) or networking APIs, depending on the macOS version.

**Linux**:
- Most modern Linux distributions use **CUBIC** as the default TCP congestion control algorithm.
- The kernel ships several other algorithms as well, including BBR, Reno, and Vegas, which can be selected via the `net.ipv4.tcp_congestion_control` sysctl.

**OpenBSD**:
- OpenBSD uses a **NewReno**-based congestion control and, unlike Linux or FreeBSD, does not offer pluggable congestion control algorithms.
- OpenBSD prioritizes simple, auditable, and secure code, and this influences its networking choices.

**NetBSD**:
- NetBSD is quite modular and has historically used **NewReno** as the default.
- As with the other BSDs, the algorithm can be changed depending on the system's configuration and capabilities.

**FreeBSD**:
- **NewReno** was the traditional default, suitable for a wide range of network conditions.
- FreeBSD has supported modular congestion control (including **CUBIC**) since 9.0, recent releases also ship a **BBR** implementation as part of the RACK/BBR TCP stack, and FreeBSD 14 switched the default to **CUBIC**. The algorithm can be changed via sysctl settings.

Note that defaults vary not only by operating system but also by version, build, or the specific choices made by a system administrator, and networking stacks continue to evolve. Always refer to the latest documentation or the system's current configuration to determine the default settings for any given environment.
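On Linux, the effective algorithm can also be inspected (and, if the module is loaded, changed) per socket rather than system-wide, via the `TCP_CONGESTION` socket option. A minimal sketch, assuming a Linux build of Python 3.6+ (where `socket.TCP_CONGESTION` is defined):

```python
import socket

# Linux-only: read the congestion control algorithm assigned to a new TCP
# socket. On a stock kernel this prints the system default (usually "cubic").
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
raw = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
algo = raw.split(b"\x00", 1)[0].decode()   # kernel may pad the name with NULs
print(algo)
s.close()
```

Setting a different algorithm for one socket works the same way with `setsockopt`, provided the named module is available in the kernel; this is how applications opt into BBR without changing the system-wide sysctl.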

question:Is there a Linux option that allows sending duplicate packets before an acknowledgment is received? For example, in a network with high packet loss.

answer:Linux does not provide a standard kernel option to intentionally send duplicate copies of the same TCP segment before an acknowledgment is received; that behavior is not part of the TCP protocol. TCP handles retransmission based on missing acknowledgments or the reception of duplicate ACKs, which imply that a packet was lost. For cases where high packet loss is expected, there are other mitigations. For non-TCP protocols like UDP, applications frequently implement their own reliability mechanisms, which may include sending duplicate packets. For TCP in high-loss environments, consider the following approaches:

1. **Tune retransmission behavior**: Linux exposes TCP retransmission parameters via `sysctl`. For example, `tcp_retries2` defines how many times TCP retransmits an unacknowledged segment before the connection is considered broken; lowering it makes failures surface faster.
2. **Select an appropriate congestion control algorithm**: choose one that tolerates packet loss better, such as BBR, which does not rely solely on loss detection to adjust its sending rate.
3. **Forward Error Correction (FEC)**: some protocols (outside standard TCP) send redundant information alongside the original data so that lost packets can be reconstructed without a retransmission.
4. **Multipath TCP (MPTCP)**: uses multiple paths for a single TCP connection, which can provide redundancy and mask packet loss on any single path.
5. **Application-level redundancy**: implement redundancy in the application layer, such as sending important data more than once or building a custom protocol over UDP to similar effect.
6. **Reliable UDP-based protocols**: consider protocols like QUIC, which is designed to be more resilient to packet loss and to recover from it more quickly than traditional TCP.
7. **Packet duplication appliances**: where bandwidth is not a concern and the goal is simply to mitigate high loss, a network appliance or software that duplicates packets on the wire could be deployed, though this is unusual and inefficient.

Note that proactively duplicating packets wastes significant bandwidth and can worsen congestion, so it is generally a last resort for specific use cases. More commonly, high packet loss is addressed by improving the network infrastructure, using more robust error handling at the application layer, or choosing a protocol better suited to the conditions at hand.
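The application-level redundancy idea above can be sketched over plain UDP: tag each datagram with a sequence number, transmit it more than once, and deduplicate on the receiver. The framing and function names here are assumptions for illustration, not an established protocol:

```python
import socket

# Illustrative application-level redundancy over UDP: each payload is sent
# twice with a 4-byte sequence-number header; the receiver drops duplicates.
def send_redundant(sock, addr, seq, payload, copies=2):
    msg = seq.to_bytes(4, "big") + payload
    for _ in range(copies):
        sock.sendto(msg, addr)       # trades bandwidth for loss tolerance

def recv_dedup(sock, seen):
    data, _ = sock.recvfrom(2048)
    seq = int.from_bytes(data[:4], "big")
    if seq in seen:
        return None                  # duplicate copy: drop silently
    seen.add(seq)
    return data[4:]

# Loopback demo: both copies arrive; only the first is delivered upward.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_redundant(tx, rx.getsockname(), seq=1, payload=b"hello")
seen = set()
print(recv_dedup(rx, seen))   # b'hello'
print(recv_dedup(rx, seen))   # None
tx.close(); rx.close()
```

In a real deployment the receiver would also need to expire old sequence numbers and handle wraparound; the point of the sketch is only that duplication must be paired with deduplication, or the application sees every message twice.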

