Empirical Performance Analysis of 5G Slicing

This resource note details the methodology and presents empirical measurements from a performance evaluation of 5G network slicing, carried out with open hardware and open software components. The device under test was a Quectel 5200 modem operating on an IoT slice (Slice/Service Type SST3), measured under two distinct network conditions: first, an unloaded network with minimal traffic; second, a network in which a different slice, SST1, was heavily loaded.

The primary objective of the test was to gain a deeper understanding of how network slicing performs when another slice is experiencing overload, i.e. to assess the resilience and isolation properties of the designated IoT slice (SST3) under varying load conditions.


1. Hardware and Software Configuration

The core hardware component under investigation was a Quectel 5200 modem, selected for its commercial availability and its support for 5G slicing features. The underlying network infrastructure employed an open-source 5G core and RAN (Radio Access Network) solution, in line with the objective of validating slicing performance in an open, non-proprietary ecosystem.
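For reference, the slice identifiers used throughout this note follow the Slice/Service Type (SST) values standardized in 3GPP TS 23.501: SST1 for eMBB and SST3 for massive IoT. The snippet below is a minimal sketch of the two S-NSSAI descriptors involved in the test; the field and variable names are our own illustrative shorthand, not the API of any particular 5G core.

```python
from dataclasses import dataclass

# Standardized SST values per 3GPP TS 23.501.
SST_EMBB = 1   # enhanced Mobile Broadband: the loaded "competitor" slice
SST_URLLC = 2  # Ultra-Reliable Low-Latency Communications (not used here)
SST_MIOT = 3   # massive IoT: the slice the DUT attaches to

@dataclass(frozen=True)
class SNssai:
    """Single Network Slice Selection Assistance Information (illustrative)."""
    sst: int               # Slice/Service Type
    sd: int | None = None  # optional Slice Differentiator

# The two slices involved in this evaluation:
iot_slice = SNssai(sst=SST_MIOT)   # SST3, the slice under test
embb_slice = SNssai(sst=SST_EMBB)  # SST1, receives the heavy background load
```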


2. Testing Scenarios and Objectives

The primary objective of this testing campaign was to gain a deeper, quantitative understanding of the performance and isolation capabilities of 5G network slicing, particularly how a low-priority slice is affected when a high-priority, different slice experiences severe congestion. To achieve this, measurements were taken for the SST3 slice under two distinct, controlled network conditions:

2.1 Condition 1: Unloaded Baseline (Minimal Traffic):

    • In this scenario, the network as a whole, including both the SST3 slice and the other provisioned slice (SST1), was operating with minimal background traffic.
    • This condition established the performance baseline for the Quectel 5200 modem on the SST3 slice, reflecting its optimal throughput, latency, and reliability metrics without external interference or resource contention.

2.2 Condition 2: Heavily Loaded Competitor Slice (SST1 Overload):

    • This scenario introduced a controlled, heavy traffic load specifically onto a different slice, SST1, which was provisioned with higher priority than the target SST3 slice. SST1 is typically used for enhanced Mobile Broadband (eMBB) or other high-throughput applications.
    • The traffic injection was designed to simulate a real-world overload or congestion event on the SST1 slice.
    • The critical focus of this test was to measure the resulting performance degradation (or lack thereof) on the low-priority SST3 slice. The collected data quantifies the degree of slice isolation achieved by the 5G core and RAN, indicating whether the severe resource contention on SST1 bleeds over and negatively impacts the dedicated resources of the SST3 slice (a sketch of the per-direction measurement procedure follows this list).
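For concreteness, the sketch below shows how a single per-direction throughput measurement of the kind used in both conditions could be scripted. It assumes an iperf3 server reachable through the slice's data network; the server address, test duration, and the use of iperf3 itself are assumptions for illustration, not details stated in this note.

```python
import json
import subprocess

IPERF_SERVER = "10.45.0.1"  # hypothetical endpoint inside the slice's data network
DURATION_S = 30             # assumed test duration per direction

def measure_throughput_mbps(downlink: bool) -> float:
    """Run one iperf3 test and return the received throughput in Mbit/s.

    downlink=False measures uplink (DUT -> server);
    downlink=True  measures downlink (server -> DUT, via iperf3 --reverse).
    """
    cmd = ["iperf3", "-c", IPERF_SERVER, "-t", str(DURATION_S), "--json"]
    if downlink:
        cmd.append("--reverse")
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    report = json.loads(out)
    return report["end"]["sum_received"]["bits_per_second"] / 1e6

print(f"UL: {measure_throughput_mbps(downlink=False):.2f} Mbit/s")
print(f"DL: {measure_throughput_mbps(downlink=True):.2f} Mbit/s")
```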


3. Expected Outcome

The testing aimed to gather empirical data on the uplink and downlink throughput of the device under test (DUT). The data was collected on 5G NR band n78 across various transmission bandwidths and is presented in the table below. A key design objective was to ensure that the performance of the DUT, when utilizing the SST3 slice, remains unaffected by traffic on other slices.

| Bandwidth | Direction | M1 (without load) | M1 (High Network Traffic) |
|-----------|-----------|-------------------|---------------------------|
| 10 MHz | Uplink | 219 Kbits/sec | 1.89 Mbits/sec |
| 10 MHz | Downlink | 6.96 Mbits/sec | 14.8 Mbits/sec |
| 15 MHz | Uplink | 417 Kbits/sec | 888 Kbits/sec |
| 15 MHz | Downlink | 11.9 Mbits/sec | 9.88 Mbits/sec |
| 20 MHz | Uplink | 3.83 Mbits/sec | 897 Kbits/sec |
| 20 MHz | Downlink | 53.2 Mbits/sec | 23.4 Mbits/sec |
| 30 MHz | Uplink | 932 Kbits/sec | 414 Kbits/sec |
| 30 MHz | Downlink | 56.2 Mbits/sec | 49.3 Mbits/sec |
| 40 MHz | Uplink | 1.91 Mbits/sec | 122 Kbits/sec |
| 40 MHz | Downlink | 27.1 Mbits/sec | 48.6 Mbits/sec |
| 50 MHz | Uplink | 2.03 Mbits/sec | 1.09 Mbits/sec |
| 50 MHz | Downlink | 16.3 Mbits/sec | 44.7 Mbits/sec |
| 60 MHz | Uplink | 2.63 Mbits/sec | 754 Kbits/sec |
| 60 MHz | Downlink | 51.4 Mbits/sec | 11.5 Mbits/sec |
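Section 4 below refers to these rows as "Test Sets" 1 through 7, numbered in bandwidth order (Set 1 = 10 MHz up to Set 7 = 60 MHz), and quotes the load-induced change (loaded minus unloaded) per direction. Those changes can be reproduced from the table with a short sketch:

```python
# Measured SST3 throughput in Mbit/s, rows numbered 1-7 in bandwidth order,
# transcribed from the table above as (unloaded, loaded) pairs.
RESULTS = {
    1: {"bw": 10, "ul": (0.219, 1.89),  "dl": (6.96, 14.8)},
    2: {"bw": 15, "ul": (0.417, 0.888), "dl": (11.9, 9.88)},
    3: {"bw": 20, "ul": (3.83, 0.897),  "dl": (53.2, 23.4)},
    4: {"bw": 30, "ul": (0.932, 0.414), "dl": (56.2, 49.3)},
    5: {"bw": 40, "ul": (1.91, 0.122),  "dl": (27.1, 48.6)},
    6: {"bw": 50, "ul": (2.03, 1.09),   "dl": (16.3, 44.7)},
    7: {"bw": 60, "ul": (2.63, 0.754),  "dl": (51.4, 11.5)},
}

for test_set, r in RESULTS.items():
    ul_change = r["ul"][1] - r["ul"][0]  # loaded minus unloaded, Mbit/s
    dl_change = r["dl"][1] - r["dl"][0]
    print(f"Set {test_set} ({r['bw']} MHz): "
          f"dUL {ul_change:+.3f} Mbit/s, dDL {dl_change:+.1f} Mbit/s")
```

Running it confirms the figures quoted below, e.g. $-29.8$ Mbps downlink and $-2.933$ Mbps uplink for Set 3.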

4. Correlation Analysis

The data shows two distinct patterns in how the Downlink and Uplink performance metrics respond when the network is put under heavy load: an inverse relationship between the two directions in several test sets, and a trend that contradicts the expected across-the-board degradation.

4.1 Primary Correlation (The Inverse Relationship)

There is a noticeable inverse relationship between the Downlink speed and the Uplink speed under stress, suggesting a trade-off is occurring, likely due to resource contention or scheduling bias.

  • Downlink gain = Uplink loss: In the majority of test sets (Sets 3 through 7), the Uplink speed decreases significantly under load, while the Downlink either gains or collapses alongside it (Set 1 moves the other way, with both directions improving). The trade-off is visible in the per-set changes (loaded minus unloaded, reproduced by the sketch above):
    • Test Sets 5 & 6: when Downlink sees huge gains ($+21.5$ Mbps and $+28.4$ Mbps), Uplink sees a notable drop ($-1.788$ Mbps and $-0.94$ Mbps); this is the inverse relationship proper.
    • Test Sets 3 & 7: when Downlink drops massively ($-29.8$ Mbps and $-39.9$ Mbps), Uplink also drops significantly ($-2.933$ Mbps and $-1.876$ Mbps), i.e. both directions degrade together rather than trading off.

4.2 Contradictory Trend (High Traffic Scenario)

The expected outcome of "High Network Traffic" would normally be a drop in performance across the board (both DL and UL). However, the results show a contradictory trend:

  • Uplink is consistently lower under load: in Sets 3 through 7, the Uplink speed drops when the network is stressed. This indicates resource scarcity or a scheduler prioritizing Downlink traffic.
  • Downlink often improves under load: in three test cases (Sets 1, 5, and 6), the Downlink speed increases substantially (by $+7.84$ Mbps, $+21.5$ Mbps, and $+28.4$ Mbps). This is the key anomalous observation.

5. Key Observation: Downlink Traffic Skew

The most significant correlation is a bias toward Downlink throughput stabilization or improvement, achieved at the cost of Uplink performance, especially in highly asymmetric traffic scenarios.

This often points to:

  • TDD Configuration Bias: If the network is running in Time Division Duplex (TDD) mode, the scheduler may be dynamically assigning more time slots to the downlink direction when it detects high demand, thereby throttling the uplink (see the sketch after this list).
  • Buffer/Queue Optimization: The network (or the traffic generator) might be queuing downlink packets more efficiently under stress, appearing to boost performance, while uplink queues quickly overflow or receive fewer resource grants from the base station.
  • Load Generator Behavior: It is possible the "High Network Traffic" generator is primarily focused on creating downlink congestion, which starves the uplink signaling that carries data requests and acknowledgements, leading to lower reported UL data rates.
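To make the first point concrete, the sketch below computes the nominal capacity split implied by a few NR TDD slot patterns. The patterns are common illustrative examples, not the confirmed configuration of the test network.

```python
# Nominal DL/UL capacity split implied by an NR TDD slot pattern.
# 'D' = downlink slot, 'U' = uplink slot, 'S' = special (flexible) slot,
# counted here as half DL / half UL for simplicity.

def dl_ul_split(pattern: str) -> tuple[float, float]:
    """Return (dl_fraction, ul_fraction) of slot time for a TDD pattern."""
    dl = pattern.count("D") + 0.5 * pattern.count("S")
    ul = pattern.count("U") + 0.5 * pattern.count("S")
    total = len(pattern)
    return dl / total, ul / total

# Illustrative patterns (not confirmed for the test network):
for pattern in ("DDDSU", "DDDSUUDDDD", "DDSUU"):
    dl, ul = dl_ul_split(pattern)
    print(f"{pattern}: {dl:.0%} DL / {ul:.0%} UL of slot time")
```

Under a DDDSU-style pattern roughly 70% of slot time already belongs to the downlink; if the scheduler additionally biases the flexible slots toward downlink under high DL demand, the uplink share shrinks further, which is consistent with the uplink drops observed under load.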


6. Conclusions and Plans 

Our initial measurements indicate strong coupling between different slices, particularly in the uplink, rather than the clean isolation that slicing is intended to provide. While the precise cause of this behavior remains unclear, we currently suspect it is linked to the performance characteristics of the open-source 5G system implementation, which may not be fully optimized.

To verify our hypothesis—that the performance issues stem from the open-source system—we plan to conduct a comparative experiment using commercial (proprietary) networking equipment.

This project has received partial funding from the Horizon Europe programme of the European Union under the HORIZON-JU-SNS-2022 FIDAL program, grant agreement No. 101096146.
