Performance of routing and spectrum allocation (RSA) algorithms for a last generation centralized optical network (SDON)
Abstract
Introduction: This article is the product of the research project "Comparative analysis of the performance of spectrum allocation algorithms (RSA) from a distributed optical network to a resource-constrained centralized optical network", developed at the Universidad del Cauca in 2020.
Problem: The growing traffic demand and the diversity of current services call for effective resource-allocation strategies, making it necessary to characterize the behavior of new algorithms under all possible conditions to reduce the uncertainty of their results.
Objective: To compare, at the simulation level, the performance of RSA algorithms in a resource-constrained centralized OCS optical network.
Methodology: A bibliographic compilation of the aspects to be considered when applying RSA algorithms in a centralized optical network is carried out, in order to propose a simulation scenario in which their performance can be tested.
Results: Once the simulation scenario is implemented, the performance of the RSA algorithms is evaluated in different cases of resource availability according to blocking probability, end-to-end delay and number of lost connections.
Conclusion: Through this research it was possible to characterize and apply different RSA algorithms in the most recent generation of Software-Defined Optical Networks (SDON), which determined the most appropriate way to implement the algorithms for this environment.
Originality: This innovative research has been performed to analyze the impact of resource limitations on the performance of RSA algorithms when they operate on a centralized network.
Limitations: The number of simulations needed and the time required for them.
1. INTRODUCTION
Today, advances in mobile technologies, data center networks, cloud computing and the rise of new services have generated numerous bandwidth needs and connection requests, which require a more scalable, agile and robust network infrastructure [1]; therefore, the telecommunications sector has proposed a new type of network called the Software Defined Elastic Optical Network (SD-EON) [2].
On the one hand, Software Defined Networks (SDN) allow centralized control of the network [3], [4], [31]; on the other hand, Elastic Optical Networks (EON) provide efficient resource management, because the spectrum of each link can be divided into smaller segments (defined by a central frequency and a segment width) and assigned according to the requirements of each connection [5]. However, the latter impose restrictions that must be met at the network level, specifically in the efficient establishment of connections and the allocation of spectrum to traffic demands [1]. Therefore, these networks seek to solve the routing and spectrum allocation (RSA) problem, which consists of (i) finding an appropriate route between a pair of nodes (source and destination) and (ii) assigning the number of slots needed to establish a connection [6], [27].
To solve the RSA problem, two constraints must be met: (i) contiguity, which requires that the slots assigned to a connection be consecutive, and (ii) continuity, which requires that the same slots be assigned on every link along the selected route [7], [8]. To satisfy these constraints, integer linear programming (ILP) formulations and heuristic algorithms have been proposed, the latter being considered the most promising approach to RSA, since heuristics seek an efficient solution in a short time [9], [28].
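To make the two constraints concrete, the following C++ sketch (illustrative only, not taken from the cited works) checks them over a per-link slot occupancy grid, where true marks a busy slot; contiguity is checked within each link and continuity across all links of the route.

#include <vector>

// Illustrative helper (not from the cited works): returns true if 'numSlots'
// consecutive slots starting at 'firstSlot' are free on every link of the route.
// Contiguity: the slots are consecutive within each link.
// Continuity: the same slot indices are used on all links of the route.
bool satisfiesRsaConstraints(const std::vector<std::vector<bool>>& routeLinks,
                             int firstSlot, int numSlots)
{
    for (const auto& link : routeLinks) {
        if (firstSlot + numSlots > static_cast<int>(link.size()))
            return false;                       // the block does not fit in the grid
        for (int s = firstSlot; s < firstSlot + numSlots; ++s)
            if (link[s])                        // true marks an occupied slot
                return false;
    }
    return true;
}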
Consequently, the purpose of this research is to analyze, at the simulation level, the performance of two heuristic RSA algorithms, the classic Shortest Path-First Fit and the institutional Shortest Path-Péndulo, in a resource-constrained centralized network environment.
1.1 Research background review
This section presents previous work taken as theoretical reference regarding optical networks and routing and spectrum assignment algorithms.
R. de Paz Villarroel [3] provides an exhaustive summary of the SDN paradigm and explains its inclusion in optical networks, presenting the theory needed to understand its operation and the benefits of its implementation in network management.
Shakya [1] presents a general review of the resource-allocation problems of networks based on Wavelength Division Multiplexing (WDM) and explains the disadvantages of this technology with respect to current requirements. The author then describes EON as an effective solution to these problems, providing a comprehensive description of its operation, limitations and challenges, with emphasis on the management of spectrum resources, and therefore focuses the study on RSA algorithms. Finally, the author evaluates several RSA algorithms and, based on the results obtained, proposes a new algorithm focused on spectrum defragmentation in order to mitigate the existing problems.
Lin, He, and Liu [10] propose an RSA algorithm called "Hop and Consecutiveness-based Routing and Fragmentation-aware Spectrum Assignment" (HCR-FSA) as an alternative solution to the assignment problem in SD-EON; they describe its operation and then evaluate its performance by implementing it in a simulation scenario based on the NSFNeT topology with a single control node.
Talebi, Alam, Katib, Khamis, Salama, and Rouskas [11] present some limitations of EON with respect to orthogonal frequency-division multiplexing (OFDM) technology, then describe and classify different RSA techniques for spectrum management, which allows them to conclude that, although there are many algorithms and much research on spectrum allocation, many questions remain open.
Chatterjee, Sarma, and Oki [8], focus their research on the RSA problem, starting with the conceptualization and characterization of EON; then they analyze the concept and characteristics of RSA in EON and discuss the differences between Routing and Wavelength Assignment (RWA) and RSA, their routing approaches and assignment policies; finally, they present some research and experimental demonstrations they have performed to test the functionality of an EON.
Robles [12] presents a detailed theoretical description of the Ant Colony Optimization (ACO) metaheuristic, then reviews the algorithms used in ACO and the application of this method to various optimization problems, highlighting the excellent results obtained with it, especially in network routing.
Arias and Roa [13] describe the RWA and RSA problems in optical networks, comparing the two problems and compiling information on both static (offline) and dynamic (online) spectrum allocation algorithms; they then propose solving the RSA problem from an optimization perspective, showing particular interest in ILP algorithms, since these try to find the best possible solution rather than a time-efficient one, as heuristic algorithms do.
Batham, Yadav, and Prakash [14] address two issues related to spectrum allocation in EON: the non-uniform allocation of spectrum along the path links and the fragmentation of spectrum in the network. To address these issues, the authors propose two RSA strategies, Least Loaded RSA (LLRSA) and Route Fragmentation Aware RSA (RFARSA), and develop a simulation program to evaluate and compare the performance of the proposed strategies against the conventional strategies chosen for comparison. Finally, based on the simulation results, they conclude that the two proposed strategies (LLRSA and RFARSA) outperform the conventional ones.
Sevilla and Zúñiga [15] characterize optical networks based on WDM and FlexGrid, as well as the RWA and RSA problems; they then implement, in simulation, a distributed optical network model to evaluate the behavior of an RWA (WDM) and an RSA (FlexGrid) algorithm, considering the blocking probability and end-to-end delay metrics. Finally, they present the analysis of the simulation results and conclude that the optical network implementing the RSA algorithm performs better than the network applying the RWA algorithm.
Pérez-Almeida and Puerto-Leguizamón propose the design of a long-distance optical network with a PON (Passive Optical Network) approach and multichannel capabilities, which is reconfigurable by means of optical switches and AWGs (Arrayed Waveguide Gratings).
Benavides and Quinayás [16] compare the performance of a distributed OPS network and a centralized OPS network in terms of blocking probability, end-to-end delay and packet loss. They begin by describing the theoretical background of distributed and centralized networks, then describe the simulation scenarios and how they operate. Finally, they present the results, from which they conclude that the centralized network performed better.
2. MATERIALS AND METHODS
This section describes the OMNeT++ working environment and the educational tool "Simulator WRON", which were used to propose the methodology with which this research was carried out.
2.1 OMNeT++
OMNeT++ is an extensible simulation framework based on the modular construction of simple or compound components, which are described in the NED language and programmed in object-oriented C++. OMNeT++ has been used in numerous research fields, from queueing network simulations to wireless and ad-hoc network simulations, from business process simulations to peer-to-peer network simulations, as well as in optical switches and storage area networks. To build such networks, the platform allows large modules to be assembled from small components that communicate through control messages [17].
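As a hedged illustration of this modular style, the minimal OMNeT++ simple module below receives a message and forwards it; the class name "RequestForwarder" and the gate name "out" are hypothetical and do not correspond to the simulator used in this work.

#include <omnetpp.h>
using namespace omnetpp;

// Illustrative simple module: receives a control message and forwards it.
class RequestForwarder : public cSimpleModule
{
  protected:
    virtual void initialize() override {
        EV << "RequestForwarder initialized\n";   // EV is OMNeT++'s logging stream
    }
    virtual void handleMessage(cMessage *msg) override {
        send(msg, "out");                         // forward every received message
    }
};

Define_Module(RequestForwarder);                  // registers the class with the simulator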
2.2 WRON Simulator
The WRON (Wavelength-Routed Optical Network) simulator is an open educational tool developed in the OMNeT++ IDE by the optical communications group of the University of Valladolid [18]. This tool contains a test environment based on the NSFNeT network topology, whose fourteen nodes are unevenly distributed across different cities of the United States [19], as shown in Figure 2, where all distances are in kilometers.
2.3 Methodology
The model of [20] and [21] is adapted to support the simulation of a discrete-event environment and includes the following stages: i) System definition: characterize the internal and/or external interactions of the components of the simulation environment through an analysis that identifies the relationships, constraints and variables involved; ii) Environment formulation: define and design the environment based on the relationships identified; iii) Data collection: identify the data the environment requires to obtain the desired results; iv) Environment implementation: adapt the simulation tool based on the previous analysis; v) Validation: determine shortcomings in the formulation of the environment or in the variables involved; vi) Interpretation: analyze the results in order to feed back into the environment and correct possible errors to obtain better results; vii) Documentation: describe the processing and interpretation of the data and the environment developed [32].
Figure 3 shows the flowchart followed to carry out the simulations in this research.
3. RESULTS
To evaluate RSA algorithms in a centralized environment, it is first necessary to define a simulation scenario that characterizes the behavior of the network.
3.1 Simulation Scenario
The simulation scenario is developed from the WRON Simulator, which is composed of three main modules: request source, edge node and control node. Originally, this simulator assigns wavelengths from a source node to a destination node [22], so its resource allocation had to be adapted to operate spectrally, following ITU-T Recommendation G.694.1 [23], from which the slot bandwidth (12.5 GHz per slot) and the center frequency of each service were taken.
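For reference, the flexible grid defined in ITU-T Recommendation G.694.1 specifies the nominal central frequencies and slot widths as follows (standard formulation, independent of this particular scenario):

f_c = 193.1\ \text{THz} + n \times 6.25\ \text{GHz}, \qquad W = m \times 12.5\ \text{GHz}, \qquad n \in \mathbb{Z},\ m \in \mathbb{Z}^{+}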
The scenario shown in Figure 4 has been configured to offer three types of service, one of which the request source node chooses for each connection following a uniform distribution: Service 1 (12.5 GHz), Service 2 (25 GHz) and Service 3 (37.5 GHz).
The processing of a connection request starts at the request source, where requests are generated at an interval obtained by relating the number of nodes and the volume of traffic present. Each request is composed of the source address, the destination address (chosen from a uniform distribution), the requested bandwidth and the connection duration, which is obtained randomly by means of an exponential distribution. The edge node receives the request and simply forwards it to the control node. Once the request arrives at the control node, it is processed by the RSA algorithm, which determines whether or not the request can be established according to the availability of resources in the status table.
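As an illustrative sketch only (the structure and parameter names, such as meanHoldingTime, are assumptions rather than the simulator's own identifiers), request generation along these lines could be written as:

#include <random>

// Hypothetical request structure mirroring the fields described above.
struct Request {
    int source;
    int destination;      // drawn from a uniform distribution, different from the source
    int slotsRequested;   // 1, 2 or 3 slots (12.5, 25 or 37.5 GHz)
    double holdingTime;   // exponentially distributed connection duration
};

// Sketch of request generation; 'meanHoldingTime' and 'numNodes' are assumed parameters.
Request generateRequest(int source, int numNodes, double meanHoldingTime, std::mt19937& rng)
{
    std::uniform_int_distribution<int> destDist(0, numNodes - 1);
    std::uniform_int_distribution<int> serviceDist(1, 3);              // service type 1, 2 or 3
    std::exponential_distribution<double> holdDist(1.0 / meanHoldingTime);

    Request r;
    r.source = source;
    do { r.destination = destDist(rng); } while (r.destination == source);
    r.slotsRequested = serviceDist(rng);                               // service i needs i slots
    r.holdingTime = holdDist(rng);
    return r;
}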
Two-stage RSA algorithms are implemented in the control node, so the routing problem is solved first, followed by the spectrum allocation problem. The routing of connection requests uses an alternative-routing Shortest Path algorithm that considers the number of hops over a set of k candidate routes ordered by bubble sort. The algorithm thus sorts the routes from fewest to most hops and analyzes the availability of the required slots on the first route using one of the two algorithms under study: First Fit or Péndulo. If the resources are not found, their availability is analyzed on the following routes until the request is established or rejected.
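For illustration, a bubble-sort pass over the k candidate routes, ordering them from fewest to most hops as described above, might look as follows (the Route representation is an assumption made here):

#include <utility>
#include <vector>

// A candidate route as its node sequence; number of hops = nodes - 1.
using Route = std::vector<int>;

// Bubble sort of the k candidate routes from fewest to most hops (illustrative sketch).
void sortRoutesByHops(std::vector<Route>& routes)
{
    for (std::size_t i = 0; i + 1 < routes.size(); ++i)
        for (std::size_t j = 0; j + 1 < routes.size() - i; ++j)
            if (routes[j].size() > routes[j + 1].size())
                std::swap(routes[j], routes[j + 1]);
}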
3.1.1 First Fit Spectrum allocation algorithm
The spectrum management in the first case is done by the First Fit algorithm, which starts the search for the required slots by going through the spectrum grid from the lowest to the highest indexed slots. Figure 5 shows an example of how the algorithm works when it is going to serve a connection request that requires two consecutive slots (25 GHz). It should be noted that this algorithm rejects the connection request if the number of slots required is not available [7], [8].
The First Fit algorithm is considered one of the best spectrum allocation algorithms because of its good results and low computational complexity, so it was chosen as one of the study algorithms and serves as the benchmark against which the performance of the institutional Péndulo algorithm is evaluated in a resource-limited scenario.
As mentioned above, the routing of connections considers a set of k shortest paths, over which the allocation algorithm searches for the required slots. Figure 6 therefore presents the flowchart of the implemented RSA algorithm, Shortest Path-First Fit (SP-FF).
Below is the pseudocode corresponding to the Shortest Path-First Fit algorithm, when it operates with an availability of 25% of the routes.
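Since the published pseudocode appears as a figure, the following C++-style sketch is offered only as a hedged approximation of the SP-FF logic described above; it reuses the constraint-check helper from the earlier sketch and treats the grid size and slot count as parameters.

#include <vector>

// Declaration of the constraint-check helper from the earlier sketch.
bool satisfiesRsaConstraints(const std::vector<std::vector<bool>>& routeLinks,
                             int firstSlot, int numSlots);

// First Fit: scan the grid from the lowest-indexed slot upward and return the
// first feasible starting slot on this route, or -1 if none exists.
int firstFit(const std::vector<std::vector<bool>>& routeLinks, int gridSize, int numSlots)
{
    for (int start = 0; start + numSlots <= gridSize; ++start)
        if (satisfiesRsaConstraints(routeLinks, start, numSlots))
            return start;
    return -1;
}

// SP-FF: try each candidate route (already ordered by hop count); the request is
// established on the first route where First Fit finds a feasible block.
bool shortestPathFirstFit(const std::vector<std::vector<std::vector<bool>>>& candidateRoutes,
                          int gridSize, int numSlots)
{
    for (const auto& routeLinks : candidateRoutes)
        if (firstFit(routeLinks, gridSize, numSlots) >= 0)
            return true;      // request established
    return false;             // request rejected on all available routes
}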
3.1.2 Péndulo spectrum assignment algorithm
The spectrum management in the second case is done by an algorithm recently developed at the Universidad del Cauca. This algorithm has been called Péndulo because, as its name indicates, it makes a pendular movement that sweeps the spectrum grid from its ends, alternately, towards its center, in search of the slots needed to meet the connection request [25], [29]. Figure 8 shows how the algorithm works when it must meet a request that requires two slots (25 GHz) in a grid of 100 GHz (8 slots).
First, the algorithm examines in which direction it should go through the grid. If it should go from left to right then it evaluates if the slots with the lowest index are available (Step 1). If they are not, the algorithm changes direction and examines the slots from right to left, so that it checks the availability of the last slots in the grid (Step 2), but if it does not find them available either, it starts a new review cycle by reducing its pendulum movement and evaluating the availability on slots closer to the center of the grid (Steps 3 and 4) [26]. It should be noted that this algorithm ends when it reaches its center or when it finds the slots needed to meet the request.
Figure 9 shows the flowchart of the implemented RSA algorithm, Shortest Path-Péndulo (SP-Péndulo).
Below is the pseudocode corresponding to the Shortest Path-Péndulo algorithm when it operates with an availability of 25% of the routes.
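Likewise, the sketch below only approximates the pendular search order described in the previous section; it is an illustration under the stated assumptions, not the institutional implementation, and again reuses the constraint-check helper from the earlier sketch.

#include <vector>

// Declaration of the constraint-check helper from the earlier sketch.
bool satisfiesRsaConstraints(const std::vector<std::vector<bool>>& routeLinks,
                             int firstSlot, int numSlots);

// Péndulo search order (illustrative): candidate starting positions alternate
// between the two ends of the grid and move toward its center, stopping when a
// feasible block is found or the center is reached.
int pendulo(const std::vector<std::vector<bool>>& routeLinks, int gridSize, int numSlots)
{
    int left = 0;
    int right = gridSize - numSlots;      // last valid starting position
    bool fromLeft = true;
    while (left <= right) {
        int start = fromLeft ? left : right;
        if (satisfiesRsaConstraints(routeLinks, start, numSlots))
            return start;                 // slots found, the pendulum stops
        if (fromLeft) ++left; else --right;
        fromLeft = !fromLeft;             // swing to the other end
    }
    return -1;                            // reached the center without success
}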
The simulation scenario has been limited in terms of resources, so it has a grid of 4 slots and a percentage of routes according to each case study presented in Figure 11.
The percentage of routes to be evaluated was determined after several detailed simulations, which showed that evaluating all routes (100%) is not relevant, since connections can only be established over a small fraction of them. For this analysis, two nodes were considered, the one with the highest number of links and the one with the lowest, in order to observe the behavior when all nodes generated requests towards them, or from one to the other. It was then shown that in both cases, when the incoming or outgoing links of the source or destination nodes were busy, the number of available routes was irrelevant, because no connection could be established. Therefore, a number of links common to each node was taken as a reference, which was found to fall within 25% of the routes; this was chosen as the best case, while the worst case corresponds to a single available route, where the network goes from alternative routing to fixed routing.
3.2 Execution of the simulations
To carry out the simulations, the parameters present in Table 1 were taken into account.
With these parameters configured, the performance of the RSA algorithms under study was obtained with respect to three metrics: blocking probability (BP), end-to-end delay (EED) and number of lost connections. The BP is the ratio of rejected connections to total connections; the EED is the time it takes to establish a request from source to destination along the selected route; and the number of lost connections is the total number of connections generated minus the sum of established and rejected connections.
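With notation introduced here only for clarity (it is not used in the original figures), these metrics can be written as:

BP = \frac{N_{\text{rejected}}}{N_{\text{total}}}, \qquad N_{\text{lost}} = N_{\text{total}} - \left( N_{\text{established}} + N_{\text{rejected}} \right), \qquad EED = t_{\text{established}} - t_{\text{generated}}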
Below, the results obtained by each algorithm are presented individually for the case studies defined in Figure 11.
3.2.1 Shortest Path-First Fit algorithm
Figure 12 makes it possible to verify the hypothesis raised about the percentage of routes chosen to limit resources, since the blocking probability does not vary beyond a certain number of routes while the delay does. This is observed in the cases of 10% and 25%, where the blocking probability is practically the same while the delays differ significantly.
Both for the blocking probability and for the delay, the relationship between the results and the number of routes is evident. For the blocking probability the relationship is inverse: more routes yield better results. For the delay it is direct: more routes yield a higher transmission time, because there are more establishment options, but these imply a greater number of hops per route. The instability observed in the delay is due to resource limitations, the way the routes are ordered (from fewest to most hops), traffic overload, and the dynamism in the generation time and duration of requests, since all these factors modify the route selection at each instant of time.
3.2.2 Shortest Path-Péndulo algorithm
The results obtained with the SP-Péndulo algorithm maintain the relationships described above for the blocking probability and the delay, with respect to the percentage of available routes. In addition, it is clear that the blocking probability in the cases of 10 and 25% is practically the same, which once again confirms the hypothesis about the choice of the maximum percentage of routes to be evaluated.
3.2.3 Comparison of SP-FF and SP-Péndulo algorithms in the centralized network
For this case, the results obtained by the allocation algorithms in the centralized NSFNeT network are compared for the sub-cases established in Figure 11.
· Loss of connections in the centralized network
In Figure 16, it can be seen that the number of lost connections in a centralized network is high, because the control node does not have a queuing algorithm to store requests in order of arrival and then serve them according to the availability of resources; therefore, most requests were lost because the control node was forced to handle simultaneous events, such as releasing resources or receiving a new request, while another one was being served.
It is evident that the SP-FF algorithm performs better than the SP-Péndulo algorithm when establishing connections in all the situations that were tested. This is due to the specific operating characteristics of the SP-Péndulo algorithm, since its complexity generates a greater processing delay that implies a high loss of connections, because it works on an environment limited in resources and with traffic overload.
· 25% of the routes
Figure 17 shows that the blocking probability of the SP-Péndulo algorithm presents better results when there are low and medium traffic loads (0.3-0.7), since when there is high traffic (0.8 and 0.9) the blocking probability rises to exceed that of the SP-FF algorithm. This occurs because in the centralized network at low traffic load, the SP-Péndulo algorithm has enough time to process and accept requests, while for high loads its performance is low because more simultaneous requests are generated, leading to the algorithm rejecting many of them.
As for the delay, Figure 18 shows that the results obtained for a low traffic load (0.1-0.3) are very similar for both algorithms. For medium and high traffic loads (0.4-0.9) the impact of the increase in traffic volume is evident, since the change of slope in both algorithms is accentuated. However, the SP-Péndulo algorithm obtains the best performance because it loses more requests, which decreases the network overload and leaves more resources available to allocate subsequent requests on routes with fewer hops. In addition, it is observed that the difference decreases as the load increases, indicating that the delay of the SP-FF algorithm improves for high loads.
· 10% of the routes
Figure 19 shows that the blocking probability starts being equal in both algorithms for low traffic loads, while for medium loads it is the SP-Péndulo algorithm that presents better results, since the increase in traffic volume causes an increase in the number of lost connections.
Figure 20 shows that the results obtained with the RSA algorithms for low traffic loads (0.1-0.2) are very similar, while for a medium traffic load (0.3-0.7), the SP-Péndulo algorithm clearly performs better. This is because, as the traffic volume increases, the institutional algorithm loses a greater number of requests and thus leaves more resources for the establishment of subsequent ones. For high loads (0.8 and 0.9), it is the First Fit algorithm that performs best, because it starts to establish the requests on the routes with fewer hops. The instability of the delay becomes evident in the abrupt changes of slope observed between different loads, which is due to the effect that the traffic volume has on the generation of connections and on the establishment of routes at each instant of time.
· 5% of the routes
As in the previous cases, Figure 21 shows that the RSA algorithms obtain similar blocking probability results for low traffic loads (0.1-0.3) and, as traffic increases (0.4-0.7), the SP-Péndulo algorithm performs better. However, for the most critical cases (0.8-0.9), the SP-FF algorithm is the most stable and obtains the best results. This is due to the loss of connections and the effect that the traffic volumes (low, medium and high) have on the allocation capacity of the algorithms when they operate on such a small spectrum grid.
In this case, where 5% of the available routes are evaluated, the most unstable delay results are obtained for both algorithms, since the traffic load significantly increases the number of lost connections and, at the same time, affects the choice of available routes at each instant of time.
· 1 route
In this case the spectrum allocation is evaluated in isolation, as only one route is available. Thus, Figure 23 shows that the SP-Péndulo algorithm obtains the best blocking probability results for all traffic loads, although the difference remains small. Unlike the previous cases, this is the case with the greatest resource limitation, and both algorithms show stable results, since their curves do not cross; this demonstrates that traffic overload generates instability when the allocation of resources depends on the number of available routes.
As for the delay, Figure 24 shows that when a single path is available between a source and a destination node, its value remains constant since the processing delay of the algorithms is negligible compared to the transmission delay. Furthermore, in this case, it is verified that the variation in the end-to-end delay results is independent of the traffic variation, since the instability that previously occurred was due to the fact that different routes could be selected and, therefore, such chaotic variations were obtained.
4. DISCUSSIONS AND CONCLUSIONS
In this article, the performance of two RSA algorithms in a centralized OCS network environment was analyzed with respect to blocking probability, end-to-end delay and number of lost connections under limited resources. The following are recommendations and conclusions regarding the most important aspects of the research work carried out.
4.1 Discussions
Among the works related to this research, Shakya [1] bases the importance of elastic optical networks on the resource-allocation limitations of WDM technology; these limitations became evident in the first tests with the simulator, since wavelength assignment was its original mode of allocation. In addition, some of the benefits of centralized network management described by R. de Paz Villarroel [3] were verified once the simulator was adjusted to allocate resources spectrally.
Benavides and Quinayás in [16], determine that a centralized OPS network performs better than a distributed OPS network, while Sevilla and Zúñiga in [15], determine that RSA algorithms perform better than RWA algorithms when implemented in a distributed OPS network operating under high resource availability. However, this research has compared the performance of RSA algorithms operating in a centralized OCS network environment.
Lin, He, and Liu in [10] propose a spectrum allocation algorithm, while Batham, Yadav, and Prakash in [14] propose two different strategies for spectral resource allocation, both motivated by the limitations each group found in conventional allocation approaches. Unlike the present research, their results were obtained in environments operating with high resource availability.
This research contributes to the issues raised by the authors in [11], since it seeks to reduce the open questions regarding spectral resource allocation; to that end, it determines the impact of resource limitation on the performance of spectrum allocation algorithms in a centralized OCS optical network.
4.2 Conclusions
According to the results obtained, high values of blocking probability and connection loss were observed, along with unstable end-to-end delay. The high blocking probability is due to network saturation caused by the limitation of resources and by the random generation of requests across the three types of service, since each link has a grid of only 4 slots and the services occupy 25, 50 or 75% of the grid on each hop for a given time. The connection losses are related to the overload of the control node, since it is in charge of establishing, rejecting or releasing requests at each instant of time. The instability of the delay values is related to the dynamism in the choice of routes for each request: with such a limited grid, the algorithms are forced to search for availability across the different routes, and since each route has a different delay, the overall delay varies constantly.
With respect to the algorithms, both obtained good results. However, although the Péndulo algorithm showed the best results in terms of blocking probability and end-to-end delay, it was the one that lost the most connections, which indirectly affected its other metrics. It is therefore concluded that the First Fit algorithm performed best under the conditions studied.
Finally, it is considered that a centralized network without a request queuing protocol is not appropriate for resource-constrained environments or for services that require high data fidelity. As far as the algorithms are concerned, Péndulo should be avoided for services requiring high information fidelity, while First Fit is considered suitable for operation in most cases.