<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE rfc [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">
]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
<!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.29 (Ruby 3.4.4) -->
<rfc xmlns:xi="http://www.w3.org/2001/XInclude" ipr="trust200902" docName="draft-li-nmrg-dtn-data-generation-optimization-04" category="info" consensus="true" submissionType="IETF" tocInclude="true" sortRefs="true" symRefs="true" version="3">
  <!-- xml2rfc v2v3 conversion 3.29.0 -->
  <front>
    <title abbrev="Data Generation and Optimization for Network Digital Twin">Data Generation and Optimization for Network Digital Twin</title>
    <seriesInfo name="Internet-Draft" value="draft-li-nmrg-dtn-data-generation-optimization-04"/>
    <author fullname="Mei Li">
      <organization>China Mobile</organization>
      <address>
        <postal>
          <city>Beijing</city>
          <country>China</country>
        </postal>
        <email>limeiyjy@chinamobile.com</email>
      </address>
    </author>
    <author fullname="Cheng Zhou">
      <organization>China Mobile</organization>
      <address>
        <postal>
          <city>Beijing</city>
          <country>China</country>
        </postal>
        <email>zhouchengyjy@chinamobile.com</email>
      </address>
    </author>
    <author fullname="Danyang Chen">
      <organization>China Mobile</organization>
      <address>
        <postal>
          <city>Beijing</city>
          <country>China</country>
        </postal>
        <email>chendanyang@chinamobile.com</email>
      </address>
    </author>
    <author fullname="Qin Wu">
      <organization>Huawei</organization>
      <address>
        <email>bill.wu@huawei.com</email>
      </address>
    </author>
    <author fullname="Yuanyuan Yang">
      <organization>Huawei</organization>
      <address>
        <email>yangyuanyuan55@huawei.com</email>
      </address>
    </author>
    <date year="2025" month="July" day="07"/>
    <area>IRTF</area>
    <workgroup>Network Management</workgroup>
    <keyword>network digital twin</keyword>
    <keyword>data generation</keyword>
    <keyword>data optimization</keyword>
    <abstract>
      <?line 80?>

<t>Network Digital Twin (NDT) can be used as a secure and cost-effective environment for network operators to evaluate networks in various what-if scenarios. Recently, Artificial Intelligence (AI) models, especially neural networks, have been applied to NDT modeling. The quality of deep learning models mainly depends on two aspects: model architecture and data. This memo focuses on how to improve model quality from the data perspective.</t>
    </abstract>
  </front>
  <middle>
    <?line 84?>

<section anchor="introduction">
      <name>Introduction</name>
      <t>Digital twin is a virtual instance of a physical system (twin) that is continually updated with the physical system's performance, maintenance, and health status data throughout the physical system's life cycle. Network Digital Twin (NDT) is a digital twin that is used in the context of networking <xref target="I-D.irtf-nmrg-network-digital-twin-arch"/>. NDT can be used as a secure and cost-effective environment for network operators to evaluate networks in various what-if scenarios. NDT is applicable to various types of networks, such as wireless networks, optical networks, data center networks, Internet of Things (IoT) networks, and vehicular networks.</t>
      <t>Artificial Intelligence (AI) models, particularly neural networks (NNs), have proven to be highly effective in modeling complex network environments for various applications, including performance evaluation, traffic prediction, resource allocation, and service self-healing. AI-driven network modeling facilitates the creation of real-time, lightweight, and highly accurate NDT.</t>
      <t>The quality of AI models mainly depends on two aspects: model architecture and data. The role of data has recently been highlighted by the emerging concept of data-centric AI <xref target="Data-Centric-AI"/>. The quality of training data directly affects the accuracy and generalization ability of a model. This memo therefore focuses on the data perspective: how to design data generation and optimization methods for NDT modeling that generate simulated network data to address the shortage of practical data and that select high-quality data from various data sources. Training with such high-quality data can improve the accuracy and generalization ability of the model.</t>
    </section>
    <section anchor="acronyms-and-abbreviations">
      <name>Acronyms and Abbreviations</name>
      <t>NDT: Network Digital Twin</t>
      <t>AI: Artificial Intelligence</t>
      <t>AIGC: AI-Generated Content</t>
      <t>ToS: Type of Service</t>
      <t>OOD: Out-of-Distribution</t>
      <t>FIFO: First In First Out</t>
      <t>SP: Strict Priority</t>
      <t>WFQ: Weighted Fair Queuing</t>
      <t>DRR: Deficit Round Robin</t>
      <t>BFS: Breadth-First Search</t>
      <t>CBR: Constant Bit Rate</t>
    </section>
    <section anchor="requirements">
      <name>Requirements</name>
      <t>Modeling performance is vital for NDT, which is involved in typical network management scenarios such as planning, construction, maintenance, optimization, and operation. Recently, some studies have applied AI models to NDT modeling, such as RouteNet <xref target="RouteNet"/>, MimicNet <xref target="MimicNet"/> and m3 <xref target="m3"/>. AI is a data-driven technology whose performance heavily depends on data quality.</t>
      <t>Data-centric AI <xref target="Data-Centric-AI"/> shifts the focus from model architecture to improving data through various techniques such as data augmentation, self-supervision, data cleaning, data selection, and data privacy. For example, data augmentation can create additional augmented samples. Self-supervised models can be developed without the need for manual labels or features. Data selection methods can help identify the most valuable samples.</t>
      <t>In many cases, network data sources are diverse and of varying quality, making it difficult to directly serve as training data for NDT AI models:</t>
      <ul spacing="normal">
        <li>
          <t>Practical data from production networks: Data from production networks usually have high value, but the quantity, type, and accuracy are limited. Moreover, it is not practical in production networks to collect data under various configurations;</t>
        </li>
        <li>
          <t>Network simulators: Network simulators (e.g., NS-3 and OMNeT++) can be used to generate simulated network data, which can solve the problems of quantity, diversity, and accuracy to a certain extent. However, simulation is usually time-consuming. In addition, there are usually differences between simulated data and practical data from production networks, which hinders the application of trained models to production networks;</t>
        </li>
        <li>
          <t>Generative AI models: With the development of AI-Generated Content (AIGC) technology, generative AI models (e.g., GPT and LLaMA) can be used to generate simulated network data, which can solve the problems of quantity and diversity to a certain extent. However, the accuracy of the data generated by generative AI models is limited and often has gaps with practical data from production networks.</t>
        </li>
      </ul>
      <t>Therefore, data generation and optimization methods for NDT modeling are needed, which can generate simulated network data to address the shortage of practical data and select high-quality data from multi-source data. High-quality data meets the requirements of high accuracy, diversity, and fidelity to the actual conditions of production networks. Training with high-quality data can improve the accuracy and generalization of NDT performance models.</t>
    </section>
    <section anchor="framework-of-data-generation-and-optimization">
      <name>Framework of Data Generation and Optimization</name>
      <t>The framework of data generation and optimization for NDT modeling is shown in <xref target="kelem"/>, which includes two stages: the data generation stage and the data optimization stage.</t>
      <figure anchor="kelem">
        <name>Framework of Data Generation and Optimization for NDT</name>
        <artwork align="center"><![CDATA[
          Data generation                   Data optimization
   +---------------------------+ +-------------------------------------+
   |                           | |                                     |
   | +---------+               | |              +---------+            |
   | |         |               | | +----------+ |         |            |
   | | Network |               | | | Practical| | Easy    |            |
   | | topology| +-----------+ | | | data     | | samples |            |
   | |         | |           | | | +-----+----+ |         |            |
   | |         | | Network   | | |       |      |         | +--------+ |
   | |         | | simulator | | | +-----v----+ |         | |        | |
   | | Routing | |           | | | |          | | Hard    | | High   | |
   | | policy  +->           +-+-+-> Candidate+-> samples +-> quality| |
   | |         | |           | | | | data     | |         | | data   | |
   | |         | | Generative| | | |          | |         | |        | |
   | |         | | AI model  | | | +----------+ |         | +--------+ |
   | | Traffic | |           | | |              | OOD     |            |
   | | matrix  | +-----------+ | |              | samples |            |
   | |         | Data generator| |              | (remove)|            |
   | +---------+               | |              |         |            |
   |  Network                  | |              +---------+            |
   |  configuration            | |             Data selection          |
   |                           | |                                     |
   +---------------------------+ +-------------------------------------+
]]></artwork>
      </figure>
      <section anchor="data-generation-stage">
        <name>Data Generation Stage</name>
        <t>The data generation stage aims to generate candidate data (simulated network data) to address the shortage of practical data from production networks. This stage first generates network configurations and then imports them into data generators to produce the candidate data.</t>
        <ul spacing="normal">
          <li>
            <t>Network configurations: Network configurations typically include network topology, routing policy, and traffic matrix. These configurations need to be diverse to cover as many scenarios as possible. Topology configurations include the number and structure of nodes and edges, node buffer sizes and scheduling strategies, link capacities, etc. The routing policy determines the path a packet takes from the source to the destination. The traffic matrix describes the traffic entering/leaving the network and the footprint it leaves on the network paths, including each flow's source, destination, time and packet size distributions, Type of Service (ToS), etc.</t>
          </li>
          <li>
            <t>Data generators: Data generators can be network simulators (e.g., NS-3 and OMNeT++) and/or the generative AI models (e.g., GPT and LLaMA). Network configurations are imported into data generators to generate candidate data.</t>
          </li>
        </ul>
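        <t>As an illustration, a single network configuration of the kind described above can be represented as a plain record before being imported into a data generator. The field names and values below are purely illustrative assumptions and do not define a schema:</t>
        <sourcecode type="python"><![CDATA[
# Illustrative (not a standardized schema): one network configuration
# record combining topology, routing policy, and traffic matrix.
config = {
    "topology": {
        "nodes": [
            {"id": 0, "buffer_pkts": 64, "scheduling": "WFQ"},
            {"id": 1, "buffer_pkts": 32, "scheduling": "FIFO"},
        ],
        "links": [
            {"src": 0, "dst": 1, "capacity_bps": 10_000_000},
        ],
    },
    # Per-link weights consumed by the routing policy (default: all 1).
    "routing": {(0, 1): 1},
    "traffic_matrix": [
        {"src": 0, "dst": 1, "avg_bw_bps": 2_000_000,
         "time_dist": "Poisson", "pkt_sizes": [256, 1500],
         "pkt_size_probs": [0.4, 0.6], "tos": 0},
    ],
}
]]></sourcecode>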
      </section>
      <section anchor="data-optimization-stage">
        <name>Data Optimization Stage</name>
        <t>The data optimization stage aims to optimize the candidate data from various sources to select high-quality data.</t>
        <ul spacing="normal">
          <li>
            <t>Candidate data: Candidate data includes simulated network data generated in the data generation stage and the practical data from production networks.</t>
          </li>
          <li>
            <t>Data selection: The data selection module investigates the candidate data to classify it into easy, hard, and Out-of-Distribution (OOD) samples. Hard samples are samples that are difficult for the model to predict accurately. During training, exposing the model to more hard samples enables it to perform better on such samples later on. The easy samples and hard samples are considered valid and added to the training data, while OOD samples are considered invalid and removed.</t>
          </li>
          <li>
            <t>High-quality data: High-quality data needs to meet the requirements of high accuracy, diversity, and fidelity to practical data, which can be verified by expert knowledge (such as the expected ranges of delay, queue utilization, link utilization, and average port occupancy).</t>
          </li>
        </ul>
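        <t>The expert-knowledge check mentioned above can be sketched as a simple range test. The indicator names and ranges below are illustrative assumptions, not values specified by this memo:</t>
        <sourcecode type="python"><![CDATA[
# Sketch of expert-knowledge verification: a sample is considered valid
# when each known indicator falls inside an expert-provided range.
EXPERT_RANGES = {
    "delay_ms": (0.0, 500.0),        # illustrative ranges only
    "queue_utilization": (0.0, 1.0),
    "link_utilization": (0.0, 1.0),
}

def verify_sample(sample):
    """Return True if every known indicator is within its expert range."""
    for key, (lo, hi) in EXPERT_RANGES.items():
        if key in sample and not (lo <= sample[key] <= hi):
            return False
    return True
]]></sourcecode>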
      </section>
    </section>
    <section anchor="data-generation">
      <name>Data Generation</name>
      <t>This section describes how to generate network configurations, including network topology, routing policy, and traffic matrix. These configurations are then imported into data generators to generate the candidate data.</t>
      <section anchor="network-topology">
        <name>Network Topology</name>
        <t>Network topologies are generated using the Power-Law Out-Degree algorithm, where parameters are set according to real-world topologies in the Internet Topology Zoo.</t>
        <t>When the flow rate exceeds the link bandwidth or the bandwidth set for the flow, the packet is temporarily stored in the node buffer. A larger node buffer size means a larger delay and possibly a lower packet loss rate. The node scheduling policy determines the time and order of packet transmission, which is randomly selected from the policies such as First In First Out (FIFO), Strict Priority (SP), Weighted Fair Queuing (WFQ), and Deficit Round Robin (DRR).</t>
        <t>A larger link capacity means a smaller delay and less congestion. To cover diverse link loads to get good coverage of possible scenarios, we set the link capacity to be proportional to the total average bandwidth of the flows passing through the link.</t>
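        <t>A simplified sketch of power-law out-degree topology generation follows. The exponent and degree bounds are illustrative assumptions; the actual parameters are calibrated from the Internet Topology Zoo as described above:</t>
        <sourcecode type="python"><![CDATA[
import random

def power_law_out_degrees(n, exponent=2.0, max_deg=None, rng=None):
    """Sample one out-degree per node from a discrete power law
    P(d) proportional to d**(-exponent), with d in [1, max_deg]."""
    rng = rng or random.Random(0)
    max_deg = max_deg or max(2, n - 1)
    degrees = list(range(1, max_deg + 1))
    weights = [d ** -exponent for d in degrees]
    return rng.choices(degrees, weights=weights, k=n)

def plod_topology(n, exponent=2.0, rng=None):
    """Power-Law Out-Degree style generation: each node draws its
    out-degree, then connects to that many distinct random targets."""
    rng = rng or random.Random(0)
    out_deg = power_law_out_degrees(n, exponent, rng=rng)
    edges = set()
    for i in range(n):
        targets = rng.sample([j for j in range(n) if j != i],
                             min(out_deg[i], n - 1))
        for j in targets:
            edges.add((i, j))
    return sorted(edges)
]]></sourcecode>
        <t>Link capacities can then be assigned proportionally to the total average bandwidth of the flows traversing each link, as described above.</t>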
      </section>
      <section anchor="routing-policy">
        <name>Routing Policy</name>
        <t>Routing policy plays a crucial role in routing protocols, which determines the path of a packet from the source to the destination.</t>
        <ul spacing="normal">
          <li>
            <t>Default: We set the weight of all links in the topology to be the same, equal to 1, and then use the Dijkstra algorithm to generate the shortest path configuration. The Dijkstra algorithm finds single-source shortest paths in a weighted digraph with non-negative edge weights; with all weights equal, it explores the graph in the same order as Breadth-First Search (BFS).</t>
          </li>
          <li>
            <t>Variants: We randomly select some links (the same link can be chosen more than once) and add a small weight to them. Then we use the Dijkstra algorithm to generate a series of variants of the default shortest path configuration based on the weighted graph. These variants can add some randomness to the routing configuration to cover longer paths and larger delays.</t>
          </li>
        </ul>
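        <t>The default and variant routing configurations can be sketched as follows. The example graph, the number of perturbed links, and the weight increment are illustrative assumptions:</t>
        <sourcecode type="python"><![CDATA[
import heapq
import random

def dijkstra(adj, src):
    """Single-source shortest paths on a weighted digraph given as
    {u: [(v, w), ...]}. Returns {node: (distance, predecessor)}."""
    dist = {src: (0, None)}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u][0]:
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if v not in dist or nd < dist[v][0]:
                dist[v] = (nd, u)
                heapq.heappush(heap, (nd, v))
    return dist

def perturbed(adj, n_picks=3, eps=0.1, rng=None):
    """Variant routing configuration: add a small weight eps to a few
    randomly chosen links (the same link may be chosen repeatedly)."""
    rng = rng or random.Random(0)
    out = {u: list(nbrs) for u, nbrs in adj.items()}
    links = [(u, i) for u, nbrs in out.items() for i in range(len(nbrs))]
    for _ in range(n_picks):
        u, i = rng.choice(links)
        v, w = out[u][i]
        out[u][i] = (v, w + eps)
    return out

# Default policy: every link weight equal to 1.
adj = {0: [(1, 1), (2, 1)], 1: [(3, 1)], 2: [(3, 1)], 3: []}
default_paths = dijkstra(adj, 0)
variant_paths = dijkstra(perturbed(adj), 0)
]]></sourcecode>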
      </section>
      <section anchor="traffic-matrix">
        <name>Traffic Matrix</name>
        <t>The traffic matrix is very important for network modeling. The traffic matrix can be seen as a network map, which describes the traffic entering/leaving the network, including the source, destination, distribution of the traffic, etc.</t>
        <t>We generate traffic matrix configurations with variable traffic intensity to cover low to high loads.</t>
        <t>Parameters such as packet sizes, packet size probabilities, and ToS are generated so that their distributions are similar to those observed in the validation dataset.</t>
        <t>The arrival of packets for each source-destination pair is modeled using one of the time distributions such as Poisson, Constant Bit Rate (CBR), and ON-OFF.</t>
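        <t>The three time distributions can be sketched as follows. The ON/OFF period lengths and the burst rate are illustrative assumptions:</t>
        <sourcecode type="python"><![CDATA[
import random

def arrival_times(dist, avg_rate_pps, duration_s, rng=None):
    """Generate packet arrival timestamps for one source-destination
    pair under a Poisson, CBR, or ON-OFF time distribution."""
    rng = rng or random.Random(0)
    times, t = [], 0.0
    mean_gap = 1.0 / avg_rate_pps
    if dist == "CBR":                      # constant inter-arrival gap
        while t < duration_s:
            times.append(t)
            t += mean_gap
    elif dist == "Poisson":                # exponential inter-arrivals
        while True:
            t += rng.expovariate(avg_rate_pps)
            if t >= duration_s:
                break
            times.append(t)
    elif dist == "ON-OFF":                 # bursts separated by silence
        on_s, off_s = 0.1, 0.1             # illustrative period lengths
        while t < duration_s:
            burst_end = min(t + on_s, duration_s)
            while t < burst_end:
                times.append(t)
                t += mean_gap / 2          # double rate while ON
            t = burst_end + off_s
    return times
]]></sourcecode>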
      </section>
    </section>
    <section anchor="data-optimization">
      <name>Data Optimization</name>
      <t>This section describes how to optimize the data from various sources to select high-quality data, which includes a seed sample selection phase and an incremental optimization phase.</t>
      <t>Candidate data includes simulated network data generated in the data generation stage and real data from production networks. Data optimization supports a variety of selection strategies, including high fidelity, high coverage, etc. High fidelity means that the selected data can fit the real data (e.g., having similar topologies, routing policies, traffic models, etc.), and high coverage means that the selected data can cover as many scenarios as possible.</t>
      <section anchor="seed-sample-selection-phase">
        <name>Seed Sample Selection Phase</name>
        <t>In the seed sample selection phase, high-quality seed samples are selected through the following steps to provide high-quality initial samples for the incremental optimization phase.</t>
        <t>STEP 1: Training feature extraction model and feature extraction.</t>
        <t>(1.1) The training data D' is selected from the candidate data D according to the selection strategy.  For the high fidelity strategy, the real data is used as the training data D'; for the high coverage strategy, the real data and simulated data are used together as the training data D'.</t>
        <t>(1.2) Feature extraction model E is trained using the training data D'. Feature extraction model E is a network performance evaluation model that can be used to evaluate performance indicators such as delay, jitter and packet loss (such as RouteNet).</t>
        <t>(1.3) Use the feature extraction model E obtained in STEP (1.2) to extract the features of the training data D' obtained in STEP (1.1). A network can be defined as a set of flows F, queues Q, and links L. The flow state SF (such as delay, throughput, and packet loss), queue state SQ (such as port occupancy), and link state SL (such as link utilization) are taken as features. Each sample in the training data D' is converted to a feature vector [SF,SQ,SL].</t>
        <t>STEP 2: Clustering.</t>
        <t>Cluster the training data D' after feature extraction. Clustering algorithms (such as K-means and DBSCAN) are unsupervised machine learning techniques that automatically discover the natural groups in the data, dividing it into multiple clusters such that samples in the same cluster are similar.</t>
        <t>Repeat STEP 3 and STEP 4 until all clusters have been traversed.</t>
        <t>STEP 3: Calculating cluster centers and nearest neighbors.</t>
        <t>(3.1) Calculate cluster centers. The method of calculating cluster centers is determined according to the clustering algorithm used in STEP 2. For example, using K-means clustering algorithm, the cluster center is calculated by finding the average of all data points in the cluster. These cluster centers are added to the seed dataset DS.</t>
        <t>(3.2) Calculate k nearest neighbors of each cluster center and add them to the seed dataset DS.  Suitable nearest neighbor calculation methods can be used, such as Euclidean distance, cosine distance, etc.</t>
        <t>STEP 4: Expert knowledge verification.</t>
        <t>(4.1) Expert knowledge can be used to verify the validity of samples through the ranges of indicators such as delay, queue occupancy, and link utilization. If the verification passes, continue with the next cluster at STEP 3. Otherwise, go to STEP (4.2).</t>
        <t>(4.2) Randomly select m samples from the seed dataset DS and remove them. Calculate the nearest neighbors of the removed m samples, add them to the seed data set DS, and go to STEP (4.1).</t>
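        <t>STEP 2 and STEP 3 can be sketched with a plain K-means implementation and Euclidean nearest neighbors; rows of X are the feature vectors [SF,SQ,SL] from STEP 1. The cluster and neighbor counts are illustrative assumptions:</t>
        <sourcecode type="python"><![CDATA[
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain K-means: returns cluster centers and per-sample labels."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):        # keep empty clusters in place
                centers[c] = X[labels == c].mean(axis=0)
    return centers, labels

def seed_samples(X, k_clusters=3, k_neighbors=2):
    """Seed selection: each cluster contributes the samples nearest to
    its center (Euclidean distance), per STEP 3."""
    centers, _ = kmeans(X, k_clusters)
    seeds = set()
    for c in centers:
        order = np.linalg.norm(X - c, axis=1).argsort()
        seeds.update(order[:k_neighbors].tolist())
    return sorted(seeds)
]]></sourcecode>
        <t>The selected seeds would then pass through the expert-knowledge verification of STEP 4 before entering the seed dataset DS.</t>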
      </section>
      <section anchor="incremental-optimization-phase">
        <name>Incremental Optimization Phase</name>
        <t>The seed samples are taken as the initial training dataset. The filter model investigates the remaining candidate samples and classifies them into easy, hard, and OOD samples. The easy samples and hard samples are then added to the training dataset. These processes are repeated to iteratively optimize the filter model and the training data until the high-quality data meets the constraints.</t>
        <ul spacing="normal">
          <li>
            <t>Easy samples: Easy samples are data points where the model's predictions align closely with the true labels, often with high confidence. While training on easy samples can lead to good performance on familiar data, relying solely on them may limit the model's ability to handle complex or ambiguous cases, potentially causing overfitting and poor generalization to unseen data.</t>
          </li>
          <li>
            <t>Hard samples: Hard samples are data points where the model struggles, producing inaccurate, ambiguous, or low-confidence predictions. These samples are crucial for improving model robustness and generalization, as they expose weaknesses and encourage learning more discriminative features. Techniques like Online Hard Example Mining (OHEM), contrastive learning (focusing on hard negatives), and curriculum learning (gradually introducing harder samples) leverage hard samples to enhance model performance, prevent overfitting, and identify potential data issues such as labeling errors or biases.</t>
          </li>
          <li>
            <t>OOD samples: OOD samples refer to data points that significantly deviate from the training distribution, which should be detected and removed. Common detection methods include uncertainty estimation (e.g., Bayesian neural networks), density-based approaches (e.g., VAEs), distance-based metrics (e.g., Mahalanobis distance), outlier exposure, and energy-based models.</t>
          </li>
        </ul>
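        <t>The sample partition described above can be sketched as follows. The thresholds and the distance-based OOD score are illustrative stand-ins for the detection methods listed (uncertainty estimation, density-based approaches, Mahalanobis distance, etc.):</t>
        <sourcecode type="python"><![CDATA[
import numpy as np

def ood_score(x, train_X):
    """Illustrative distance-based OOD score: Mahalanobis-style distance
    to the training set, using a diagonal covariance approximation."""
    mu = train_X.mean(axis=0)
    var = train_X.var(axis=0) + 1e-8
    return float(np.sqrt((((x - mu) ** 2) / var).sum()))

def split_candidates(errors, ood_scores, err_thresh, ood_thresh):
    """Partition candidates by prediction error and OOD score.
    Returns (easy_idx, hard_idx, ood_idx); OOD samples are removed,
    easy and hard samples join the training dataset."""
    errors = np.asarray(errors, dtype=float)
    ood_scores = np.asarray(ood_scores, dtype=float)
    ood = ood_scores > ood_thresh            # deviate from training distribution
    hard = (~ood) & (errors > err_thresh)    # kept: expose model weaknesses
    easy = (~ood) & (~hard)                  # kept: well-predicted samples
    return (np.flatnonzero(easy), np.flatnonzero(hard), np.flatnonzero(ood))
]]></sourcecode>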
      </section>
    </section>
    <section anchor="use-cases">
      <name>Use Cases</name>
      <t>NDT can be applied to various types of networks, including data center networks, IP bearer networks, vehicular networks, wireless networks, optical networks, and IoT networks. This section highlights the significance of data generation and optimization in NDT by presenting several typical use cases.</t>
      <section anchor="configuration-evaluation-and-optimization-in-data-center-networks">
        <name>Configuration Evaluation and Optimization in Data Center Networks</name>
        <t>Data centers are essential for the growth of Internet services, consisting of numerous computing and storage nodes linked by a data center network (DCN), which serves as the communication backbone. The DCN faces challenges related to its large scale, diverse applications, high power density, and the need for reliability. NDT can evaluate configurations and technologies to reduce the risk of failures. For NDT to be effective, it must accurately model DCN traffic. A key challenge lies in generating realistic network traffic. By analyzing traffic patterns, data generation and optimization techniques can assist in creating simulated network data and in optimizing both real and simulated data. Numerous factors, such as the type of business, network size, volume of traffic, and load, influence traffic patterns in extensive DCNs. Moreover, these traffic patterns are dynamic and evolve over time. For instance, workloads that are sensitive to latency, like online transaction processing, tend to peak during the day, whereas workloads for online analytical processing are more prevalent at night.</t>
      </section>
      <section anchor="performance-prediction-in-ip-bearer-networks">
        <name>Performance Prediction in IP Bearer Networks</name>
        <t>Internet service providers encounter challenges in delivering high-bandwidth, low-latency, and reliable services, especially in large networks like metropolitan area networks (MANs). The widely adopted IP protocol adheres to a best-effort principle, making predictable performance difficult and complicating the stability and availability of network services during failures. NDT can function as a high-fidelity simulation platform for predicting IP bearer network performance.  Accurate network status information is vital for optimizing protocols and identifying faults. Recent advancements in in-band network telemetry (INT) technology have allowed the integration of network performance data into packet headers on the data plane. Utilizing real performance data from INT, data generation and optimization techniques can create fine-grained simulated data, enhancing both real and simulated datasets for better model training outcomes.</t>
      </section>
      <section anchor="task-offloading-in-vehicular-networks">
        <name>Task Offloading in Vehicular Networks</name>
        <t>The rise of vehicular networks has facilitated various delay-sensitive applications, including autonomous driving and navigation. However, vehicles with limited resources struggle to meet the low/ultra-low latency requirements. To address this, computationally intensive tasks can be offloaded to resource-rich platforms like nearby vehicles, edge servers, and cloud servers. The dynamic nature of these networks, along with strict low-delay demands and large task data, presents significant offloading challenges. NDT is an emerging method that allows real-time monitoring of vehicular networks, aiding in effective offload decisions. Additionally, machine learning algorithms are increasingly utilized for task offloading to enhance accuracy and efficiency. Unlike traditional communication networks, vehicular networks are more dynamic and heterogeneous, leading to data shortages and quality issues. Data generation and optimization techniques can simulate data for adaptability and filter high-quality data from various sources, thereby improving model training effectiveness.</t>
      </section>
    </section>
    <section anchor="discussion">
      <name>Discussion</name>
      <t>Several topics related to data generation and optimization for NDT performance modeling require further discussion.</t>
      <ul spacing="normal">
        <li>
          <t>Data generation methods: 1) Generate configurations that cover enough scenarios and scale from small to large networks. 2) Choose data generators that consider accuracy, speed, fidelity, etc. 3) Use data augmentation technology to expand the training data by using a small amount of practical data to generate similar data through prior knowledge.</t>
        </li>
        <li>
          <t>Data optimization methods: 1) Select data from multi-source candidate data, including hard sample mining, OOD detection, etc. 2) Verify whether the data quality meets the requirements.</t>
        </li>
        <li>
          <t>Deployment: 1) Time/space complexity and explainability of the data generation and optimization methods. 2) Provide feedback for data collection to form a closed loop.</t>
        </li>
      </ul>
    </section>
    <section anchor="security-considerations">
      <name>Security Considerations</name>
      <t>TBD</t>
    </section>
    <section anchor="iana-considerations">
      <name>IANA Considerations</name>
      <t>This document has no IANA actions.</t>
    </section>
  </middle>
  <back>
    <references anchor="sec-informative-references">
      <name>Informative References</name>
      <reference anchor="I-D.irtf-nmrg-network-digital-twin-arch" target="https://datatracker.ietf.org/doc/draft-irtf-nmrg-network-digital-twin-arch/">
        <front>
          <title>Network Digital Twin: Concepts and Reference Architecture</title>
          <author>
            <organization/>
          </author>
          <date year="2025"/>
        </front>
      </reference>
      <reference anchor="Data-Centric-AI">
        <front>
          <title>Data-centric Artificial Intelligence: A Survey</title>
          <author>
            <organization>ACM Computing Surveys</organization>
          </author>
          <date year="2025"/>
        </front>
      </reference>
      <reference anchor="RouteNet">
        <front>
          <title>RouteNet-Fermi: Network Modeling With Graph Neural Networks</title>
          <author>
            <organization>IEEE/ACM Transactions on Networking</organization>
          </author>
          <date year="2023"/>
        </front>
      </reference>
      <reference anchor="MimicNet">
        <front>
          <title>MimicNet: Fast Performance Estimates for Data Center Networks with Machine Learning</title>
          <author>
            <organization>ACM SIGCOMM 2021 Conference</organization>
          </author>
          <date year="2021"/>
        </front>
      </reference>
      <reference anchor="m3">
        <front>
          <title>m3: Accurate Flow-Level Performance Estimation using Machine Learning</title>
          <author>
            <organization>ACM SIGCOMM 2024 Conference</organization>
          </author>
          <date year="2024"/>
        </front>
      </reference>
    </references>
    <?line 283?>

<section numbered="false" anchor="acknowledgments">
      <name>Acknowledgments</name>
      <t>TODO acknowledge.</t>
    </section>
  </back>
  <!-- ##markdown-source:
H4sIAAAAAAAAA8VcWXMbyZF+x6+o0DwMGQIgSxq/wGGHKR4zjBV1kR6Fvd6H
QncBqFGjG9MHOZiR9rfvl5l1NdAgKe84LIdtoo+qrKw8vjyqJ5PJqLVtYWbq
yZlutfrelKbWra1Kpctcvd20dm1/lQuLqlZvTHtX1Z/UmV3aVhfq5s6WT0Z6
Pq/N7f9vjEy3ZlnV25my5aIajfIqK/UahOW1XrSTwk7Kdb2c5G05yTHLZBlm
mVTJDJM/fDdquvnaNg1+tdsNRrg8v7lQ6huli6YCkbbMzcbgf8r2yVg9Mblt
q9rqgn5cnrzC/4HIJ5cfbi6ejMpuPTf1bIQpzWyUVWVjyqZrZqqtOzPCkl+O
dG30zD9PC1vWVbfBFb/OK13qpVnTdKNPZotr+WykJqp093PHhxZ8oOu0PBWX
Fy6lyxyNdNeuKlA2GSn8W3RFIey6Mla9tnyxqpe6dC/M1OnKllpdVXNbGL6d
2RbcfmXsT7ZcypWqK1vaAn6WL5m1tsVMFXZt7Pan7V8zurPmQaZZtd6b/nRl
yqX6x6rqfm8SfsWYGY3+GDLOdLnVIITI+b0JISJyGf9BOt7bUn0cYsUPnb4z
Nh0WYxTTu+6vK74zONrfO0yL/6q/a0flQ4MSkVv31h//mI49Kqt6jfduIdcj
Urn4S6nLydnU1u1CdM4J6sQJ6oQEdaLrbDXjqbwBGdJrsK8qM7NpG7YFH8zC
1AYX1Alet63J2q42T2QYXS9NO1Ortt00s2fPSObbWmefTD21pl1MsdZnsArP
xCA8grxnPC6rrnrxhxd/pJWRhZqcQhdrm01OLvsr4JuZ3ASFrV3YDIZBXZat
KQq7JMpn6kRdd/Wt2QrZXg+VcvuBB06vsOz1pmshTO7hZoCWD1XXGnCtT4S/
Orkw9drOgrm8qnJT0IAfbbtS39d6s8K9rgZ97pHmIEWX5+fnz4ism1qXjc5I
XBoFe+ze9EIf6HtJ9F3B3GR79IWr6kI3rXpnapYd2tTzBhYKQzRs59kZEKtN
sPmNuiPirzTpjVGvja5LzH0vJ68vvz99e3VFVD0naXIS1Kf3OdG7ftmnFL/V
SZaBRa1RF0V1N3ltbk0xRDL5pq4h7v6rtH13kLbvRqPJZKL0vCFxbkejIUVR
R2/Obo5VBuWeG5BicqWhM6oxoN+w8mRV007MYgGlgZ4qU97auirJrzC7vTup
NuQ4qrpRbaXMrS46Wr6/i5ludW2rDlux0lCjhWog8XSpmUI/SfqL7fiQ9Kuj
k8tjtSZRbMbKNBtDjxRbjM+S6KbBvZUGjXNjAAI2m8JiPez7z27kbXB2qm5W
Rv3c6QLGV1ULlRuzUYXju5tEwY6VGF9cNsssZgBvMHMLP8xPKZ1YE+YVGY+p
wgQWI5h1hckzMJXfX1V3xBq73tQVaGxBhIziSVnU1Zovs9sFO3ky8HwqO7m2
eQ6/MfqGWFNXeZeJSz5LHLmytHvYoRaDgutNy+KGVWLE1baxGS4326Y1a3VE
LxxjRt3Sa0AZMBwds7XbkBjlojZE0s673zZEn5fmMXOrxXbyD2LEyugCr2L6
FlvOC2pXAChLeNP2wIiFXRiVbTP4tEGw5mSVV5iCl7ACFl/+bXg15peWVl4G
Y6N+++2RLubLlykLzX9YMYgEWi+JcqbnhaEx/POEMptkgZD+BlCF6LyztSlM
0yS3CMZlPVXhXcnEUMarpHY1ftLAkONy2aijywp8j4/Q4m/NymZdoeOrkNJH
ae9G4yl+dV9/scVvmmOnxawnJa0YO7CyyxWej+wG47xGYyfWm8L8EpiabIX4
BM8yx0j2Q2MMkRVdTgMkwuy3CI+MAbf1AgsCKYDrmVyrTVN1NR6EplSZe5BY
0pj61uJ6Y4rFhBSAjc3J5SSvLS3EUxfIXujMQvPZc7HMAtKzSwDr8SfEEfB3
DL1YrlrgJ/yvUy7hhfY+BlIC5u9YtZPL38OU0aB1VbAFYXlZQbpqZ67FzjI1
RBxUZL7lhSDqqJeyMYzC/NsR5FxCFXcwEancoOGkEWE14cNoHGyJZUMtwUkZ
DemeYe8/egegC7sB4ol0zYLELwsfsy0vWwKgwgeOem7DaOk8h+17bhq7LHej
KR47jaTwOlx73uy5pzHsgIUWk+Vx70Ok7BoKQxwO0Rub1Eo1VeGcCdQFBmJN
pG7I4bO282MNMESLWNBJaUELp12beF7xU+x+vKLIeyznMEN/Y4gy8ApoD0wm
glPn9tVsJc92kkFvt2uB7Scc31tRV+CXs5vZoGOA4bmcHYIOdPP70xnpocsQ
gIun5B1KYKKb6nqmbmBIiZRrUeDR6O3bs5l627WTajE5s8BPdt6Jr724vHgL
BGprQNDL0v2BJ0ej63czdU2iDGwKHtZY3mj08eL9TH00TjsutK3V+850hHtH
Zx8+IGA0RHJLmJziFER0mOTVBYh6BQuQt6uJTHFtSEFHo9NXHzi2Ib/eqlf0
JhZEnPtgfu4g3GzzxBgEO5NaNwjuLTMO9hMM9dKGy7a8JVkSD7rdpM4CJsTn
EqJ3Cs5mU+iyZMmlREVbd85QiuNzf0bJHztdcDcBlyL8a6o1hL2FUYZOsRPw
KC6aMwh9X188HT6CgWnxf375Mg7hBC77P798YRrWL3Ft/ZIMD4YXXEE2ydlr
WMNVWRXVcgsmVY3p8RH2/db2rSqrhNMPCPPZwwYPmmkXrRh/tiSihAMWOQDH
YM4cnIpogKi1P3cmbgw/p7sl7ZtjPfumptuQpDd8RVAA4K9soSg+24iwV4JH
wRRo81RdQOnNL5o87nh/CjYD7Miwd3lu6RoEyT2CjWz4TRiV65QU3HDb6xBX
TiEThEQwqAeNpXGAHptAALfQc3oHFxaYEYzCuGe9FQQ7S+OuTLFRljJwdrF1
lgfKxQ6fwJWnbTSCbmOKLd6CdR/3ra4zi9ggkAlJqRuxrLAg2Iwt7ZCTAgLG
jDuhp7klJNEVLTsJ74YIMxjaq76v8k4hSP0MIQDsSs+ss6xsQiAQQNRMWHDo
NsCsgHzWLzLqzABs5txxGdSDQ0Q+QUyRgWjNseoCeoS9nKqrqjaw9/WYVggF
Kqs2cT4wJEPzY/1ZVbAX4nXA8pkI0WBDFnbZiXFo/oRle5PvnCCQ9Gzgmjoy
0+V0rN5cT15KAvjqjbl5+rQf3GLuB5xq6n/3nCvD7cge2X3+s8cjzELIGk4X
LEAUAoGbqh+qO8OscvMST2zcDcJ6EzKg3ZpxIyTQ68+YSCBYVpvwOImTRP0N
Fgd4CIMVFyRqCZI2jxMZv+gVJalrMUgJVg5gKqopljgwDm2Xz8ODc1F8JXvE
wa0oNvsShqn7bpnihe9PjxMDPA5QKh3Vb/n37254sa9f66uTf99+iy30W/7A
JvcQkIM5KSYUsDy4Ktt4BXNmpSWUDRux1BuXyXrktkpUUBuYE2+r/xVIyoJH
ptfk/1l0imlaO3HhlwQoP+w9uTbGudQ6gUQ0Mds6vyl72ruwLSdOZec4f4Kb
XVCAPtlTymmKxeYd2af4a/AwhieGpwBDhIFB8UWt10ZyCQv1UK1LsN8ifeXB
jd/bcIggtuWuJBN+QdbYqOcBKnLUTCErosiGdo7qUjviTaM2YVfD3d6sfB8L
/F/8cxlO+ne2M8z+v7O9shQuPp0c/vf03rvJczTQ54EZ/b/P995NnpOB4qxP
HxrowKNuoM/Ji/sDJYt7eujRMJD3nEMDfY4Qg36c62Z7eKC22rBt7s3PBNB/
eLv9uA5XHRhomCefk6U9fdzS0lf9Mv1AvffSR58mlA8NFBBGj6LbfYo+J3/6
gSgOIX0aWtrn/oUfdJ2Hv8lQ9QYCqy1MB2b/SzLQ0wn95y/qFEpmKWNLPzyz
6W9nkD4/jtk7u5bedDcODBRd/uDS7udRetc7wf72D0j20K7duGTd0NJ6/z4r
BPjur/SyH2itEaH90ldgL9k7Az1WslOjVtUDAx3BT8FTHA8N9BVm5H4VSbTi
oYHut0d9lH7fQDvR2N5AB/99lan9fYw/O6LfZuqbT4agCtfz/vzkq7yvd6VP
gJmktAAFXJZ/fiJ59idf4M+/2RvlmjyhOO4DTtSumx6WzbzGywtHwxDs+CAG
o58BeO1jsoOA0tW2hKoFp6Y8TaHYsBPCeQTAaAgzMjZbA0ZQHNxTi/4SOSPe
WybVwYII9yeZHbjuM1mImBxuCVR6/zVWtbPSYmQFDfrMv9gCXrdpzO7onI+Q
AoXPBHB0iz8pqOcUQkyYUa6saho7pwrXjZt+d0hPJqc7uA1IgDHn1QiIUcGn
IvxFl02+5PwELiB8p5Cw+RYu61eHprOVyTtGdFQEbg2tFj/BJb3RGUNf02ZT
FRyV8zO5aakLoHSFiY0GwpUaIjVGtKqVtEaoWDpIjrVLiNdgMJfdI6HuM5Pu
Z7Wdu8H9TVYQjPqsoNSaA+Jus2RP0huLqmo3eLz1BT+isfGi7fd4F7HG6b5t
HNHjlNwxh+ESOctKmZd5kgMe76aL1dFNdX0sjIR89i29T8YkMu7C0/IrMhj4
8YxS7SD/8XHw9JBKUDQnqsjJ3gfUcE8Fvf3qmb1dC7YP9IMNc7eGFLxfg/B5
NrJhB8JCtginvTFmO7/j7h8IU2NA7iTp/jjm0aH3ZMf3zVTgTpKdxIsFFTNv
SQiXsRTYXwNYsLAFFWp9JtQAm1OdtM5FNwaqFeoIMOc4JlwZYbrMLRXxYCyY
t+4CV9Elp+kzlQsndILJKOHhSo4wp64kOlVnXe2VMiQxwRDsHLVq/AKL52+H
YdYVpln1yOHqHAwlpWEt50hdMEzZLV54KXltTy7tJF0V01wGpoQHuFJKc4QL
NdvvxuYGtFPS0+a9p3WeiznvrUWifUKMBwbC5vFQNISguJx2fy8zMRtIVpAD
YQmnrMW/PWmRZm9ggjCSXVjJRGGjTN2qT2V1V5BTAaRwVQQmSpdL6TXADmrM
/XNnOqMgZkWo6rBf6V1hnmISUh8yN6rCMja6zLbHnNfYgUFkPghbONWQeq1z
Fb66GszSMNZIa/r/mp/3srTv65me+deYzkEEA/Pp7bKHALE1y9FqnZBF09QF
JXpX3Zl68lrfscqfmWVtqBVhSRXH1Zp2mFLFG02otaVcLg3UQLggQ1XNnAGN
3F6AKYs8ndMZwNACEjDKP6oKpH/0erYosBu8RPNLJhK8MiIAc6z3zuaEF8R2
xAtEhLcoNMLY+W32s5YqWMRZGH+qi4CZ0SIn8GaqTqD69ZJ6VuJVcdNro8m9
+fssquLLBXVt6R6xz09a4DqvQ2AKD5hgpmEwFBACuEkWaBFAETU5ug7wpK6K
q3m15loPWX0qYHnYxBPYpGK3X1hWR1RyBrzYqS2ro+t3uDpYW1ZHHy/eH4t8
DxSY1dHZhw+kgYGTPUgY2NisgZt7fOSWIqjFkpwVgzsPdj365ZGKSudOExAe
VFUuD/lQw0HgCIzBLBHRIEWBFsHWcCekc1JK9Pa5oiq2Ny+J2C2CgAFu68Yp
jhRL/QSihx70vuN9Ho12QPAGyyY+ZEDe1FXAfTBgXzAjddVWWVWE6slDqPkR
cJnwDLZMw/tS40DgirT/8GiwQrSEoK3evjle8fia+oZcWxycMLkcuv3c2bc7
ro7wo2f2p08UG0QTsmfCOE4EjbKgnlWcDr3PDTFD7Qvq6NXF9bGAGYemaHeK
wJD+TFifdgunkpZdUu8xedYfITW6pN6lj2ZXvaSFQBh05HnhhYq9Xka1/FIw
CFgEN1lm5tgDAC/3nuOyR+uvZhyBvNqKz7x19IY6kGzwfYyFQFP5yrU3BSYI
C1wsGoalhRHpvHThR0ma6uTLy2t/ghCnFqTPtYufWMsT89mIpvjU2hV7ScH5
O0EdNZaYeuvco95pguw33+686jam4a5d0rjYdrKJuvX1MWPEAlHpdsK9NLLz
2+MGdxHd6KNJ1GGH8F2E0Eo/hnRqume5M9ZXDD3LGcwwtGNj6dr3Eq+dhJ/c
MxmDUcriSP+UNa4ZEwHoDl7oOXtaFENU2XkCIgwIYE23jWU54V4ABEiWOjpT
pnjSdE39H0V0d1ImNJogOXN2kjAWz8AdUZMc7XvAL1VpApPJi/YmCk7wXQUf
Spuz1+ekjk5ffXCO7e2byduLi4gjd2tgDyHJXhh6b/CZRF57Aeh+UWzFguwD
iyTU26y0axTBwwLwiZ9poMyPYEn/vgiWcN9DKb698hp2ZiOZO838MdK5F5fm
kkssj1HpWLwXlhSfwhX+6YGAyzv9kD7igAe7LWGjw0uhnLqwPkbyq3CZj5Vo
vxfgCGl3QD9fCUrsjxOAkuPYWRvByoP0PCbRx/bzmiTiWiTiOrDtHW039xo9
IDXjvuAlT3p47yhLcc6iKmBnJPdnNr5j4xbM7o+GKLcleOMH9BD9QSm9vjl/
p57PYiHc9WBRM0StQ36DWtkoUt27iSGOnk+fH3uXkPQ/nX3LJeg9wLyTFjnb
t3N7QrmdKm5Zo5s9iUxSon2Z8icJdPA1Pcr+FDjUl5ZDw3Eidqczpw7tKYDI
K5GhobmERS+O1cUh1p5z5OR6c2KUuDfQAyNEnzvcCu9zN6QLO/014UhDr80U
+5RJRBy6ESVz8JPldE6SYuUo7Gi3i/NY1v7yWP3NIa6D4nWuqnkrHIABZLEU
rhF18nBvgOjn+yI3NMrzY4o4Q7bBNycu+Dl3IoRxOcfEFz4z8t5lrAl3vhbQ
w3/TgRiYgIu43t20ybEfwj36Pj6a5FDck6xYNLF7+HV82LHbWYRN145Tfjur
x5LY6k8CvGIP5bkOmbYQZgwoKPAPhL8VOdCBv7fQQKjIP//7+mJ8/X58/fqf
/+PNxYuZOi26RlAbeTr5MTyDXtCtAcORjBEX/F8TF7hS1Pvq+vTkjZwVKlVX
pm2m7qBdOPAVGmejdOuureh0XuZa7Rox9AwuiRioNp+2blKHy7k5myeAghNE
3LNEfMyEZgfZXGjCFtcNwpGKe6oHyBjpgVkfzAa8ENGUCgH/+R0WCPnhyNBP
kpyDA9c4Ns/9HrykFHmRcQ8ihQVuRqlQCv9KMIcCk5ICjzm0mJTxJdlq/6bZ
fU+EXFrISB+ye6awTYyVB7BqFne3F1VGxXyx04Yshs9LwND743Rkf+qJRNiv
h7OgFJeGdGpMWRBnpQm6shRw+TNmMpqPx/YYWZt+Spn9tkffZ9fC0xcpTz/t
c57mZ5S9Q7uPV7mYemACpa4723I0sjtu3J+dHmln2WNb/XmXIXQAYxmuyxG/
jJL6JrkgoZKI40yd7yaTJdOc+RzH0XckSntP7TgWfmkboxd3YCTWKyLW4fw0
3Tzsd8SoRuuZGOjE+oJll+IdUpI5j0Q8WVZEmGjRVL0l131nCZ8lN7C4F8ey
SOzth530xDrirJAN6u9aUkdwqYcoHxLeDkiIQA6uPcQpxocFRMlcwoQ+8c+P
BbNeJvivV+tzsPWmj1mbvi8RBCnIsmfYMbGYChdWiQvfK4DV9CUBf67IYb6w
8/dUwyQ2jMWa6ddUhw4XgDzVjfG1LfdKzTZZ3rKtK81iu3vhZW+p3vr33Z2Y
cI8qD3W0ygEbOmzbcKrwPFnSrPdLKnmJzZK6QCjD0QnecKgRT1OXCkxM1RDx
4dgvfW7EHbIYu0bk0O0q2Y+cWtCn6uPKFsmSICU9bpNmw+FKPzYlhFOsSF0z
Go7OUuqBHWkNIjhwqQrmZSlCvNZb6YzuLcOfJeMsRpmTp3XHQWHl9Hpulx2f
K5CTHJuKGrXl8HimXVoCuu4LaVIuwJs7HboYHTCC/KnUcSZSTg28/2FXlu7h
PTdzLJesoRKBc89t6aur40g2fxSGPiEQmZ3um5fJXmXSpaspSolnhmTiuprD
iXBycL8Leez0ditVW0o76k+lE3XqNSmzqmOXmByW53oxZVbWnPq5NQmKvIlH
kgr7yai3ZUFeg1l1Lq5bXYnAHL394fzqmE+QQYgaHijMcsRno5xcsdKWZsmT
NQ4Dg281lay7dfLWstZ551p/2sBmep8KRsKxYzzv/HzPGlDkUK5iI3b/uDt2
4JbPL0TBETrC2aIgZj6kbNKDWaxRRIypazbgtZpbkk+Sq8R4zXpl51CxT0WL
MSudOGVnxadycz4zaaKDiZam18cimatmVXVFLlFNK+F2WsimL4qsKWnIN1Ow
4BuVutIdgoAOmvhZC5eZeaW3prG63D3pfUyJWE6LTiTVrTcQVeAcE9pZfjw5
58ccyHDPYXpsdXjoSq90octqTpjSPYmX4BgKa2qR5K52h5hI3Jd+wqTBniLM
U+I/HzL1OMQfPbz/yH3Md+XD5+nfYSzoZXpt//D8+HFH9mkRl9VNkqzrJTrD
UWyXhwxikZlHnQOQM6EEhCHhDQkwGWFWkCKcCaUCSCbCSjDhtFdVOI8Zg73W
SIzOqcWdb7PISckeaiabI9rjUy0It+6kmhaK0+64fSPHTm3DxNLWdGtTy0ky
/y0caaCrWM2ld45wn0B+PbRv6ujs9M1xUBE6pNd4VINh113pkeEcMfW8Kl0F
GW/RqX7yeCsqnnLXBPY1IoRGyiuqASvNOJ4f7H2YgN3rhivVTkli0BiOQGJY
63xf/FZFSMYM9WD641RWDBzcSJeJW6ptw+2tC20Lsd0X7kiIFBXDZxf4oN8a
PiRtBRILSWt3qVVKmXyCHwlMAL8lzvXSh02hDBntWhY7Nfzbr7ZSnPiVYzH/
DQZNiaOyecRBpuQsLIfzDYkHTS9fWJBM8VAmPRkJ0XMFieM83n4KDxz3Yob9
pngjBkxscl2P4Jz8FndCxX6/X8HF2wq+yrhzdVJt4nCk0jmZlEXRsavfXbvy
R80acpDgeJOewZS2lb13GI1sS0CsTKwgn/NWktKwayOb7b8VA6EHma6E71vC
GpZCmhPyQFwoM+4k/UTojZ16G7/w5BEye0U8mks3lwaPY6tYTjEZ4yL6XEmY
kQTbjcgSIAYwDsjUMOogHwwVghcGjSUZPbFH6VeW3gWkRIyDJX4lljianl1r
4lPkMEQMd9goJLpsyRMWpLThiwih9WDMQC1wR7xoIcXAaKySjxdhMDEG4Wgs
c5Q8HBUwELmXtF4d7x9dnbwBZhFjc0eJbGhKDomFWGJ5viMB14izjaTl5ka+
U0PZQ2qZzSznTNzRZIcmmcoUlccuQPnWzdoZKF9NbT3uli4vGI7kow5B2N2y
/c5H++IN1gLwQZSYcqnM0JigjwdlN/h/7gYkAfEAGAPuedd0DQjmw3e4AkXy
RaLw5Tk5hCufRGDhE+3v9Xf0oJ0sA3wJ360Ct29pOmnZo28wlSwU0a7RcQJs
6lYdXb65Sc+1um8cUKHG5C5mbs2yDv17Q1n5mFt0Cd0VIiuS2Cqp/tFHGcCA
v3Fqw5vb/WEYI4Korzeq7ow/JcInS1d/6NvIsYPQNPt9trTxxWTX6elKDSGW
7FpIn4cbNxqe6u1iQeZCoib1Y0BTUa9vxKmxhd1HW3yaNn59J4+fPaFc0SRa
u0PfC6IEcVmt+ZXa3nqMUepbSmFwHimcBObppcUVPPAne/1XhJoQDPa6QCEQ
zyBjtZ5Qdt/ZlF5vKDdf6TyvudNjZRkHEeDR0iclYY/zFGDyp5Deq4R7gkg8
HZOakI5XM2eJKNUEiORXMObjBgKHfB47K6ou95fELnlfU6bVlsakMJZ6ToQf
jbS1keWUPrPcQDzzpBOFiXcC5WBpk0Y8fj2cKwqWOn47q4yfI3LZaXFrBXeI
he8sQexK/iKroMghiK6tF7n4GSo3OcjO+IMamPgkfPai4C9A7NQbQkradeJT
mk1zH9TWZSIdwON1J4tLYtLeWWJDhtqSgEDfS944CE749EYfr94XhETnmsKF
FWXqK7IMnI+gNI6jpnd8W3YsFJU54J3uHeZ9yKp4yxA/g6Fzven5GpdMe+AD
Rk673KcT5tu9VEiwL2EvCadJP4ltso7bOEejax/9VBuKOhMs/+hT1XunusUc
syrD/9VcAM7DnNIDuDO4C7pn6vmx75jew/hSvmJYZ0pOjScdCXwaCHhJuCQt
bozlUvwxVVSNWFWU/NlrbJbRpes9aUgHnKHMeOzz4MYOV7bN9z4Pk/g+rs5u
htOh2DDJ+Ph2PL0mMDZwVG3nCw/SwKSTD+RsqGM2VhnCwYyhTx8wf6VB49Bn
B/otCL1+l5hAUmsrn9Oh/E1InTjegMc/SmkD+Jc3P3htL8/D3zBg0s2mqLb0
k2m9geV61gAHhKynVxOwtgBHd7609aDQOkYwle9cu8gCO0xxLku0RMvy/RaX
GGVgpiV7TDFMtWE1uqZPJdLcp05q/Ie8bl6d8VcsT96c7N+jfEZeZR1/IIT8
dFnJk+7zse5jmESPfDDM76x8+eq3mRyaM/mfnyx00Rg6cnnz9uwt3k9k4P8A
z2PkJYJcAAA=

-->

</rfc>
