
DANCES Project Description

Collaborative Research: CC-NIE Integration: Developing Applications with Networking Capabilities via End-to-End SDN (DANCES)

Project Goal

A team of XSEDE Service Providers is partnering with campus domain scientists and engineers in a project designed to improve the performance of science applications that rely on large bulk file transfers.  The project, Developing Applications with Networking Capabilities via End-to-End SDN (DANCES), will add end-to-end software-defined networking (SDN) OpenFlow network control to two types of strategic supercomputing infrastructure applications: resource management systems and wide area file systems.  DANCES SDN-enabled infrastructure will offer mechanisms for managing network bandwidth and scheduling cross-site file transfer as part of workflows, enhancing the stability and predictability of large data transfers.

The current collaborators are Pittsburgh Supercomputing Center (PSC), National Institute for Computational Sciences (NICS), Pennsylvania State University (Penn State), the National Center for Supercomputing Applications (NCSA), Texas Advanced Computing Center (TACC), Georgia Institute of Technology (Georgia Tech), the Extreme Science and Engineering Discovery Environment (XSEDE), and Internet2.

Background

Since 2011, XSEDE has been working to integrate NSF’s supercomputers, mass storage, data collections, and network resources into a seamless computing environment for scientific and engineering research.  An integral part of XSEDE’s mission is to enhance accessibility and facilitate the use of these resources by the scientific research community.

While XSEDE and its predecessor TeraGrid have been advancing the capabilities of NSF’s scientific supercomputing research cyberinfrastructure, the NSF-supported Global Environment for Network Innovations (GENI) has been leading advances in network configurability and control. Support of wide area SDN-capable networks was pioneered by GENI with the 2010 installation of OpenFlow switches on National LambdaRail (NLR) and Internet2 backbone paths. In this early deployment, use of an OpenFlow path on either NLR or Internet2 required detailed planning and coordination between sites. Each participating campus needed to be added individually, with manual VLAN provisioning hop-by-hop along the path from the research lab, through the campus, across a metropolitan area network, and finally onto one of the OpenFlow backbones. While the procedure was sufficient for early experimentation by closely coordinated collaborators, access to OpenFlow control was not readily available on demand.

Since 2010, interest in SDN technology has grown rapidly. Many network hardware vendors are now incorporating and fully supporting SDN in their products. Internet2’s deployment of SDN as the basis for their latest research and education (R&E) network architecture, Advanced Layer2 Services (AL2S), demonstrates a high degree of commitment to, and acceptance of, the technology.

During the first quarter of 2013, the XSEDE network (XSEDEnet) migrated from dedicated optical waves onto Internet2’s AL2S infrastructure. A significant benefit of the XSEDE move to AL2S, and an enabling factor of the DANCES project, is that research scientists at AL2S-connected campuses now have a more straightforward and cost-effective way to access XSEDE services than with the previous dedicated wave infrastructure. The transition to AL2S also has the advantages of eliminating the 10GbE bottleneck of the original XSEDE backbone and offering multiple wide area 100GbE path options between sites, providing redundancy and flexibility.  Internet2’s 100GbE core infrastructure and aggressive upgrade policy make backbone bandwidth contention unlikely within the wide area AL2S infrastructure.

Though backbone congestion is now unlikely due to current trends of overprovisioning R&E wide area networks, end site infrastructure commonly remains 10GbE based and is still subject to contention for bandwidth between flows.  Network performance monitoring has shown that during peak usage periods, 10GbE XSEDE site and local campus connections do experience congestion, with a negative effect on overall data movement. This was demonstrated by the movement of nearly 1PB of data across XSEDEnet in May and June 2012 when a computing resource was decommissioned, requiring mass migration of user data.  With no bandwidth management or Quality of Service (QoS) capability available, smaller competing flows were slowed or blocked and the rate of the bulk transfer was unpredictable. The DANCES network bandwidth scheduling and QoS capability will enable management of network resources to address poor network throughput caused by congestion.

Science Applications

Two strategic science applications, each having significant data movement components, have been identified as test cases to evaluate the performance of the SDN enhanced infrastructure.

Science Application – Galaxy

Life sciences are becoming extremely data intensive due to the proliferation of massively parallel DNA sequencing technologies. To cope with this explosion of data, researchers at Penn State have developed and are supporting the Galaxy genomic analysis system.

The main public Galaxy analysis website currently serves more than 30,000 biomedical users performing hundreds of thousands of analysis jobs every month. Many academic and commercial institutions around the world operate private Galaxy instances. As a direct consequence of this success, the project could no longer sustain the exponential growth of analysis load and associated biological data storage on its public servers. To meet these demands, the Penn State Galaxy researchers initiated a large-scale collaborative effort with PSC aimed at establishing a national biomedical computational gateway.  The collaboration has since been extended to include resources at TACC.

Science Application – Turbulent Fluid Flow Research

Georgia Tech professor P. K. Yeung, whose research focuses on fundamental studies of turbulent fluid flow using large-scale computation, has hundreds of terabytes of data stored at multiple supercomputing centers including NICS, NCSA, TACC, and PSC. The cross-site computing capability required by his research would be significantly enhanced by a wide area file system with high performance, predictable data movement.

Enhanced Infrastructure Applications

The DANCES project team has identified two types of infrastructure applications for SDN enhancement. The first is file transfer integrated with resource management and scheduling, which is an important component of scientific computational infrastructure. Data movement in this scenario will become a scheduled event, often as part of a workflow. The second area to be addressed is data movement initiated in response to file access across a distributed wide area file system.

Infrastructure Application – Resource Management and Scheduling

An innovative activity of the DANCES project is the investigation and prototyping of the integration of SDN capability with the resource management and scheduling subsystems of the advanced computational cyberinfrastructure of PSC, NICS, NCSA and TACC.  The goal is to achieve higher levels of performance and predictability of data transfers for science applications and distributed research projects. The prototype and integration activity will include deployment of software and SDN OpenFlow enabled networking switch devices at the collaborating sites. Installing these devices at the endpoints in conjunction with Internet2 AL2S as the backbone service provider enables end-to-end network control as a foundation for this activity.

Along with end-to-end SDN OpenFlow capabilities, integration with the resource management and scheduling systems of resources such as PSC’s and NICS’s supercomputing and mass storage systems will be prototyped. This activity will develop a data transfer queue specifically designed to indicate the need for high performance data transfer between resources of the two centers, and will integrate a data transfer “job” making use of this queue with the SDN OpenFlow capabilities to enable bandwidth reservation. The data transfer mechanisms supported by this integration are the SLASH2 wide area file system and GridFTP initiated via job scheduling.
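
As a concrete illustration of how such a data transfer queue might be used, the following Python sketch builds and submits a Torque/PBS transfer job that wraps a GridFTP copy. The "dtn" queue name, the bandwidth_mbps resource attribute, and the host names are hypothetical placeholders rather than the actual DANCES interface.

    #!/usr/bin/env python
    # Hedged sketch: submit a bulk data transfer as a scheduled Torque job.
    # The "dtn" queue and "bandwidth_mbps" resource attribute are hypothetical
    # placeholders for whatever the DANCES Torque integration defines.
    import subprocess
    import textwrap

    def submit_transfer_job(source, destination, bandwidth_mbps=5000,
                            walltime="02:00:00"):
        """Build a qsub script wrapping a GridFTP transfer and submit it."""
        script = textwrap.dedent(f"""\
            #PBS -N dances_transfer
            #PBS -q dtn
            #PBS -l walltime={walltime}
            #PBS -l bandwidth_mbps={bandwidth_mbps}
            globus-url-copy -p 8 -fast {source} {destination}
            """)
        # qsub reads the job script from stdin and prints the assigned job ID.
        result = subprocess.run(["qsub"], input=script, text=True,
                                capture_output=True, check=True)
        return result.stdout.strip()

    if __name__ == "__main__":
        job_id = submit_transfer_job(
            "gsiftp://dtn.nics.example.org/scratch/user/dataset.tar",
            "gsiftp://dtn.psc.example.org/archive/user/dataset.tar")
        print("Submitted transfer job", job_id)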

The expected outcome of this activity is that a user of the SDN-enhanced resources at PSC and NICS will be able to add a high performance reserved network resource to the end user’s overall workflow. This would demonstrate tight coupling and integration between distributed resources: a compute capability at NICS, a quality of service (QoS) or reserved bandwidth data transfer capability between NICS and PSC, and a compute capability at PSC.

For prototyping and testing cross-site scheduling, PSC and NICS are adapting their deployments of the Torque Resource Manager (a variant of PBS) to support cross-site, SDN-enabled bandwidth reservation and data transfer scheduling. Cross-site resource scheduling will eventually be fully supported at XSEDE sites once the Uniform Interface to Computing Resources (UNICORE) grid software, which has recently been approved for addition to the XSEDE software stack, is deployed. UNICORE interfaces to existing site resource managers and provides unified cross-site submission of compute jobs, staging of data transfers, support for various data transfer protocols, and workflow control. PSC, NICS, NCSA and TACC will investigate and prototype integration of SDN capabilities with the data staging and workflow capabilities of UNICORE.

Infrastructure Application – Wide Area File Systems

The second application category targeted for SDN integration is bandwidth reservation support directly within a wide area distributed file system itself. DANCES is investigating, prototyping and implementing SDN-enabled capabilities for PSC’s SLASH2 wide area file system. Three general integration activities are being explored: implementation of an SDN OpenFlow enabled default QoS and/or bandwidth reservation for the wide area file system protocol traffic; development of tools to poll and monitor wide area file system activity and initiate appropriate OpenFlow capabilities to add or subtract QoS and bandwidth resources; and development of user commands that precede movement of data to/from the wide area file system and initiate OpenFlow capabilities to add or subtract QoS and bandwidth resources.
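
A user command of the kind described above might look like the following Python sketch, which asks a bandwidth coordination service to set up QoS before a copy into or out of the wide area file system and releases it afterwards. The REST endpoint, JSON fields, and file paths are hypothetical placeholders, not an implemented DANCES interface.

    #!/usr/bin/env python
    # Hedged sketch of a "reserve bandwidth, copy, release" user command.
    # The coordinator URL and JSON fields are hypothetical placeholders.
    import shutil
    import requests

    COORDINATOR = "https://coordinator.psc.example.org/api/reservations"

    def copy_with_reservation(src, dst, rate_mbps, user):
        # Request QoS/bandwidth for this transfer before moving any data.
        resp = requests.post(COORDINATOR, json={"user": user,
                                                "src_site": "NICS",
                                                "dst_site": "PSC",
                                                "rate_mbps": rate_mbps})
        resp.raise_for_status()
        reservation_id = resp.json()["id"]
        try:
            # Ordinary POSIX copy across the mounted SLASH2 file system.
            shutil.copy(src, dst)
        finally:
            # Release the reserved bandwidth whether or not the copy succeeded.
            requests.delete(f"{COORDINATOR}/{reservation_id}")

    if __name__ == "__main__":
        copy_with_reservation("/slash2/projects/turbulence/run042.dat",
                              "/local/scratch/run042.dat",
                              rate_mbps=4000, user="researcher01")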

If DANCES can demonstrate quality of service and bandwidth reservation capabilities for SLASH2 between PSC and NICS, this capability could be deployed across all the XSEDE Service Provider (SP) sites that implement SLASH2. Collaboration with the turbulent fluid flow researcher at Georgia Tech will provide a science use case for testing the SDN-enabled wide area file system.

The Galaxy project is leveraging the existing 10GbE connection between the Galaxy main portal at Penn State and PSC, and adopting PSC’s SLASH2 distributed file system architecture for seamless integration with PSC compute and storage resources. The ongoing project collaboration has a demonstrated need for the network bandwidth allocation capability that an SDN-enabled SLASH2 file system and network connection will offer. In the past, bandwidth reservation for priority applications using the 10GbE connection between PSC and Penn State was configured statically, an approach that lacked the flexibility to respond in real time to application needs.  Providing the major users of the connection, such as the Galaxy project, with access to SDN network bandwidth scheduling integrated into SLASH2 will enable efficient coordination of bandwidth requests and maximize network utilization.

SDN capability will also enhance SLASH2 operationally by enabling bandwidth scheduling to coincide with the file system replication service.

DANCES Network Environment

Along with 10GbE or 100GbE external site connections, the collaborating sites have deployed or are moving toward a Science DMZ architecture for their high performance computing resources. Each site includes a perfSONAR (PERformance focused Service Oriented Network monitoring ARchitecture) network measurement platform for network performance testing.

SDN-enhanced Infrastructure for Network Bandwidth Scheduling

The basic set of network service requests supported by DANCES includes end-to-end (VLAN-to-VLAN) path creation and bandwidth request and reservation.  OpenFlow 1.3 capable switch hardware is being deployed at the collaborating sites.  For the initial phase of the project, a single OpenFlow controller based on the Ryu framework is installed at PSC to manage flow rules and provide the “Southbound API” interface to the OpenFlow switches.  Control plane communication is currently done out-of-band via publicly routed Layer 3 connections.
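
As a minimal sketch of what such controller logic could look like, the Ryu application below installs an OpenFlow 1.3 meter and a flow rule that rate-limits traffic on one VLAN; the VLAN ID, rate, and forwarding action are illustrative assumptions, not the actual DANCES flow rules.

    # Minimal Ryu (OpenFlow 1.3) sketch: rate-limit one VLAN with a meter.
    # VLAN 3100 and the 2 Gbit/s rate are illustrative assumptions.
    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    class BandwidthSlicer(app_manager.RyuApp):
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def switch_features_handler(self, ev):
            dp = ev.msg.datapath
            ofp = dp.ofproto
            parser = dp.ofproto_parser

            # Meter 1 drops traffic above 2,000,000 kbps (2 Gbit/s).
            bands = [parser.OFPMeterBandDrop(rate=2000000, burst_size=0)]
            dp.send_msg(parser.OFPMeterMod(datapath=dp, command=ofp.OFPMC_ADD,
                                           flags=ofp.OFPMF_KBPS, meter_id=1,
                                           bands=bands))

            # Traffic tagged with VLAN 3100 passes through meter 1, then is
            # forwarded by the switch's normal pipeline.
            match = parser.OFPMatch(vlan_vid=(ofp.OFPVID_PRESENT | 3100))
            actions = [parser.OFPActionOutput(ofp.OFPP_NORMAL)]
            inst = [parser.OFPInstructionMeter(meter_id=1),
                    parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS,
                                                 actions)]
            dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                          match=match, instructions=inst))

An application of this kind would be launched with ryu-manager and driven by the coordination software described next.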

PSC hosts the DANCES service coordination software, CONGA, which provides bandwidth management and control for the DANCES services.  CONGA receives requests for bandwidth from the infrastructure applications, checks the authorization of the requesting user/project, tracks network resource allocation, and signals the OpenFlow controller when a change to the flows and/or bandwidth allocation is to be initiated.
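
The following Python sketch illustrates the kind of bookkeeping such a coordinator performs: an authorization check, accounting of reserved bandwidth against path capacity, and a signal to the OpenFlow controller. The class and method names are illustrative and are not CONGA’s actual interface.

    # Hedged sketch of coordinator-style bookkeeping; not CONGA's real API.
    class BandwidthCoordinator:
        def __init__(self, capacity_mbps, authorized_projects, controller):
            self.capacity = capacity_mbps
            self.authorized = set(authorized_projects)
            self.controller = controller      # pushes/removes OpenFlow rules
            self.allocations = {}             # reservation id -> rate in Mbps

        def request(self, reservation_id, project, rate_mbps):
            if project not in self.authorized:
                raise PermissionError(f"{project} is not authorized for QoS requests")
            if sum(self.allocations.values()) + rate_mbps > self.capacity:
                raise RuntimeError("insufficient bandwidth remaining on the path")
            self.allocations[reservation_id] = rate_mbps
            self.controller.add_rate_limited_flow(reservation_id, rate_mbps)

        def release(self, reservation_id):
            rate = self.allocations.pop(reservation_id, None)
            if rate is not None:
                self.controller.remove_flow(reservation_id)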

Evaluation and Metrics

The measurement of DANCES performance is important both for evaluation of the project’s success in attaining its goals and more broadly as an indicator of the validity of the assumptions and approach of bandwidth management underlying the project.  The criteria for determining success of DANCES are:

  • Verified end-to-end SDN operation as reported by the SDN infrastructure controllers
  • Requested bandwidth is reserved and used by the applications
  • Readily reproducible deployment of the SDN-enabled infrastructure at other sites
  • Documentation of the investigation and causes if any of the above criteria cannot be met

The metrics that have been identified as a basis for evaluating the success of the project are:

  • QoS slice compliance (OpenFlow v1.3 Section 5.7) and vendor-based metric output
  • Bandwidth available
  • Bandwidth used by application (Efficiency)
  • Limiting factors such as maximum disk output for context
  • Resource allocation wait times
  • TCP/IP behavior caused by the SDN infrastructure that negatively affects network performance (e.g. high degree of packet loss/retransmission or systemic packet re-ordering)

Bandwidth usage will be determined from the device counters of the Layer 2/3 equipment in the path and from what the application reports. Tcpdump/tcptrace will be used to look for network abnormalities that may be introduced by the SDN infrastructure.
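
For example, the average rate between two readings of an interface byte counter can be computed as in the small sketch below; the counter values shown are illustrative.

    # Derive average throughput from two samples of an interface byte counter,
    # as polled from the Layer 2/3 devices in the path.
    def throughput_mbps(bytes_t0, bytes_t1, seconds, counter_bits=64):
        """Average rate in Mbit/s, allowing for counter wrap between samples."""
        delta = bytes_t1 - bytes_t0
        if delta < 0:                      # counter wrapped between samples
            delta += 2 ** counter_bits
        return (delta * 8) / (seconds * 1_000_000)

    # Example: 7.5 GB moved in 30 s is roughly 2000 Mbit/s.
    print(throughput_mbps(1_000_000_000, 8_500_000_000, 30.0))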

Web10G is expected to be incorporated with iperf and data transfer server software before the end of the DANCES project and will provide enhanced visibility into issues with OpenFlow slices or disk transfers.

Security Considerations

The implementation of DANCES requires addressing multiple aspects of data movement including security and resource allocation. Fundamentally, the security of the XSEDE environment must be maintained. XSEDE system-wide authentication and authorization is managed by X.509 Public Key Infrastructure (PKI) security credentials (certificates and private keys). End-to-end authentication, authorization, and error logging / notification are required to verify that users initiating resource requests have the right to do so. Denied requests will be logged for review and follow-up if necessary. Successful requests will be logged for gathering metrics to evaluate the usage volume, e.g., number of requests and amount of data transferred per unit time, bandwidth requested, and number of different paths.
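
A minimal sketch of the authorization check and request logging described above is given below; the subject DN allow list and log format are placeholders, not the production XSEDE mechanism.

    # Hedged sketch: grant/deny a bandwidth request based on the caller's
    # X.509 subject DN, logging the outcome for later metrics gathering.
    import logging

    logging.basicConfig(filename="dances_requests.log", level=logging.INFO,
                        format="%(asctime)s %(levelname)s %(message)s")

    AUTHORIZED_DNS = {
        "/C=US/O=Example/OU=DANCES/CN=Test User",   # hypothetical subject DN
    }

    def handle_request(subject_dn, rate_mbps, src_site, dst_site):
        if subject_dn not in AUTHORIZED_DNS:
            logging.warning("DENIED %s %s->%s %d Mbps",
                            subject_dn, src_site, dst_site, rate_mbps)
            return False
        logging.info("GRANTED %s %s->%s %d Mbps",
                     subject_dn, src_site, dst_site, rate_mbps)
        return True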

Integration of DANCES capabilities into the XSEDE allocation infrastructure requires extensions to the XSEDE Resource Description Repository (RDR) and the Central Database (XDCDB). A description of network bandwidth as a service will be added to the RDR.  An expanded prototype XDCDB with entries for the DANCES development team has been made available to the DANCES infrastructure via a RESTful interface.
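
As an illustration, the coordination software could consult that RESTful interface roughly as in the following sketch; the URL and field names are hypothetical placeholders rather than the actual XDCDB schema.

    # Hedged sketch: check the prototype XDCDB REST interface for an active
    # allocation before honoring a bandwidth request.
    import requests

    XDCDB_API = "https://xdcdb-test.example.org/api/allocations"   # hypothetical

    def has_active_allocation(username, resource="dances.network"):
        resp = requests.get(XDCDB_API, params={"user": username,
                                               "resource": resource})
        resp.raise_for_status()
        return any(rec.get("status") == "active" for rec in resp.json())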

Interfacing OpenFlow commands across each section of the end-to-end path and carrying credentials from local/metropolitan networks through Internet2 AL2S must also be supported.

Transition to Production Use and Ongoing Support

After the prototypes of DANCES capabilities are developed, tested and proven successful across the collaborating sites, consideration for production use will be pursued at the collaborating sites. Documentation and demonstration of usability, robustness and applicability for production use will be prepared and performed. Additional policy and operational issues to be addressed prior to adoption for production use by any of the collaborators are anticipated and are discussed below.

Policy Considerations

Once SDN-enabled infrastructure applications produced by DANCES are proven to improve data movement, policy considerations will need to be investigated. Policies regarding who and how many users will have access to the SDN-enabled capabilities at the sites, how much bandwidth to allocate and reserve for this capability, how to handle priority, congestion and contention issues, and other policy questions will have to be investigated and discussed. A draft policy document will be developed and shared with the participants based on investigation of the policy considerations.

Operational Considerations

Once the technical capabilities are demonstrated and the major policy issues addressed, implementation into production will be pursued with site management and with XSEDE. A review of the technical capabilities of the project, the results of the prototyping and testing, and any remaining policy issues will be presented, and a decision to run the capability in production will be requested. The sites can adopt the DANCES capabilities into production once all of their issues and concerns have been addressed.