
SDP eNews - August 2018

M20 - Submission, Review, OARs and Report

Since the last eNews in April 2018, the SDP Consortium has successfully submitted a documentation pack of roughly 30 documents for its Pre-CDR (M20) milestone to the SKAO. The pack included key Systems Engineering documents, Interface Control Documents (ICDs) and an updated snapshot of the latest SDP Architecture views since the M19 submission and review in November 2017.

In late June, a two-step review process was undertaken between SDP Consortium representatives and the SKAO M20 review panel. The first step was a documentation review, whose objective was to understand and assess the suitability of, and risks associated with, the SDP design before entering the CDR process. The second step was a face-to-face meeting which first analysed the suitability of the SDP software architecture to meet the needs of its stakeholders, conducted using the SEI ATAM (Architecture Trade-off Analysis Method) and based on previously generated scenarios, and then discussed observations made against the documentation pack in the first step.

The review process raised roughly 340 observations in the Observation Action Register (OAR) in the SKA Jira system, the status of which is shown in the figure below.


Figure 1: OAR Pre-CDR status as per SKA Jira

In mid-July, the SKAO SDP M20 review panel released its cover letter and report, which included categorised recommendations for further analysis and implementation by the Consortium, to be completed in time for the SDP CDR submission in late October 2018.

The SDP commenced its next agile sprint in early July; the main planned areas of focus are:

  • Addressing the OARs raised during the M20 review process, with several workshops planned to work with SKAO on higher-level system/programmatic overlaps

  • Continuing to develop detailed Architectural views taking into account risks/observations raised during the M20 review

  • Documenting prototyping activities behind architectural choices  

  • Further progressing SDP/TM and SDP/SRC interface discussion and documentation.

VP – Vertical Prototyping activities for SDP

The SDP vertical prototyping team focuses on porting computationally intensive algorithms to many-core accelerators such as GPUs or FPGAs. The team uses the native languages of the accelerators, such as CUDA or OpenCL, to maximise the performance of code on the target hardware.

For a given computational algorithm, the team investigates which many-core accelerator is best suited to executing it.

Much of the vertical prototyping effort is spent on studying "on-node" performance. Specifically, we aim to inform the estimation of SDP efficiency. We also try to quantify the limiting factors for any given algorithm or pipeline module. For example, is an algorithm compute or bandwidth bound? Can it be broken down to fit into fast caches?
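The compute-bound versus bandwidth-bound question can be illustrated with a simple roofline-style estimate: compare a kernel's arithmetic intensity (FLOPs per byte moved) against the hardware's ridge point. The sketch below uses illustrative, round numbers for a P100-class GPU, not measured SDP figures, and the 8-FLOP/16-byte kernel is a made-up example.

```python
def arithmetic_intensity(flops, bytes_moved):
    """FLOPs performed per byte moved to/from memory."""
    return flops / bytes_moved

def bound_kind(intensity, peak_flops, peak_bandwidth):
    """Roofline test: below the ridge point a kernel is bandwidth bound."""
    ridge = peak_flops / peak_bandwidth  # FLOP/byte at which compute becomes the limit
    return "compute bound" if intensity >= ridge else "bandwidth bound"

# Illustrative numbers only: a P100-class GPU with roughly
# 4.7 TFLOP/s FP64 peak and 720 GB/s memory bandwidth.
peak_flops = 4.7e12
peak_bw = 720e9

# Hypothetical gridding-like kernel: ~8 FLOPs per sample while
# moving ~16 bytes, i.e. an arithmetic intensity of 0.5 FLOP/byte.
ai = arithmetic_intensity(8, 16)
print(bound_kind(ai, peak_flops, peak_bw))  # bandwidth bound
```

Kernels well below the ridge point (here roughly 6.5 FLOP/byte) gain more from improving data movement, for example by restructuring them to fit into fast caches, than from reducing arithmetic.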

The team also looks at hardware considerations for particular algorithms. For example, how much memory does an algorithm need? If running on an accelerator, does it need a fast connection to the host (node)?

The team also aims to quantify the cost/benefit of using native languages like CUDA compared with pragma-based approaches. For example, does the performance benefit of specialised code that is more difficult to understand outweigh the value of a cleaner, more maintainable codebase?

The team has a strong focus on industrial engagement. Recent work with Intel has focused on producing a gridding algorithm written in OpenCL specifically for execution on FPGAs. This has been compared to previous work (with NVIDIA and NAG) that produced a gridding algorithm optimised for GPUs (Figure 2).


Figure 2 - Three successive optimisations of a GPU gridding algorithm (magenta, orange and grey) for three simulated SKA datasets (30-56, 56-82, 82-70). Different optimisation approaches have a significant impact on the time taken to grid the data.
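Gridding accumulates each visibility sample onto a regular uv grid prior to imaging. The toy sketch below uses nearest-neighbour assignment to show the data flow; production gridders (including the GPU and FPGA versions discussed above) convolve each sample with a kernel and are heavily optimised, and all names here are illustrative.

```python
def grid_visibilities(vis, grid_size, cell):
    """Nearest-neighbour gridding sketch: accumulate complex visibility
    samples onto a grid_size x grid_size uv grid centred on (0, 0).
    vis:  iterable of (u, v, value) with u, v in wavelengths
    cell: uv cell size in wavelengths
    """
    grid = [[0j] * grid_size for _ in range(grid_size)]
    half = grid_size // 2
    for u, v, value in vis:
        iu = int(round(u / cell)) + half
        iv = int(round(v / cell)) + half
        if 0 <= iu < grid_size and 0 <= iv < grid_size:
            grid[iv][iu] += value  # scatter-add: the contended step on GPUs
    return grid

samples = [(0.0, 0.0, 1 + 0j), (10.0, -10.0, 0 + 1j)]
g = grid_visibilities(samples, grid_size=8, cell=5.0)
print(g[4][4])  # (1+0j): the zero-spacing sample lands at the grid centre
```

The scatter-add in the inner loop is where parallel implementations diverge: many threads may target the same cell, so the optimisation strategies compared in Figure 2 largely differ in how they organise or avoid that contention.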

Measurement Set v3 (MSv3) project

On the 9th November 2017 a Memorandum of Understanding was signed between the SKA Office and NRAO for the design and development of new data models to address the data processing requirements of next-generation telescopes.

After signing, the “Measurement Set v3” project was formed to investigate the requirements, current limitations and future options for a new version of the Measurement Set.

This project is a collaboration between the CASA group (in particular for this project, developers from NRAO, NAOJ and ESO) and SKA SDP teams and is scheduled for completion in December 2018.

The key areas of focus for the project are the logical data model (what to store in the Measurement Set) and the physical data model (how to store the visibilities). For the latter, performance benchmarks will be established.

The logical data model is defined by the requirement that the Measurement Set shall contain the visibilities and enough metadata to reduce the data.

Figure 3 below shows the current logical data model and the tables that it contains for version 2 of the Measurement Set.

The proposed updates to the schema for the next version, v3, of the Measurement Set are:

  • Introduction of explicit keys to metadata

  • Data, weight and flag versioning

  • Standardisation of calibration tables

  • Formalisation of a BEAM table with beam model information

  • Support for phased array interferometers

  • Minor updates for VLBI

The schema updates will be written up in a new casacore note which is currently in progress. The first visual representation will be made in draw.io and changes will be distributed among all Casacore users for feedback.
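The value of explicit keys, the first proposed update above, can be seen in a toy comparison: in an MSv2-style layout, columns such as ANTENNA1 implicitly index rows of the ANTENNA subtable by position, whereas an explicit key makes the relation self-describing. The column and table names below are illustrative, not the actual MSv3 schema.

```python
# MSv2-style: a metadata reference is an implicit row index into a subtable.
antenna_rows = ["ANT_A", "ANT_B", "ANT_C"]

def antenna_name_implicit(row_index):
    # Fragile: breaks silently if subtable rows are reordered or deleted.
    return antenna_rows[row_index]

# MSv3-style sketch: an explicit ANTENNA_ID key identifies the row itself.
antenna_table = {101: "ANT_A", 102: "ANT_B", 103: "ANT_C"}

def antenna_name_by_key(antenna_id):
    # Robust to reordering; a dangling reference raises KeyError loudly.
    return antenna_table[antenna_id]

print(antenna_name_implicit(1), antenna_name_by_key(102))  # ANT_B ANT_B
```

Explicit keys also make versioning (of data, weights and flags) easier to express, since multiple versions can reference the same keyed metadata rows unambiguously.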


Figure 3: Logical data model for MSv2

The physical data model covers how the visibilities are stored. Currently this is done in Casacore, a joint radio astronomy project providing common software functionality, actively maintained by the CASA group and ASTRON. About 20% of Casacore is the Tables system, which handles storing the data; this is called the Casacore Tables Data System (CTDS) and is where Measurement Sets are stored.

The CTDS currently has a few limitations:

  • It creates many small files (every subtable stores its own set of files), which does not work well with Lustre file systems.

  • The number of rows in a Measurement Set is limited to a 32-bit count.

  • The locking mechanism does not work well with parallel access, and there is currently no MPI support.

  • The read and write speeds of the CTDS were in question, which motivated performance benchmarks.

These performance benchmarks were undertaken during the course of the MSv3 project to determine data access patterns and whether the CTDS should be replaced.

After a series of benchmarking tests, the preliminary conclusion was that CTDS performance is similar to that of HDF5, and in some cases the CTDS was much faster. There is therefore no obvious reason, at this stage, to replace the CTDS with another technology.
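A storage-format benchmark of this kind ultimately times representative access patterns. The sketch below times the simplest one, a sequential write then a sequential read of a single data column, against a plain binary file; it is a minimal illustration of the method, not the actual CTDS/HDF5 benchmark harness, and all names are illustrative.

```python
import os
import struct
import tempfile
import time

def time_column_io(n_rows, values_per_row):
    """Time a sequential write then a sequential read of one float32
    data column, the simplest access pattern a storage manager must
    handle well. Returns (write_seconds, read_seconds, total_bytes)."""
    payload = struct.pack(f"<{values_per_row}f", *([1.0] * values_per_row))
    with tempfile.NamedTemporaryFile(delete=False) as f:
        path = f.name
        t0 = time.perf_counter()
        for _ in range(n_rows):
            f.write(payload)          # one row of the data column
        f.flush()
        os.fsync(f.fileno())          # include the flush-to-disk cost
        write_s = time.perf_counter() - t0
    t0 = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(1 << 20):        # stream back in 1 MiB chunks
            pass
    read_s = time.perf_counter() - t0
    os.unlink(path)
    return write_s, read_s, n_rows * len(payload)

w, r, nbytes = time_column_io(n_rows=1000, values_per_row=64)
print(f"{nbytes} bytes: write {w:.4f}s, read {r:.4f}s")
```

Real benchmarks would repeat this for the access patterns that matter to a pipeline, such as strided reads of a single channel or random row access, since those are exactly the patterns the Storage Managers must serve efficiently.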

With respect to the links, learnings and relationship of the MSv3 project to the SDP: the MSv3 logical data model maps to the SDP visibility data model, and several other data models, such as the calibration and beam data models, are stored as part of the Measurement Set. The storage system should translate well onto the SDP Architecture, in which it is essential to understand data access patterns to ensure efficiency in the so-called Storage Managers.

At the end of the project, currently scheduled for December 2018, the following will be delivered:

  • Updated logical data format -> new schema

  • Document with benchmarks, estimates of SKA/ngVLA data rates and identification of bottlenecks in CTDS.

  • A proposed list of changes to the CTDS to mitigate the biggest issues.

  • Detailed design of the science data model.

  • Prototype implementation of the newly defined interfaces accessing existing Measurement Sets.

  • Prototype implementation of storage managers providing a path to an HPC implementation.

  • Costed plan for software implementation of the new data model.

