Technology Evolution

In HiDALGO, we advance HPC, HPDA and AI technologies in order to improve data-centric computation in general. Although we concentrate mainly on three specific pilot applications, the technology baseline we develop in this project can be extended to other application domains as well. The technology evolution in HiDALGO integrates our scientific objectives into a platform, which documents the success of the individual project developments and generates the impact required to establish a sustainable Centre of Excellence. The aspects of the technology evolution we pursue in this project can be divided into the following parts:

  • Seamless integration of HPC and HPDA technology: On the one hand, we perform software integration; on the other hand, we involve the vendors in a well-defined co-design methodology to overcome the state-of-the-art bottlenecks for future systems. Of particular interest are the methodologies for live data analytics and visualisation, which will be developed during the lifetime of the project (a minimal coupling sketch follows this list).

  • Increase application scalability by optimising or porting the involved kernels: For HPC and HPDA applications, performance improvements are of utmost importance in HiDALGO. The project follows a strong benchmarking policy, which is used to assess the performance of the applications and which also helps to strengthen the co-design activities. Continuous benchmarking of application kernels will be performed at different design levels and for different implementations, in order to understand the performance of the applications and their potential (see the timing sketch after this list). This approach is supported by access to new, innovative architectures, which is used to understand the optimisation and porting approaches in more detail.

  • Development of the intelligent HiDALGO platform: Intelligent workload management is a major asset of HiDALGO. AI-assisted workflows will be implemented so that applications are evaluated and validated automatically. A further important aspect of intelligent application lifecycle handling is its support for pre- and post-processing. To this end, the HiDALGO workflow management is exposed to the application developers: they define the prerequisites for the execution, the data dependencies and the required application kernels, and the framework compiles this description and executes it on the fly (see the workflow sketch after this list). These intelligent systems are able to handle data cleaning and preparation as well as data analytics, which are very important for the influx data integration and also for other HPC domains.

  • Improve data management and analytics capabilities for HPC and HPDA environments: Here we focus on the improvement of data management and analytics in general. HPDA systems are usually based on Cloud infrastructure, which typically relies on local storage and lacks efficient interconnects such as InfiniBand. We develop methods to increase the performance of analytics, making use of efficient algorithms and their corresponding implementations. In summary, we address the weaknesses in the parallelisation of modern data analytics frameworks by applying message-passing-oriented functionality for in-situ data processing (see the reduction sketch after this list).
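
The following is a minimal sketch of the live-analytics coupling referred to in the first item, written in Python with mpi4py. It is an illustration under our own assumptions, not the HiDALGO implementation: simulation ranks advance a dummy field and stream a per-step summary statistic to a dedicated analysis rank, which could in turn feed a visualisation front end. The rank partitioning and all names are hypothetical.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
STEPS = 5

if rank == 0:
    # Analysis rank: receive one statistic per simulation rank per step,
    # e.g. to drive a live dashboard or visualisation.
    for step in range(STEPS):
        values = [comm.recv(source=src, tag=step) for src in range(1, size)]
        print(f"step {step}: live mean over ranks = {np.mean(values):.4f}")
else:
    # Simulation ranks: advance a dummy field and stream a summary each step.
    field = np.random.rand(1000)
    for step in range(STEPS):
        field = 0.5 * (np.roll(field, 1) + np.roll(field, -1))  # stand-in timestep
        comm.send(field.mean(), dest=0, tag=step)
```

Run with, e.g., `mpirun -n 4 python live_coupling.py` (at least two ranks are required, since one rank is dedicated to analysis).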
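
To make the benchmarking policy of the second item concrete, a minimal timing sketch in Python with mpi4py follows. The stencil kernel is a hypothetical stand-in for a pilot-application kernel; the min/max/average times across ranks are the kind of figures a continuous benchmark would record per design level and implementation.

```python
import numpy as np
from mpi4py import MPI

def kernel(a):
    # Hypothetical three-point stencil standing in for a real application kernel.
    return 0.5 * (np.roll(a, 1) + np.roll(a, -1))

comm = MPI.COMM_WORLD
a = np.random.rand(1_000_000)

comm.Barrier()                 # synchronise all ranks before timing
t0 = MPI.Wtime()
for _ in range(50):            # repeat to amortise timer resolution
    a = kernel(a)
elapsed = MPI.Wtime() - t0

# Aggregate per-rank timings into the usual benchmark summary.
t_min = comm.reduce(elapsed, op=MPI.MIN, root=0)
t_max = comm.reduce(elapsed, op=MPI.MAX, root=0)
t_sum = comm.reduce(elapsed, op=MPI.SUM, root=0)
if comm.Get_rank() == 0:
    n = comm.Get_size()
    print(f"ranks={n} min={t_min:.3f}s max={t_max:.3f}s avg={t_sum/n:.3f}s")
```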
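
The third item describes a declarative application lifecycle: developers state prerequisites, data dependencies and kernels, and the framework resolves and executes them. A minimal sketch of that idea in plain Python follows; the stage names and the dictionary format are hypothetical, chosen only to show dependency-driven execution.

```python
# Hypothetical workflow description: each stage names its dependencies and
# the callable to run; real stages would be pre-processing, kernels, analytics.
workflow = {
    "preprocess": {"deps": [],             "run": lambda: print("cleaning and preparing data")},
    "simulate":   {"deps": ["preprocess"], "run": lambda: print("executing application kernel")},
    "analyse":    {"deps": ["simulate"],   "run": lambda: print("post-processing and analytics")},
}

def execute(workflow):
    """Run every stage once, as soon as all of its dependencies have finished."""
    done = set()
    while len(done) < len(workflow):
        ready = [n for n, s in workflow.items()
                 if n not in done and all(d in done for d in s["deps"])]
        if not ready:
            raise ValueError("cyclic or unsatisfiable dependencies")
        for name in ready:
            workflow[name]["run"]()
            done.add(name)

execute(workflow)
```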
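
Finally, the fourth item argues for message-passing-oriented in-situ processing instead of moving raw data, as shuffle-based analytics frameworks do. The sketch below (again Python with mpi4py, on hypothetical data) computes a global mean and variance by reducing three scalars per rank rather than gathering the data itself, which matters precisely on infrastructures without fast interconnects.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
local = np.random.rand(1_000_000)   # stand-in for locally produced results

# In-situ reduction: each rank contributes only (count, sum, sum of squares),
# so no raw data crosses the (possibly slow) interconnect.
partial = np.array([local.size, local.sum(), np.square(local).sum()])
total = np.zeros_like(partial)
comm.Reduce(partial, total, op=MPI.SUM, root=0)

if comm.Get_rank() == 0:
    n, s, sq = total
    mean = s / n
    var = sq / n - mean**2
    print(f"global mean={mean:.6f} variance={var:.6f}")
```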