
Exascale
Exascale means enormous computing power. Imagine a system capable of performing a quintillion calculations per second; that is the power of exascale computing. Altair is at the forefront of the exascale transformation, exploring new levels of high-performance computing (HPC). In this new exascale era, we are working to develop purpose-built, pioneering tools that scale, run on next-generation systems, and meet increasingly complex HPC requirements in machine learning, deep learning, and multiphysics.
HPC performance depends on the software that runs the computations
Powerful computing hardware needs equally powerful software tools, and with exascale technology, managing workloads efficiently matters more than ever. Altair® PBS Professional® orchestrates it all with automated job scheduling, management, monitoring, and reporting.
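To make the automation concrete, here is a minimal sketch of how a batch job might be handed to PBS Professional from Python. It assumes the standard PBS client commands (qsub, qstat) are installed and on PATH; the queue name, resource request, and my_solver executable are illustrative placeholders rather than part of any particular Altair configuration.

```python
# Minimal sketch: submitting a batch job to a PBS Professional cluster.
# Assumes qsub/qstat are on PATH; the queue name, resource request, and
# solver command are placeholders for whatever your site actually provides.
import subprocess
import tempfile

JOB_SCRIPT = """#!/bin/bash
#PBS -N exascale_demo
#PBS -l select=2:ncpus=36:mpiprocs=36
#PBS -l walltime=00:30:00
#PBS -q workq

cd $PBS_O_WORKDIR
mpirun ./my_solver input.dat   # hypothetical MPI application
"""

def submit(script_text: str) -> str:
    """Write the job script to a temporary file, submit it with qsub, and return the job ID."""
    with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
        f.write(script_text)
        path = f.name
    result = subprocess.run(["qsub", path], capture_output=True, text=True, check=True)
    return result.stdout.strip()  # e.g. "1234.pbs-server"

if __name__ == "__main__":
    job_id = submit(JOB_SCRIPT)
    print("Submitted:", job_id)
    # qstat <jobid> reports the job's current state (queued, running, finished).
    print(subprocess.run(["qstat", job_id], capture_output=True, text=True).stdout)
```

In practice the same directives can live in a standalone .pbs script submitted directly with qsub; the Python wrapper is only there to show how submission and status checks can be scripted.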
Learn more about Altair PBS Professional
Exploring Exascale
We have partnered with Argonne National Laboratory to support exascale computing research on the HPE-designed Aurora system, and that is only the beginning. As the exascale era arrives, we are working to develop purpose-built, pioneering tools that scale, run on next-generation systems, and meet increasingly complex HPC requirements in artificial intelligence (AI), machine learning, deep learning, and multiphysics. Supercomputers matter for the future, and as a member of the U.S. Department of Energy's Exascale Computing Project (ECP) Industry Council, we are helping the United States build an exascale computing ecosystem. We are proud to collaborate and to help better simulate the processes behind precision medicine, additive manufacturing, biofuels, and more.

Optimizing for Exascale
Exascale computing demands the speed and scale to handle billions of operations, which calls for exceptional resiliency and continuous failure recovery. Running exascale systems also takes enormous power, so power management and a good user experience are essential. We've thought through all of this and more for exascale and beyond, including:
- Altair® Accelerator™ Plus for 6-10x greater throughput
- PMIx for fast launching of large MPI jobs
- Highly available PBS Professional servers
- Green Provisioning™
- Hybrid cloud capabilities, ARM64, GPUs, and SSDs
- PBS Professional with a dual licensing model and an extensive plugin framework (see the hook sketch after this list)
- Altair® Control™ for monitoring, analytics, and simulation
- A focus on security
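To give a sense of the plugin framework referenced above, here is a minimal sketch of a queuejob hook, the Python mechanism PBS Professional exposes for site-defined policy. The 24-hour walltime cap and the hook's behavior are illustrative assumptions, not an Altair-recommended policy; consult the PBS Professional Hooks Guide for the exact API available on your version.

```python
# Minimal sketch of a PBS Professional queuejob hook (Python plugin framework).
# Hooks run inside the PBS server, which supplies the "pbs" module; the
# 24-hour cap below is a hypothetical site policy used only for illustration.
import pbs

try:
    e = pbs.event()              # the triggering event (a job being queued)
    job = e.job

    # Enforce a hypothetical cap: longer jobs must be resubmitted to a long-running queue.
    requested = job.Resource_List["walltime"]
    if requested is not None and int(requested) > 24 * 3600:
        e.reject("Jobs longer than 24 hours must be submitted to the long queue.")

    pbs.logmsg(pbs.LOG_DEBUG, "queuejob hook accepted a job from %s" % e.requestor)
    e.accept()
except SystemExit:
    pass                         # accept()/reject() end the hook via SystemExit
except Exception as err:
    e.reject("queuejob hook error: %s" % err)
```

A hook like this is typically registered with qmgr (for example, creating a hook with event=queuejob and importing the script); from then on the server evaluates it for every submission without any change to user workflows.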

Running Massive Computational Workloads
Altair® PBS Professional® runs massive computational workloads, supporting 50,000 nodes per cluster, 10,000,000 jobs per queue, and 1,000 concurrent active users. It's fast, too, with end-to-end throughput of 10,000,000 jobs per hour and a 10-second end-to-end runtime for a 4,000+ node job. Many of the world's largest computing systems rely on PBS Professional for efficient workload management, and today's exascale systems will shape the future of industries from manufacturing to personalized medicine, keeping users at the forefront of innovation. Join us at the leading edge of HPC.
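For workloads that approach job counts like these, users often express many similar tasks as a single PBS job array rather than as individual submissions. The sketch below, under the same assumptions as the earlier example (qsub on PATH, illustrative resource values and worker command), shows one way that might look.

```python
# Minimal sketch: expressing a high-throughput workload as a PBS job array.
# The array bounds, resources, and process_chunk worker are illustrative.
import subprocess

ARRAY_SCRIPT = """#!/bin/bash
#PBS -N throughput_demo
#PBS -l select=1:ncpus=1
#PBS -l walltime=00:05:00
#PBS -J 0-9999

# Each subjob gets its own index through $PBS_ARRAY_INDEX.
./process_chunk --index "$PBS_ARRAY_INDEX"   # hypothetical worker command
"""

with open("array_job.pbs", "w") as f:
    f.write(ARRAY_SCRIPT)

# qsub returns an array identifier such as "5678[].pbs-server".
out = subprocess.run(["qsub", "array_job.pbs"], capture_output=True, text=True, check=True)
print("Submitted array:", out.stdout.strip())
```

Because the scheduler treats the array as one object with many subjobs, the same submission pattern scales from a handful of tasks toward the job volumes described above.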
Featured Resources

Tracking Virus Variants with AI – Argonne National Laboratory Researchers Win Gordon Bell Special Prize
The COVID-19 pandemic has impacted the entire planet, and researchers continue to investigate its catalyst: the SARS-CoV-2 virus and its variants. Discovering variants of concern (VOCs) quickly can save lives by giving scientists time to develop effective vaccines and treatments, but existing variant-tracking methods can be slow. A team of researchers at Argonne National Laboratory, along with university and industry collaborators, tackled the problem of tracking virus variants by using artificial intelligence (AI). The powerful Polaris supercomputer at the Argonne Leadership Computing Facility (ALCF), which is enabling science in the run-up to the Aurora exascale system, powered the research with help from Cerebras' AI hardware accelerator and NVIDIA's GPU-accelerated Selene system. Polaris is equipped with GPUs and with workload orchestration by Altair® PBS Professional®. The project team won the ACM's prestigious 2022 Gordon Bell Special Prize for High Performance Computing-Based COVID-19 Research. The results the Argonne researchers and their collaborators have achieved pave the way for faster, more detailed insight into the virus mutation process, enabling scientists to act on emergent variants and develop countermeasures to reduce severity and slow the spread, ultimately saving lives.

Simulating Supernovas in 3D - University Researchers Advance Space Science with Argonne HPC Resources
Everything in our world and beyond is made from a common set of materials — elements — that combine to become the diverse collection of matter all around us. When a star dies, going supernova in a spectacular explosion, it releases massive quantities of these elements. But how and why stars go supernova remains a mystery, and researchers from Princeton University and the University of California, Berkeley are using supercomputers at the Argonne Leadership Computing Facility (ALCF), including the powerful Polaris supercomputer, to enable 3D supernova simulation. Polaris is boosted by GPUs and equipped with workload orchestration by Altair® PBS Professional®, which automates job scheduling, management, monitoring, and reporting. Efficient workload management is critical for large, complex workloads like these. Enabled by powerful HPC, the researchers have created "the largest collection of sophisticated 3D supernova simulations ever performed."

Outstanding Scalability at NIAR - Advanced crash analysis solution proves twice as fast as leading competitor
To support its goal of accelerating the development cycle, early in 2020 NIAR commissioned a study to assess the scalability of Altair Radioss™, Altair's structural analysis solver for highly nonlinear problems under dynamic loading. Regular support from an Altair engineer ensured swift familiarization with Radioss. The study was performed on Oracle Cloud Infrastructure (OCI), whose bare metal HPC shapes with low-latency RDMA interconnect provided highly scalable infrastructure-as-a-service (IaaS) for Radioss.
