
This page is intended to help users of previous Pawsey supercomputing infrastructure (such as Magnus, Galaxy and Zeus) to transition to using the Setonix supercomputer.


This migration guide focuses on the changes to and additional features of the new system, and assumes the reader is familiar with working with supercomputers. The Supercomputing Documentation provides background context to important supercomputing concepts, which may be helpful if you are getting started with using supercomputers for the first time.

Throughout this guide links are provided to relevant documentation pages in the general Supercomputing Documentation and to the Setonix User Guide, which provides documentation specifically for the Setonix system.

The guide has been updated now that Magnus and Zeus have been decommissioned.

Starting with Setonix

Setonix is the new petascale supercomputer at the Pawsey Supercomputing Centre. It is arriving in two phases:

  • Setonix Phase 1: An initial 2.7 petaflop supercomputer based on AMD CPU nodes, available for merit projects in 2022.
  • Setonix Phase 2: The supercomputer is expanded to 50 petaflops with AMD CPU and GPU nodes, available to merit projects in 2023.

Setonix replaces Pawsey's previous generation of infrastructure, including the Magnus, Galaxy and Zeus supercomputers and associated filesystems. This migration guide outlines changes for researchers transitioning from these legacy systems to Setonix Phase 1.

Significant changes to the compute architecture include:

  • Moving from 24-core Intel nodes to 128-core AMD nodes (significantly more cores per node and in total)
  • Changing from 2.5 GB to 2 GB of memory per core (slightly less memory per core)
  • Changing from 64 GB to 256 GB of memory per node (significantly more memory per node)

For more details refer to the System overview section of the Setonix User Guide.

The Setonix operating system and environment will be a newer version of the Cray Linux Environment familiar to users of Magnus and Galaxy. It will also include scheduling features previously provided separately on Zeus. This will enable the creation of end-to-end workflows running on Setonix, as detailed in the following sections.

Supercomputing filesystems

There are several new filesystems that will be available with the Setonix supercomputer.

  • The previous 3 petabyte /scratch filesystem is replaced by a new 14 petabyte /scratch filesystem.
  • The previous /home filesystem is replaced by a new /home filesystem.
  • For software and job scripts, the previous /pawsey and /group filesystems are replaced by a new /software filesystem.
  • For project data, the previous /group filesystem is replaced by the Acacia object store.

These new filesystems will have the following limits:

Filesystem   Retention                 Capacity             File count
/home        Duration of project(s)    1 GB per person      10,000 files per person
/software    Duration of project       256 GB per project   100,000 files per project
/scratch     30 days per file          1 PB per project     1,000,000 files per project
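Usage against these limits can be checked from a login node. A minimal sketch, assuming the filesystems are Lustre-backed and the standard `lfs quota` tool is available (the project group name is illustrative):

```shell
# Check your personal usage on /home (per-user quota)
lfs quota -u $USER /home

# Check project-wide usage on /software and /scratch
# (replace "project123" with your actual project group name)
lfs quota -g project123 /software
lfs quota -g project123 /scratch
```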

For more information on Pawsey filesystems refer to the File Management page.

For information specific to Setonix refer to the Filesystems and data management section of the Setonix User Guide.

Loading modules and using containers

The software environment on Setonix is provided by a module environment very similar to that of the previous supercomputing systems.

The module environment is provided by Lmod, which was used previously on Zeus and Topaz, rather than Environment Modules used on Magnus and Galaxy. The usage commands are extremely similar, with some minor differences in syntax and output formats.

Setonix runs a newer version of the Cray Linux Environment than was present on Magnus and Galaxy, and continues to use programming environment modules to select the compilation environment.

For containers, researchers can continue to use Singularity in a similar way to previous systems. Some system-wide installations (in particular, for bioinformatics) are now performed as container modules using SHPC: these packages are installed as containers, but the user interface is the same as for natively compiled applications (load the module, run the executables).
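In practice, a tool deployed with SHPC behaves like any other module, while a container you manage yourself is run through Singularity directly. A minimal sketch (the module, image and tool names are illustrative):

```shell
# A bioinformatics tool installed as a container module via SHPC:
# load it and call its executables as if natively installed
module load samtools/1.15
samtools --version

# A container you manage yourself, run through Singularity directly
singularity exec my_image.sif my_application --help
```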

Key changes to the software environment include:

  • Lmod is used to provide modules in place of environment modules.
  • Module versions should be specified when working with modules.
  • The PrgEnv-gnu programming environment is now the default.
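These changes can be seen in a typical interactive session. A minimal sketch using standard Lmod commands (the module names and versions are illustrative; check `module avail` for what is actually installed):

```shell
# List the modules currently available to load (Lmod)
module avail

# Search the whole module tree for a package, including
# modules hidden behind compiler or MPI hierarchies
module spider gromacs

# Load a module, specifying the version explicitly (recommended on Setonix)
module load gromacs/2022.2

# Swap the default GNU programming environment for the Cray one if needed
module swap PrgEnv-gnu PrgEnv-cray
```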

Refer to the Software Stack pages for more detail on using modules and containers.

For information specific to Setonix refer to the Software Environment section of the Setonix User Guide.

Installing and maintaining your software

The Setonix supercomputer has a different hardware architecture to previous supercomputing systems, and the compilers and libraries available may have changed or have newer versions. It is strongly recommended that project groups reinstall any necessary domain-specific software. This is also an opportunity for project groups to review the software in use and consider updating to recent versions, which typically contain newer features and improved performance.

Key changes to software installation and maintenance on Setonix include:

  • The new processor architecture has seen the Intel programming environment (PrgEnv-intel) replaced by an AMD programming environment (PrgEnv-aocc).
  • The GNU programming environment has newer versions of the gcc, g++ and gfortran compilers, and is the default environment on Setonix.
  • The Cray programming environment has newer versions of the Cray C/C++ and Fortran compilers.
  • The newer Cray C/C++ is now based on the Clang back end, and the command line options have changed accordingly.
  • Pawsey has adopted Spack for assisted software installation, which may also be useful for project groups to install their own software.
  • Pawsey has adopted SHPC to deploy some applications (particularly bioinformatics packages) as container modules, which may also be useful for some project groups.
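As a sketch of what installing your own software might look like under this model (the package names, versions and flags are illustrative; refer to the Pawsey Spack documentation for the supported workflow):

```shell
# Compile directly with the default GNU programming environment
# (cc/CC/ftn are the Cray compiler wrappers around gcc/g++/gfortran)
cc -O2 -o my_app my_app.c

# Or use Spack to build a package and its dependencies,
# then make it available in the current shell
spack install fftw@3.3.10
spack load fftw@3.3.10
```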

Refer to How to Install Software and SHPC (Singularity Registry HPC) in the Supercomputing Documentation for more detail.

For information specific to Setonix refer to the Compiling section of the Setonix User Guide. 

Submitting and monitoring jobs

Setonix uses Slurm, the same job scheduling system used on the previous generation of supercomputing systems. Previously, several specific types of computational use case were supported on Zeus rather than on the main petascale supercomputer, Magnus; these were often pre-processing and post-processing tasks. Such specialised use cases are now supported on Setonix alongside large-scale computational workloads.

Key changes

  • Jobs may share nodes, allowing jobs to request a portion of the cores and memory available on the node. 
  • Jobs can still specify exclusive node access where necessary.
  • A partition for longer running jobs is available.
  • Nodes with additional memory are available.
  • A partition for data transfer jobs is available.
  • Job dependencies can be used to combine data transfer and computational jobs to create automated workflows.
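The shared-node model changes how resources are requested in batch scripts. A minimal sketch (the account, partition name and resource figures are illustrative; check the Setonix User Guide for the actual partition names and limits):

```shell
#!/bin/bash -l
#SBATCH --account=project123   # your project code (illustrative)
#SBATCH --partition=work       # partition name may differ
#SBATCH --ntasks=32            # a quarter of a 128-core node
#SBATCH --mem=64G              # the matching share of the node's 256 GB
#SBATCH --time=01:00:00

# Jobs share nodes by default; uncomment the line below only
# when a whole node is genuinely required:
##SBATCH --exclusive

srun ./my_application
```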

For more information refer to Job Scheduling in the Supercomputing documentation.

For information specific to Setonix refer to the Running Jobs section and the Example Slurm Batch Scripts for Setonix on CPU Compute Nodes page of the Setonix User Guide.

Using data throughout the project lifecycle

When using Pawsey's supercomputing infrastructure, some project data may need to remain available for longer than the 30-day /scratch purge policy allows; for example, a reference dataset that is reused across many computational workflows.

On previous supercomputing systems, such as Magnus, Galaxy, and Zeus, the /group filesystem was used to provide this functionality.

For Setonix, this functionality is provided by the Acacia object storage system.

Key changes include:

  • Jobs should be submitted to the data mover nodes to stage existing project data from Acacia to /scratch if needed at the start of computational workflows.
  • Jobs should be submitted to the data mover nodes to store new project data from /scratch to Acacia if needed following computational jobs.
  • Job dependencies should be used to combine these data movement jobs with computational jobs to create end-to-end automated workflows.
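Chained together with job dependencies, these steps form an automated end-to-end workflow. A minimal sketch (the script names and the copy partition name are illustrative assumptions; the staging scripts themselves would use an S3-compatible client to talk to Acacia):

```shell
# Stage input data from Acacia to /scratch on the data mover nodes;
# --parsable makes sbatch print just the job ID
stage_id=$(sbatch --parsable --partition=copy stage_data.sh)

# Run the computation only if staging succeeded
compute_id=$(sbatch --parsable --dependency=afterok:$stage_id compute.sh)

# Store results back to Acacia once the computation finishes
sbatch --dependency=afterok:$compute_id --partition=copy store_data.sh
```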

For more information on using Acacia, refer to the Acacia Early Adopters - User Guide in the Data documentation.

For more information on job dependencies, refer to Example Workflows in the Supercomputing documentation.

Planning your migration

Consider the following steps when planning the migration of your computational workflow:

  1. Log in to Setonix for the first time.
  2. Transfer data on to the new filesystems:
    • Working data should be placed in the new /scratch filesystem.
    • Project data should be placed in the Acacia object store.
  3. Familiarise yourself with the modules and versions available on Setonix.
  4. Install any additional required domain-specific software for yourself or your project group using the /software filesystem.
  5. Prepare job scripts for each step of computational workflows by keeping templates in the /software filesystem, including:
    1. Staging of data from the Acacia object storage or external repositories using the data mover nodes
    2. Pre-processing or initial computational jobs using appropriate partitions
    3. Computationally significant jobs using appropriate partitions
    4. Post-processing or visualisation jobs using appropriate partitions
    5. Transfer of data products from /scratch to the Acacia object store or other external data repositories
  6. Submit workflows to the scheduler, either manually using scripts or through workflow managers.

Migration Training

A series of six migration training sessions is also available to assist with migrating:

  • Module 1 - Getting Started with Setonix
  • Module 2 - Supercomputing Filesystems
  • Module 3 - Using Modules and Containers
  • Module 4 - Installing and Maintaining Software
  • Module 5 - Submitting and Monitoring Jobs
  • Module 6 - Using Data throughout the Project Lifecycle

Registration details for these modules will be available on the Pawsey events page.

Video recordings are also available in a playlist on the Pawsey YouTube channel.
