TRAINING

Frontera leverages TACC's and its partners' innovative approaches to education, outreach, and training to encourage, educate, and develop the next generation of leadership-class computational science researchers.

For upcoming training opportunities, please visit the TACC Training page.


Annual Frontera User Meeting

The Frontera User Meeting is held in person at the Texas Advanced Computing Center in Austin, Texas. The meeting features sessions and lightning talks from Frontera users; roadmap sessions from project staff about Frontera and the Leadership-Class Computing Facility (LCCF) project; and feedback panels for users to provide input to the project.

The Frontera User Meeting is a valuable opportunity to participate in the community of scientists, engineers, and technologists who use and operate this unique national resource.



Archived Training

2021

2/25 MPL Object-Oriented Interface to MPI: MPL is a message passing library written in C++11 based on the Message Passing Interface (MPI) standard. MPL's focus lies on the MPI core message passing functions, ease of use, type safety, and elegance. A brief usage sketch appears after this list.
2/18 C++ for C Programmers: C++ can be considered a superset of C, but this view often leads to un-idiomatic C++ programming. In this short course, we show the new mechanisms of C++ and explain which C mechanisms, while still available, should no longer be used. A short contrast of C habits and idiomatic C++ also appears after this list.
2/10 Introduction to Linux: This tutorial covers the fundamentals of using Linux and Linux-like environments on HPC systems. Topics include an introduction to the shell and basic commands related to file and process management.
2/9 Introduction to Deep Learning at TACC: This tutorial is an introduction to deep learning concepts, tools, and methods, and how they can be used on TACC resources.
2/2 Introduction to Machine Learning at TACC: This tutorial is an introduction to machine learning concepts, tools, and methods, and how they can be used on TACC resources.
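
The MPL course above centers on MPL's C++ point-to-point interface. The following is a minimal sketch of what such a program can look like; it assumes MPL's header-only API (mpl/mpl.hpp, mpl::environment::comm_world) as documented by the library, so names should be checked against the MPL release installed on your system.

    #include <iostream>
    #include <mpl/mpl.hpp>   // header-only MPL library (assumed install location)

    int main() {
      // MPL initializes and finalizes MPI through its environment object;
      // no explicit MPI_Init/MPI_Finalize calls are needed.
      const mpl::communicator &comm_world{mpl::environment::comm_world()};
      if (comm_world.size() < 2)
        return 1;

      double x{0.0};
      if (comm_world.rank() == 0) {
        x = 3.14;
        comm_world.send(x, 1);   // type-safe send to rank 1, no MPI_Datatype argument
      } else if (comm_world.rank() == 1) {
        comm_world.recv(x, 0);   // matching receive from rank 0
        std::cout << "rank 1 received " << x << '\n';
      }
      return 0;
    }

Compared with raw MPI, the message type and size are deduced from the C++ argument, which is what the course description means by type safety.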
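
As a companion to the C++ for C Programmers description, the short sketch below contrasts a few common C habits with their idiomatic C++ replacements. It is an illustrative example only, not course material.

    #include <iostream>
    #include <string>
    #include <vector>

    int main() {
      // C habit: malloc/free plus manual size tracking.
      // Idiomatic C++: std::vector owns its memory and knows its size.
      std::vector<double> values;
      for (int i = 0; i < 10; ++i)
        values.push_back(0.5 * i);

      // C habit: char buffers with strcpy/strcat.
      // Idiomatic C++: std::string handles growth and concatenation.
      std::string label = "computed " + std::to_string(values.size()) + " values";

      // C habit: printf format strings.
      // Idiomatic C++: type-safe stream output.
      std::cout << label << ", last = " << values.back() << '\n';
      return 0;
    }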

2020

10/29 High Performance Computing on Frontera Day 3: On the last day of this 3-day training event, we cover parallel I/O, HPC tools, debugging, profiling, and best practices on TACC machines, i.e., Frontera and Stampede2, time permitting.
10/22 High Performance Computing on Frontera Day 2: On the second day of this 3-day training event, the focus shifts to MPI, again covering an introduction and some intermediate topics. We also discuss hybrid computing and the interplay between OpenMP and MPI within a single code; a minimal hybrid sketch appears after this list.
10/15 High Performance Computing on Frontera Day 1: During the first day of this 3-day training event, we focus on parallel programming with OpenMP. We provide an introduction to OpenMP in the morning and delve into advanced topics in the afternoon.
2/27 High Performance Computing on Frontera Day 3: During the third day of this 3-day training event, we cover advanced topics not addressed in the first two days. Additional details on specific content will be posted as they become available. Participants will need to bring their laptops to participate in the hands-on session.
2/20 High Performance Computing on Frontera Day 2: On the second day of this 3-day training event, the focus shifts to MPI, again covering an introduction and some advanced topics. We also discuss hybrid computing and the interplay between OpenMP and MPI within a single code. The day ends with a session on parallel I/O.
2/13 High Performance Computing on Frontera Day 1: During the first day of this 3-day training event, we focus on parallel programming with OpenMP. We provide an introduction to OpenMP in the morning and delve into advanced topics in the afternoon. Participants will need to bring their laptops to participate in the hands-on session.
2/6 Introduction to PETSc: This course discusses the basic PETSc objects and how they make up a PETSc code. Upon completion of this course, you should be able to independently develop scalable scientific simulation codes with the PETSc library. A minimal example appears after this list.
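
For the OpenMP and MPI days above, the hybrid model they describe can be summarized in a few lines: each MPI rank launches an OpenMP thread team inside the same executable. The sketch below is a generic illustration (not course material) using standard MPI and OpenMP calls.

    #include <mpi.h>
    #include <omp.h>
    #include <cstdio>

    int main(int argc, char **argv) {
      // Request a thread support level so OpenMP regions can run inside MPI ranks.
      int provided;
      MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

      int rank, size;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      // Each rank spawns its own OpenMP team; threads report their placement.
      #pragma omp parallel
      {
        std::printf("rank %d of %d, thread %d of %d\n",
                    rank, size, omp_get_thread_num(), omp_get_num_threads());
      }

      MPI_Finalize();
      return 0;
    }

Built with an MPI compiler wrapper and the compiler's OpenMP flag, launching a few ranks with several threads each makes the rank/thread interplay visible immediately.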
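
The PETSc course description refers to the basic PETSc objects (Vec, Mat, KSP, and so on). The fragment below is a minimal sketch of the Vec object using PETSc's C interface, which is also callable from C++; error checking (PetscCall/CHKERRQ) is omitted for brevity and should be added in real code.

    #include <petscvec.h>

    int main(int argc, char **argv) {
      Vec x;
      PetscReal norm;

      // Every PETSc program begins and ends with library (and MPI) setup/teardown.
      PetscInitialize(&argc, &argv, NULL, NULL);

      // Create a vector distributed over PETSC_COMM_WORLD; PETSc picks the local sizes.
      VecCreate(PETSC_COMM_WORLD, &x);
      VecSetSizes(x, PETSC_DECIDE, 100);
      VecSetFromOptions(x);

      VecSet(x, 1.0);              // fill with ones
      VecNorm(x, NORM_2, &norm);   // collective 2-norm across all ranks
      PetscPrintf(PETSC_COMM_WORLD, "||x|| = %g\n", (double)norm);

      VecDestroy(&x);
      PetscFinalize();
      return 0;
    }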

2019

11/17 Tools and Best Practices for Distributed Deep Learning on Supercomputers: This tutorial is a practical guide to running distributed deep learning across multiple compute nodes effectively. Deep Learning (DL) has emerged as an effective analysis method and has been adopted quickly across many scientific domains in recent years. Domain scientists are embracing DL both as a standalone data science method and as an effective approach to reducing dimensionality in traditional simulations.
11/7 High Performance Computing on Frontera Day 2: On the second day of this 2-day training event, the focus shifts to MPI, again covering an introduction and some advanced topics. We also discuss hybrid computing and the interplay between OpenMP and MPI within a single code. The day ends with a session on parallel I/O.
10/31 High Performance Computing on Frontera Day 1: During the first day of this 2-day training event, we focus on parallel programming with OpenMP. We provide an introduction to OpenMP in the morning and delve into advanced topics in the afternoon. Participants will need to bring their laptops to participate in the hands-on session.
9/13 C++ for C Programmers: C++ can be considered a superset of C, but this view often leads to un-idiomatic C++ programming. In this short course we show the new mechanisms of C++ and explain which C mechanisms, while still available, should no longer be used.
8/27 Getting Ready for Frontera: We invite you to join us at TACC for a full day of presentations and discussions focused on Frontera's outstanding capabilities. We will introduce the system and its new and innovative components, and will discuss how to optimally compile, launch, and execute scientific applications at large scale.
7/8 TACC INSTITUTE - Designing and Administering Large-scale Systems: Spend a week learning from TACC's expert and experienced systems administrators about the tools, techniques, and practices used at TACC to build out some of the largest and highest-performing clusters in the world.
3/28 Welcome to Frontera: A brief overview of the system plans and the broader project that surrounds it, the architectural design choices, and a discussion of the application community that will run on it.