Optimum documentation
This page documents v1.6.0; a newer version, v1.27.0, is available.
DistributedRunner
class optimum.habana.distributed.DistributedRunner
(command_list=[], world_size=1, use_mpi=False, use_deepspeed=False, use_env=False, map_by='socket', multi_hls=False)
Sets up the training hardware configuration and runs distributed training commands.
Depending on the arguments passed to the constructor, the runner prepares one of the following configurations:
- single-card setup,
- single-node multi-card setup for mpirun,
- single-node multi-card setup for DeepSpeed,
- multi-node setup for mpirun.
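To illustrate what a single-node multi-card mpirun configuration involves, the sketch below assembles an mpirun invocation by hand. It is a simplified illustration, not the library's actual output: the helper name `build_mpirun_command`, the training script `train.py`, and the exact set of mpirun flags are assumptions; `DistributedRunner` may emit different options.

```python
import shlex

def build_mpirun_command(command: str, world_size: int = 8, map_by: str = "socket") -> list[str]:
    """Assemble an mpirun invocation for a single-node multi-card run.

    A hypothetical sketch of the kind of command DistributedRunner builds;
    the flags below are assumptions, not the library's verified output.
    """
    return [
        "mpirun",
        "-n", str(world_size),   # one process per card
        "--bind-to", "core",
        "--map-by", map_by,      # mirrors the `map_by` constructor argument
        "--allow-run-as-root",
        *shlex.split(command),   # the user-supplied training command
    ]

cmd = build_mpirun_command("python train.py --epochs 3", world_size=8)
print(" ".join(cmd))
```

In practice you would not build the command yourself: a typical call constructs `DistributedRunner` with `command_list` and `world_size` (for example with `use_mpi=True` for an mpirun launch) and then invokes its `run` method on the Gaudi machine.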