Intel® Machine Learning Scaling Library 2018
A library providing an efficient implementation of communication patterns used in deep learning.
MLSL::ParameterSet Class Reference

A wrapper class for operation parameters. More...

#include <mlsl.hpp>

Public Member Functions

size_t GetGlobalKernelCount ()
size_t GetGlobalKernelOffset ()
size_t GetLocalKernelCount ()
size_t GetOwnedKernelCount ()
size_t GetOwnedKernelOffset ()
DataType GetDataType ()
size_t GetKernelSize ()
bool IsDistributedUpdate ()
void StartGradientComm (void *buf)
void StartIncrementComm (void *buf)
void * WaitGradientComm ()
void * TestGradientComm (bool *isCompleted)
void * WaitIncrementComm ()

Detailed Description

A wrapper class for operation parameters.

Holds information about the shape of the learnable parameters and provides the communication operations associated with them.
Weights and biases should be represented by separate ParameterSet instances.
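The typical per-iteration gradient exchange can be sketched as follows. This is an illustrative sketch, not verbatim MLSL sample code: `ps` is assumed to be a `ParameterSet*` obtained from an Operation, `grad` a buffer holding the locally computed gradient, and `ApplyUpdate` a hypothetical SGD step of the caller's own training code.

```
// Sketch: exchange gradients for one parameter set, then update.
ps->StartGradientComm(grad);            // non-blocking: kick off the exchange
/* ... compute gradients for other layers here to overlap ... */
void* agg = ps->WaitGradientComm();     // blocks until aggregation completes
// ApplyUpdate is hypothetical; the aggregated buffer covers the owned kernels
ApplyUpdate(agg, ps->GetOwnedKernelCount() * ps->GetKernelSize());
```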

Member Function Documentation

DataType MLSL::ParameterSet::GetDataType ( )
Returns
The data type of the kernel elements.
size_t MLSL::ParameterSet::GetGlobalKernelCount ( )
Returns
The global count of kernels.
size_t MLSL::ParameterSet::GetGlobalKernelOffset ( )
Returns
The offset of the local portion of kernels in the global count of kernels.
size_t MLSL::ParameterSet::GetKernelSize ( )
Returns
The size of a kernel in DataType elements.
size_t MLSL::ParameterSet::GetLocalKernelCount ( )
Returns
The local count of kernels (the length of the local portion being processed by this process).
size_t MLSL::ParameterSet::GetOwnedKernelCount ( )
Returns
The count of kernels on which this process performs synchronous Stochastic Gradient Descent (SGD).
Differs from local kernel count only when distributedUpdate = true.
size_t MLSL::ParameterSet::GetOwnedKernelOffset ( )
Returns
The offset of the owned portion of kernels in the local count of kernels.
bool MLSL::ParameterSet::IsDistributedUpdate ( )
Returns
True if the exchange for this parameter set is split into two communications (ReduceScatter for gradients followed by AllGather for increments) rather than a single AllReduce of gradients; false otherwise.
void MLSL::ParameterSet::StartGradientComm ( void *  buf)

Starts the non-blocking exchange of the gradient with respect to parameters.

Parameters
buf: the buffer containing the gradient
void MLSL::ParameterSet::StartIncrementComm ( void *  buf)

Starts the non-blocking exchange of the parameter increment. Applicable only when distributedUpdate = true.

Parameters
buf: the buffer containing the increment
void* MLSL::ParameterSet::TestGradientComm ( bool *  isCompleted)

Tests for completion of the exchange of gradients with respect to parameters.

Parameters
isCompleted: the completion status of the request; true if the request is completed, false otherwise
Returns
A pointer to the buffer containing the aggregated gradients with respect to parameters if request is completed, NULL otherwise
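A polling loop lets the caller overlap its own computation with the exchange. This is a sketch: `ps` and `grad` are assumed from the surrounding training code, and `DoOtherWork` stands in for any unrelated computation.

```
ps->StartGradientComm(grad);             // begin the non-blocking exchange
bool isCompleted = false;
void* agg = nullptr;
while (!isCompleted) {
    DoOtherWork();                       // hypothetical overlapped computation
    agg = ps->TestGradientComm(&isCompleted);
}
// agg now points to the aggregated gradients with respect to parameters.
```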
void* MLSL::ParameterSet::WaitGradientComm ( )

Waits for completion of the exchange of gradients with respect to parameters.

Returns
A pointer to the buffer containing the aggregated gradients with respect to parameters.
void* MLSL::ParameterSet::WaitIncrementComm ( )

Waits for completion of the exchange of the parameter increment.

Returns
A pointer to the buffer containing the increment obtained with synchronous SGD.
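When IsDistributedUpdate() returns true, the two communications combine as follows. This is a sketch: `ps` is assumed to be the `ParameterSet*`, `grad` the local gradient buffer, and `ComputeIncrement` a hypothetical SGD step of the caller's code that produces the parameter increment for the owned kernels.

```
ps->StartGradientComm(grad);               // phase 1: ReduceScatter of gradients
void* ownedGrad = ps->WaitGradientComm();  // this process's owned slice only
// Run SGD on the owned kernels; ComputeIncrement (ours) returns a buffer of
// GetOwnedKernelCount() * GetKernelSize() increment elements.
void* inc = ComputeIncrement(ownedGrad,
                             ps->GetOwnedKernelCount() * ps->GetKernelSize());
ps->StartIncrementComm(inc);               // phase 2: AllGather of increments
void* allInc = ps->WaitIncrementComm();    // increments for all local kernels
```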

The documentation for this class was generated from the following file: mlsl.hpp