MPSNNDefaultPadding(3) [mojave man page]

MPSNNDefaultPadding(3)					 MetalPerformanceShaders.framework				    MPSNNDefaultPadding(3)

NAME
       MPSNNDefaultPadding

SYNOPSIS
       #import <MPSNeuralNetworkTypes.h>

       Inherits NSObject, and <MPSNNPadding>.

   Instance Methods
       (NSString *__nonnull) - label

   Class Methods
       (instancetype __nonnull) + paddingWithMethod:
       (instancetype __nonnull) + paddingForTensorflowAveragePooling
       (instancetype __nonnull) + paddingForTensorflowAveragePoolingValidOnly

Method Documentation
   - (NSString * __nonnull) label
       Human readable description of what the padding policy does.

   + (instancetype __nonnull) paddingForTensorflowAveragePooling
       A padding policy that attempts to reproduce TensorFlow behavior for average pooling.

       Most TensorFlow padding is covered by the standard MPSNNPaddingMethod encodings. You can use
       +paddingWithMethod: to get quick access to MPSNNPadding objects when default filter behavior
       isn't enough. (It often is.) However, the edging for max pooling in TensorFlow is a bit
       unusual. This padding method attempts to reproduce TensorFlow padding for average pooling.

       In addition to setting MPSNNPaddingMethodSizeSame | MPSNNPaddingMethodAlignCentered |
       MPSNNPaddingMethodAddRemainderToBottomRight, it also configures the filter to run with
       MPSImageEdgeModeClamp, which (as a special case for average pooling only) normalizes the sum
       of contributing samples to the area of valid contributing pixels only.

           // Sample implementation for the tensorflowPoolingPaddingPolicy returned
           -(MPSNNPaddingMethod) paddingMethod
           {
               return MPSNNPaddingMethodCustom | MPSNNPaddingMethodSizeSame;
           }

           -(MPSImageDescriptor * __nonnull) destinationImageDescriptorForSourceImages: (NSArray <MPSImage *> *__nonnull) sourceImages
                                                sourceStates: (NSArray <MPSState *> * __nullable) sourceStates
                                                   forKernel: (MPSKernel * __nonnull) kernel
                                         suggestedDescriptor: (MPSImageDescriptor * __nonnull) inDescriptor
           {
               ((MPSCNNKernel *)kernel).edgeMode = MPSImageEdgeModeClamp;
               return inDescriptor;
           }

   + (instancetype __nonnull) paddingForTensorflowAveragePoolingValidOnly
       Typical pooling padding policy for valid only mode.

   + (instancetype __nonnull) paddingWithMethod: (MPSNNPaddingMethod) method
       Fetch a well known object that implements a non-custom padding method. For custom padding
       methods, you will need to implement an object that conforms to the full MPSNNPadding
       protocol, including NSSecureCoding.

       Parameters:
           method  A MPSNNPaddingMethod

       Returns:
           An object that implements <MPSNNPadding> for use with MPSNNGraphNodes.

Author
       Generated automatically by Doxygen for MetalPerformanceShaders.framework from the source code.

Version MetalPerformanceShaders-100                        Thu Feb 8 2018                        MPSNNDefaultPadding(3)
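The padding objects returned by the class methods above are intended for use with MPSNNGraph
filter nodes, typically via the node's paddingPolicy property. The following is a minimal usage
sketch; the pooling node, its kernel and stride sizes, and the variable names are illustrative
assumptions rather than anything specified by this man page.

    #import <MetalPerformanceShaders/MetalPerformanceShaders.h>

    // Minimal sketch: attach a stock padding policy to an average pooling graph node.
    static MPSCNNPoolingAverageNode *MakePoolingNode(MPSNNImageNode *input)
    {
        MPSCNNPoolingAverageNode *poolNode =
            [[MPSCNNPoolingAverageNode alloc] initWithSource: input
                                                 kernelWidth: 2
                                                kernelHeight: 2
                                             strideInPixelsX: 2
                                             strideInPixelsY: 2];

        // A well known, non-custom padding method fetched via +paddingWithMethod: ...
        poolNode.paddingPolicy =
            [MPSNNDefaultPadding paddingWithMethod: MPSNNPaddingMethodSizeSame |
                                                    MPSNNPaddingMethodAlignCentered];

        // ... or the TensorFlow-style average pooling policy described above.
        poolNode.paddingPolicy = [MPSNNDefaultPadding paddingForTensorflowAveragePooling];

        return poolNode;
    }

paddingWithMethod: covers the standard MPSNNPaddingMethod encodings, while
paddingForTensorflowAveragePooling additionally switches the filter to MPSImageEdgeModeClamp as
noted above.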

Related man page

MPSCNNBatchNormalizationStatistics(3)			 MetalPerformanceShaders.framework		     MPSCNNBatchNormalizationStatistics(3)

NAME
       MPSCNNBatchNormalizationStatistics

SYNOPSIS
       #import <MPSCNNBatchNormalization.h>

       Inherits MPSCNNKernel.

   Instance Methods
       (nonnull instancetype) - initWithDevice:
       (nullable instancetype) - initWithCoder:device:
       (void) - encodeBatchToCommandBuffer:sourceImages:batchNormalizationState:
       (void) - encodeBatchToCommandBuffer:sourceImages:destinationImages:
       (void) - encodeToCommandBuffer:sourceImage:destinationImage:
       (MPSImage *__nonnull) - encodeToCommandBuffer:sourceImage:
       (MPSImageBatch *__nonnull) - encodeBatchToCommandBuffer:sourceImages:

   Additional Inherited Members

Detailed Description
       This depends on Metal.framework.

       MPSCNNBatchNormalizationStatistics updates a MPSCNNBatchNormalizationState with the batch
       statistics necessary to perform a batch normalization. MPSCNNBatchNormalizationStatistics
       may be executed multiple times with multiple images to accumulate all the statistics
       necessary to perform a batch normalization as described in
       https://arxiv.org/pdf/1502.03167v3.pdf.

Method Documentation
   - (MPSImageBatch * __nonnull) encodeBatchToCommandBuffer: (nonnull id< MTLCommandBuffer >) commandBuffer(MPSImageBatch *__nonnull) sourceImages
       Encode a MPSCNNKernel into a command buffer. Create a texture to hold the result and return
       it. In the first iteration of this method, encodeToCommandBuffer:sourceImage:destinationImage:,
       some work was left for the developer to do in the form of correctly setting the offset
       property and sizing the result buffer. With the introduction of the padding policy (see
       padding property) the filter can do this work itself. If you would like to have some input
       into what sort of MPSImage is used (e.g. temporary vs. regular), what size it is, or where
       it is allocated, you may set the destinationImageAllocator to allocate the image yourself.

       This method uses the MPSNNPadding padding property to figure out how to size the result
       image and to set the offset property. See discussion in MPSNeuralNetworkTypes.h. All images
       in a batch must have MPSImage.numberOfImages = 1.

       Parameters:
           commandBuffer  The command buffer
           sourceImages   The MPSImages to use as the source images for the filter.

       Returns:
           An array of MPSImages or MPSTemporaryImages allocated per the destinationImageAllocator,
           containing the output of the graph. The offset property will be adjusted to reflect the
           offset used during the encode. The returned images will be automatically released when
           the command buffer completes. If you want to keep them around for longer, retain the
           images.

       Reimplemented from MPSCNNKernel.

   - (void) encodeBatchToCommandBuffer: (__nonnull id< MTLCommandBuffer >) commandBuffer(MPSImageBatch *__nonnull) sourceImages(MPSCNNBatchNormalizationState *__nonnull) batchNormalizationState
       Encode this operation to a command buffer.

       Parameters:
           commandBuffer            The command buffer.
           sourceImages             An MPSImageBatch containing the source images.
           batchNormalizationState  A valid MPSCNNBatchNormalizationState object which will be
                                    updated with the image batch statistics.
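       Because the kernel may be encoded repeatedly to accumulate statistics across several image
       batches (see Detailed Description above), a typical call pattern looks roughly like the
       sketch below. The device, command queue, image batches, and a previously created
       MPSCNNBatchNormalizationState are assumed to exist; their names are illustrative, not part
       of this man page.

           #import <MetalPerformanceShaders/MetalPerformanceShaders.h>

           // Minimal sketch: fold the statistics of several image batches into one
           // MPSCNNBatchNormalizationState. All parameter names are assumptions.
           static void AccumulateBatchStatistics(id<MTLDevice> device,
                                                 id<MTLCommandQueue> queue,
                                                 NSArray<MPSImageBatch *> *allBatches,
                                                 MPSCNNBatchNormalizationState *bnState)
           {
               MPSCNNBatchNormalizationStatistics *statistics =
                   [[MPSCNNBatchNormalizationStatistics alloc] initWithDevice: device];

               id<MTLCommandBuffer> commandBuffer = [queue commandBuffer];
               for (MPSImageBatch *batch in allBatches) {
                   // Each encode accumulates this batch's contribution into bnState.
                   [statistics encodeBatchToCommandBuffer: commandBuffer
                                             sourceImages: batch
                                  batchNormalizationState: bnState];
               }
               [commandBuffer commit];
               [commandBuffer waitUntilCompleted];
           }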
   - (void) encodeBatchToCommandBuffer: (__nonnull id< MTLCommandBuffer >) commandBuffer(MPSImageBatch *__nonnull) sourceImages(MPSImageBatch *__nonnull) destinationImages

   - (MPSImage * __nonnull) encodeToCommandBuffer: (__nonnull id< MTLCommandBuffer >) commandBuffer(MPSImage *__nonnull) sourceImage

   - (void) encodeToCommandBuffer: (__nonnull id< MTLCommandBuffer >) commandBuffer(MPSImage *__nonnull) sourceImage(MPSImage *__nonnull) destinationImage

   - (nullable instancetype) initWithCoder: (NSCoder *__nonnull) aDecoder(nonnull id< MTLDevice >) device
       NSSecureCoding compatibility. While the standard NSSecureCoding/NSCoding method
       -initWithCoder: should work, the file can't know which device your data is allocated on, so
       we have to guess and may guess incorrectly. To avoid that problem, use initWithCoder:device:
       instead.

       Parameters:
           aDecoder  The NSCoder subclass with your serialized MPSKernel
           device    The MTLDevice on which to make the MPSKernel

       Returns:
           A new MPSCNNBatchNormalizationStatistics object, or nil if failure.

       Reimplemented from MPSCNNKernel.

   - (nonnull instancetype) initWithDevice: (nonnull id< MTLDevice >) device
       Initialize this kernel on a device.

       Parameters:
           device  The MTLDevice on which to initialize the kernel.

       Reimplemented from MPSCNNKernel.

Author
       Generated automatically by Doxygen for MetalPerformanceShaders.framework from the source code.

Version MetalPerformanceShaders-100                Thu Feb 8 2018                MPSCNNBatchNormalizationStatistics(3)
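For the -initWithCoder:device: path documented above, a decoding sketch might look like the
following; the archived data and the helper name are assumptions, and only the device-aware
initializer itself comes from this page.

    #import <Foundation/Foundation.h>
    #import <MetalPerformanceShaders/MetalPerformanceShaders.h>

    // Minimal sketch of device-aware decoding. `archivedKernel` is assumed to hold data
    // previously produced with NSKeyedArchiver; the helper name is hypothetical.
    static MPSCNNBatchNormalizationStatistics *
    UnarchiveStatisticsKernel(NSData *archivedKernel, id<MTLDevice> device)
    {
        NSKeyedUnarchiver *unarchiver =
            [[NSKeyedUnarchiver alloc] initForReadingFromData: archivedKernel error: NULL];
        unarchiver.requiresSecureCoding = YES;

        // Passing the device explicitly avoids the guesswork that a plain
        // -initWithCoder: call would need, as the man page notes.
        MPSCNNBatchNormalizationStatistics *kernel =
            [[MPSCNNBatchNormalizationStatistics alloc] initWithCoder: unarchiver
                                                               device: device];
        [unarchiver finishDecoding];
        return kernel;   // May be nil on failure.
    }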