FullyconnectedOperator

public class FullyconnectedOperator: ComputableOperator

The regular fully connected operator. The operator performs the 1D_dot(inputTensor.flatten, weights) + bias calculation on each input tensor.

Weight tensor layout

The weight is a 2D tensor with shape [n, m] where:

• n: the flattened dimension of an input tensor, i.e. the same value as an input tensor's count.
• m: the number of hidden nodes, the same value as numUnits.

Thus, each column stores the weights of the corresponding hidden unit.

Bias tensor layout

The bias is a 1D tensor with shape [m] where:

• m: the number of hidden nodes, the same value as numUnits.

Thus, each value in the tensor is the bias value of the corresponding hidden unit.
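As an illustrative sketch of these layouts (the Tensor and TensorShape initializers shown are assumptions; check Serrano's Tensor API for the exact signatures), an operator with inputDim of 4 and numUnits of 3 would pair with tensors shaped like this:

```swift
import Serrano

// Weight: shape [n, m] = [4, 3] — each of the 3 columns holds
// the 4 weights of one hidden unit.
let weight = Tensor(repeatingValue: 0.1,
                    tensorShape: TensorShape(dataType: .float, shape: [4, 3]))

// Bias: shape [m] = [3] — one bias value per hidden unit.
let bias = Tensor(repeatingValue: 0.0,
                  tensorShape: TensorShape(dataType: .float, shape: [3]))
```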

Input tensors auto flatten

For an input tensor with rank >= 2, the operator will automatically flatten the tensor and then do the calculation.

Multiple input tensors

If inputTensors contains more than one tensor, the operator applies the calculation to each input tensor independently and stores the results in the corresponding tensors of outputTensors.

Note

All input tensors should have the same count.

Bias enable choice

Bias can be disabled by setting biasEnabled to false.

Batch calculation

This operator itself does not explicitly support batch calculation, but users can use sliced tensors to achieve the same thing. Details can be found in Slice tensor and Batch calculation with operators.

• Operator label. Conforms to ComputableOperator

Declaration

Swift

public var operatorLabel: String
• This operator does not operate on GPU. Conforms to ComputableOperator

Declaration

Swift

public var metalKernelFuncLabel: String = "Fullyconnected"
• Conforms to ComputableOperator

Declaration

Swift

public var computationDelegate: OperatorCalculationDelegate?
• Conforms to ComputableOperator

Declaration

Swift

public var inputTensors: [Tensor]?
• Conforms to ComputableOperator

Declaration

Swift

public var outputTensors: [Tensor]?
• Whether to use bias. Default is true.

Declaration

Swift

public var biasEnabled: Bool = true
• Weight tensor.

Shape specific

The tensor should be with shape [inputDim, numUnits].

Note

If weight is nil during calculation, a fatalError will be raised.

Declaration

Swift

public var weight: Tensor?
• Bias tensor

Shape specific

The tensor should be with shape [numUnits].

Note

If bias is nil during calculation, a fatalError will be raised.

Declaration

Swift

public var bias: Tensor?
• Number of input units. Must be a positive integer.

Declaration

Swift

public var inputDim: Int = 1
• Number of hidden units. Must be a positive integer.

Declaration

Swift

public var numUnits: Int = 1
• If true, the operator will not check the upGrads's shape. This is used inside the framework to speed up computation in situations where we know it cannot be wrong, e.g. an auto-generated differentiation graph.

Declaration

Swift

public var disableUpGradShapeCheck: Bool = false
• If true, the operator will not call inputOutputTensorsCheck() before doing calculation. This is used inside the framework to speed up computation in situations where we know it cannot be wrong.

Declaration

Swift

public var disableInputOutputCheck: Bool = false
• Indicates whether this operator performs parameter updates.

Declaration

Swift

public var trainable: Bool = true
• The mapping type of this operator. OneToOne for this operator.

Declaration

Swift

public var mapType: OperatorMappingType
• The fully connected operator cannot do in-place calculation.

Declaration

Swift

public var inPlaceble: Bool = false
• Whether to disable using MPS (Metal Performance Shaders).

Declaration

Swift

public var disabledMPS: Bool = false
• Designated init

Declaration

Swift

public init(inputDim: Int, numUnits: Int,
operatorLabel: String = "FullyconnectedOperator",
inputTensors: [Tensor]? = nil, outputTensors: [Tensor]? = nil,
computationDelegate: OperatorCalculationDelegate? = nil,
weight: Tensor? = nil, bias: Tensor? = nil)

Parameters

 inputDim — number of input units
 numUnits — number of hidden units
 operatorLabel — operator label
 inputTensors — input tensors
 outputTensors — output tensors
 computationDelegate — computation delegate
 weight — weight tensor
 bias — bias tensor
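A minimal usage sketch of the designated init (the tensor construction shown is an assumption about Serrano's Tensor API):

```swift
import Serrano

// A fully connected layer mapping 128 flattened input values to 64 hidden units.
let fc = FullyconnectedOperator(inputDim: 128, numUnits: 64)

// Weight [inputDim, numUnits] and bias [numUnits] must be assigned
// before computing, otherwise a fatalError is raised.
fc.weight = Tensor(repeatingValue: 0.01,
                   tensorShape: TensorShape(dataType: .float, shape: [128, 64]))
fc.bias = Tensor(repeatingValue: 0.0,
                 tensorShape: TensorShape(dataType: .float, shape: [64]))
```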
• The output shape of FullyconnectedOperator is decided by numUnits and inputDim.

Declaration

Swift

public func outputShape(shapeArray shapes: [TensorShape]) -> [TensorShape]?

Parameters

 shapes — input shapes

Return Value

result shapes
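For example, one might verify shapes before allocating output tensors; a sketch, assuming the TensorShape initializer shown:

```swift
import Serrano

// An input of shape [4, 32] flattens to count 128, matching inputDim.
let fc = FullyconnectedOperator(inputDim: 128, numUnits: 64)
let inShape = TensorShape(dataType: .float, shape: [4, 32])
if let outShapes = fc.outputShape(shapeArray: [inShape]) {
    // Each returned shape is determined by numUnits.
    print(outShapes)
} else {
    // nil indicates the input shapes are invalid for this operator.
    print("invalid input shapes")
}
```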

• Check the validity of inputTensors, outputTensors, weight and bias.

Declaration

Swift

public func inputOutputTensorsCheck() -> (check: Bool, msg: String)

Return Value

check — whether the check passed. msg — error message.

• Compute synchronously.

Declaration

Swift

public func compute(_ computationMode: OperatorComputationMode = SerranoEngine.configuredEngine.defaultComputationMode)

Parameters

 computationMode — computation mode. If GPU is chosen but a GPU SerranoEngine has not been configured, the operator will use CPU to compute.

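A sketch of a synchronous run (the mode name .CPU and the tensor construction are assumptions about Serrano's API):

```swift
import Serrano

let fc = FullyconnectedOperator(inputDim: 4, numUnits: 3)
fc.weight = Tensor(repeatingValue: 0.1,
                   tensorShape: TensorShape(dataType: .float, shape: [4, 3]))
fc.bias = Tensor(repeatingValue: 0.0,
                 tensorShape: TensorShape(dataType: .float, shape: [3]))

// A rank-2 input is auto-flattened to count 4; the output holds numUnits values.
let input = Tensor(repeatingValue: 1.0,
                   tensorShape: TensorShape(dataType: .float, shape: [2, 2]))
let output = Tensor(repeatingValue: 0.0,
                    tensorShape: TensorShape(dataType: .float, shape: [3]))
fc.inputTensors = [input]
fc.outputTensors = [output]

fc.compute(.CPU)  // results are written into the output tensor
```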

• Compute asynchronously.

Declaration

Swift

public func computeAsync(_ computationMode: OperatorComputationMode = SerranoEngine.configuredEngine.defaultComputationMode)

Parameters

 computationMode — computation mode
• Calculate grads synchronously. All unary operators return grads tensors with the same number and shapes as the attribute inputTensors.

Declaration

Swift

public func gradCompute(_ computationMode: OperatorComputationMode) -> [String: DataSymbolSupportedDataType]

Parameters

 computationMode — computation mode

• Update parameters if possible.

Declaration

Swift

public func updateParams(grads: [Tensor], LR: Float)

Parameters

 grads — gradient tensors
 LR — learning rate

• Bind according to labels.

Note: if not all needed parameters can be bound, a fatalError will be raised.

Declaration

Swift

public func bindParamSymbols(_ symbols: [GraphSymbol])
• Attribute weight as a TensorSymbol. Attribute bias as a TensorSymbol.

Declaration

Swift

public func paramSymbols() -> [GraphSymbol]

Return Value

Array of GraphSymbol