Python – Pass additional parameters in the calling function of the custom keras layer


I created a custom Keras layer with the aim of manually changing the activations of the previous layer during inference. Below is a basic layer that simply multiplies the activations by a number.

import numpy as np
from keras import backend as K
from keras.layers import Layer
import tensorflow as tf

class myLayer(Layer):

    def __init__(self, n=None, **kwargs):
        self.n = n
        super(myLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        self.output_dim = input_shape[1]
        super(myLayer, self).build(input_shape)

    def call(self, inputs):
        changed = tf.multiply(inputs, self.n)

        forTest  = changed
        forTrain = inputs

        return K.in_train_phase(forTrain, forTest)

    def compute_output_shape(self, input_shape):
        return (input_shape[0], self.output_dim)
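For intuition, at test time the layer just scales the previous layer's activations elementwise. A small NumPy sketch of that step (the activation values are made up):

```python
import numpy as np

# hypothetical activations from the previous layer (one sample, three units)
activations = np.array([[0.2, 0.5, 1.0]])
n = 3

# what tf.multiply(inputs, self.n) computes in the test phase
changed = activations * n
```

During training, K.in_train_phase returns the untouched activations instead.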

When I use the IRIS dataset like this, it works fine

model = Sequential()
model.add(Dense(units, input_shape=(5,)))
model.add(Activation('relu'))
model.add(myLayer(n=3))
model.add(Dense(units))
model.add(Activation('relu'))
model.add(Dense(3))
model.add(Activation('softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])
model.summary()

But now I want to move the "n" from __init__ to the call function, so that I can apply different n values when evaluating the model after training. The idea is to replace n with a placeholder that can be initialized with some value before calling evaluate. I'm not sure how to achieve this. What is the right way?
Thanks

Solution

You should start from the Concatenate layer and make your layer work the same way.

Layers like that take multiple inputs, which (along with their input shapes) are passed in as a list.

Note the validation check in call (the same idea applies to build and compute_output_shape):

def call(self, inputs):
    if not isinstance(inputs, list):
        raise ValueError('This layer should be called on a list of inputs.')

    mainInput = inputs[0]
    nInput = inputs[1]

    changed = tf.multiply(mainInput, nInput)
    # I suggest using an equivalent function from K instead of tf here,
    # in case you ever want to try Theano or another backend later.
    # If n is a scalar, then just "changed = nInput * mainInput" is ok.

    # ... the rest of the code ...
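As a rough sketch of what the complete two-input layer could look like (an assumption on my part: this uses modern tf.keras and the call-time training argument instead of K.in_train_phase, and the class name MyLayer is mine):

```python
import tensorflow as tf
from tensorflow import keras

class MyLayer(keras.layers.Layer):
    """Multiplies the main input by the per-sample n input, but only at test time."""

    def call(self, inputs, training=None):
        if not isinstance(inputs, (list, tuple)):
            raise ValueError('This layer should be called on a list of inputs.')
        mainInput, nInput = inputs
        changed = mainInput * nInput  # broadcasting: nInput has shape (batch, 1)
        if training:
            return mainInput   # leave activations untouched during training
        return changed         # scaled activations at inference
```

Because n arrives as a regular input tensor, no weights or build logic are needed for this minimal version.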

Then you call this layer passing it a list. For this, I strongly recommend staying away from the Sequential model; it is too limiting.

from keras.models import Model
from keras.layers import Input, Dense

inputTensor = Input((5,))  # the original input (from your input_shape)

# This is just a suggestion, with n as a manually created input;
# you can figure out your own ways of calculating n later.
nInput = Input((1,))
# old answer: nInput = Input(tensor=K.variable([n]))

# creating the graph
out = Dense(units, activation='relu')(inputTensor)

# your layer here uses the output of the dense layer and the nInput
out = myLayer()([out, nInput])
# Here you will have to handle n with the same number of samples as x.
# You can use `inputs[1][0,0]` inside the layer.

out = Dense(units, activation='relu')(out)
out = Dense(3, activation='softmax')(out)

# create the model with two inputs and one output
model = Model([inputTensor, nInput], out)
# nInput is now part of the model's inputs

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])

With the old answer, using Input(tensor=...), the model does not require you to pass the second input to fit and predict, as it usually would.

But with the new option, using Input(shape=...) it will require two inputs, so:

nArray = np.full((X_train.shape[0], 1), n)
model.fit([X_train, nArray], Y_train, ....)
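Concretely, nArray just repeats the chosen n once per sample (the shapes below are hypothetical, IRIS-sized):

```python
import numpy as np

n = 3.0
X_train = np.random.rand(150, 5)  # hypothetical data: 150 samples, 5 features

# one copy of n per training sample, used as the model's second input
nArray = np.full((X_train.shape[0], 1), n)
```

After training, you can evaluate with a different n by building a new array the same way, e.g. np.full((X_test.shape[0], 1), 2.0), and passing [X_test, nTest] to model.evaluate.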

Unfortunately, I can't get this to work with an n that has only one element; it must have exactly the same number of samples as X (a Keras limitation).
