Python – Tensorflow – Avoid tensor size limitations


Tensorflow – Avoid tensor size limitations

I’m working on an implementation of the FCN-32 network described in the paper by Long, Shelhamer, et al., but I’ve hit a hurdle with upsampling. To upsample back to the original image size, other implementations use a conv2d_transpose layer with 64×64 bilinear filters. This is all fine until you start using a large number of classes.

For any number of classes greater than roughly 375, the filters variable in the transpose layer exceeds 2 GB (64 × 64 × 375 × 375 float32 values ≈ 2.3 GB), so TensorFlow fails with:

ValueError: Cannot create a tensor proto whose content is larger than 2GB.

Is there any way around this size limit? My first thought was to generate the tensor on the fly, but I couldn’t find any documentation on how to create such a construct, or whether one even exists.

Solution

You can split the output classes across multiple operations and concatenate them at the end.

Backprop will work normally through the concat operation. It should be as simple as creating two conv2d_transpose operations, each handling half the classes, concatenating the results appropriately, and feeding the loss function from there.

Creating more than two conv2d_transpose operations, as needed, works just as well; see the sketch below.
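
The answer itself doesn’t include code, so the following is a minimal sketch of the idea, assuming the standard FCN-style diagonal bilinear initialization, NHWC layout, and TF1-style tf.nn.conv2d_transpose. The names make_bilinear_weights, chunked_upsample, and n_chunks are illustrative, not from the original answer.

```python
import numpy as np
import tensorflow as tf


def make_bilinear_weights(kernel_size, channels):
    """Build a [k, k, channels, channels] bilinear upsampling filter.

    Each output channel is fed only by the matching input channel
    (the filter is diagonal across the channel axes), which is what
    makes the per-chunk split equivalent to one big transpose conv.
    """
    factor = (kernel_size + 1) // 2
    center = factor - 1 if kernel_size % 2 == 1 else factor - 0.5
    og = np.ogrid[:kernel_size, :kernel_size]
    kernel_2d = ((1 - abs(og[0] - center) / factor) *
                 (1 - abs(og[1] - center) / factor))
    weights = np.zeros((kernel_size, kernel_size, channels, channels),
                       dtype=np.float32)
    for c in range(channels):
        weights[:, :, c, c] = kernel_2d
    return weights


def chunked_upsample(logits, num_classes, n_chunks, stride=32, kernel_size=64):
    """Upsample logits ([N, h, w, num_classes]) by `stride`, using
    n_chunks smaller conv2d_transpose ops instead of one huge one."""
    assert num_classes % n_chunks == 0
    chunk = num_classes // n_chunks
    # Each filter is now k*k*chunk*chunk instead of k*k*C*C, i.e.
    # 1/n_chunks of the memory, keeping the constant under 2 GB.
    # Because the bilinear filter is diagonal, the same constant can
    # be reused for every chunk.
    filt = tf.constant(make_bilinear_weights(kernel_size, chunk))
    pieces = tf.split(logits, n_chunks, axis=3)
    shape = tf.shape(logits)
    out_shape = tf.stack(
        [shape[0], shape[1] * stride, shape[2] * stride, chunk])
    ups = [tf.nn.conv2d_transpose(p, filt, out_shape,
                                  strides=[1, stride, stride, 1],
                                  padding='SAME')
           for p in pieces]
    # Gradients flow through concat normally, so the loss can be
    # computed on the reassembled tensor as usual.
    return tf.concat(ups, axis=3)
```

Since bilinear upsampling maps class i only to class i, splitting along the channel axis and concatenating is exactly equivalent to the single large transpose convolution. With, say, 400 classes and n_chunks = 2, the filter constant is 64 × 64 × 200 × 200 × 4 bytes ≈ 0.66 GB, comfortably under the 2 GB limit.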

With that in mind, I’m confident it will work. Let me know if you have questions and I will update the answer.
