I can't find a reshape method like Caffe's net.blobs['data'].reshape in the NeuralNetwork class, so I can't change the input shape when running a model. This means FCN models don't work. What can I do to resolve this problem? Thx!
I can't find a reshape method like Caffe's net.blobs['data'].reshape in class NeuralNetwork
Posted: Wed, 2017-10-25 20:52
Hi. Thanks for checking out SNPE, and apologies for the slow response. I'm not sure I understand exactly what problem you are having. SNPE does support reshape layers from Caffe. Do you have a model that contains a reshape layer that will not convert via the snpe-caffe-to-dlc converter? If this is your issue, could you attach the section of the Caffe prototxt that has the reshape layer, or attach the error message the converter produces when you try to convert the model? If this is not the issue, could you provide more details on exactly what you are doing that does not work?
Thanks.
Thanks for your reply. I don't have a reshape layer in my Caffe model, and I have converted the Caffe model to a DLC file successfully.
Actually, my model is an FCN (Fully Convolutional Network). The input size of an FCN is arbitrary. This is the beginning of my prototxt:
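(The original snippet was not preserved in this thread. As an assumption based on the 22×22 size mentioned below, a typical fixed-size data definition at the top of such a prototxt would look like this:)

```
input: "data"
input_dim: 1    # batch size
input_dim: 3    # channels
input_dim: 22   # height, fixed at conversion time
input_dim: 22   # width, fixed at conversion time
```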
When I use Caffe, I reshape the model with "net.blobs['data'].reshape" whenever the input is not 22×22, so I can use input data of any size.
But when I use SNPE with the DLC file, I can only use input of size 22×22; I couldn't find a method to reshape the model.
Did I describe the problem clearly? Please let me know. Thank you very much.
Thanks. I understand what you're asking now.
Unfortunately, I don't have a solution for you. At this point, SNPE only supports a fixed input size (defined at model creation/conversion time) that cannot be changed at runtime. Your only option might be to write your own image scaling code that scales your variable-size input image to the size your network expects, but I don't know if this will work for your particular use case.
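As a rough sketch of that workaround, here is a minimal nearest-neighbour rescale in plain NumPy (the 22×22 target comes from this thread; the function name and input sizes are illustrative, and in practice you might prefer a library resizer with proper interpolation):

```python
import numpy as np

def rescale_nearest(image, out_h, out_w):
    """Nearest-neighbour resize of an HxWxC image to out_h x out_w."""
    in_h, in_w = image.shape[:2]
    # Map each output pixel back to its nearest source pixel.
    rows = np.arange(out_h) * in_h // out_h
    cols = np.arange(out_w) * in_w // out_w
    return image[rows[:, None], cols]

# Scale an arbitrary-sized image down to the 22x22 the network expects
# before feeding it to SNPE.
img = np.random.rand(48, 64, 3).astype(np.float32)
scaled = rescale_nearest(img, 22, 22)
```

Whether this is acceptable depends on the use case; for an FCN whose whole point is dense prediction at native resolution, rescaling the input also rescales (and degrades) the output map.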
I do understand that a variety of networks (e.g. fully convolutional ones) should be able to easily support variable input sizes regardless of the size used at model creation/conversion. We have gotten similar requests from other users. While it is not our policy to discuss our roadmap and future feature timelines, this is a feature we are actively investigating.
Let me ask you a question about this feature: if we provided the ability to set the input size at network initialization (regardless of the size used at model creation/conversion), would you need to vary the input size at each inference, or would it be acceptable to set the input size once, re-initializing SNPE whenever you need to change it?
Thanks.
Thank you for your reply.
If you provided the ability to set the input size at network initialization, I think I would need to vary the input size at each inference. For each input image, I infer several times (once per size), and re-initializing SNPE reloads the model and costs time. If I had to re-initialize SNPE every time, performance would be poor.
Thanks.
The ability to change the input size per run is very important from a performance point of view. It would be prohibitively expensive to re-initialize the network each time the input dimensions change. For performance reasons, it would be great if the SNPE API allowed changing the input size without re-initializing the network.
Thanks!
Thanks for the feedback. It's helpful to know what people want as we develop new features.
Given how trivial it is to reshape an input image, network input resizing seems like a low-priority feature.
Rex
I also need this feature. When will it be supported?
Hi friend, have you found a solution to this issue?
I have this requirement too.
Is there any update? Thanks~