Is the buffer encoding type (TF8, UNSIGNED8BIT, FLOAT) determined by the model's inputs and outputs?
Shouldn't we determine each input's type by querying getInputOutputBufferAttributes() and then set up whatever buffer type the model requires?
If so, why does the NativeCpp example code force the buffer type with a command-line option?
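To illustrate what I mean: instead of taking the buffer type from a command-line flag, the per-tensor type could drive the buffer setup. The sketch below is self-contained mock code, not the SDK's API: `ElementType` and `elementSizeFor` are hypothetical stand-ins for what getInputOutputBufferAttributes() would report for each tensor, and the element sizes shown are assumptions.

```cpp
#include <cstddef>

// Hypothetical stand-in for the encoding type a call like
// getInputOutputBufferAttributes(name)->getEncodingType() might return.
enum class ElementType { TF8, UNSIGNED8BIT, FLOAT };

// Choose the per-element buffer size from the queried type, rather
// than forcing one type for all tensors via a command-line option.
std::size_t elementSizeFor(ElementType t) {
    switch (t) {
        case ElementType::TF8:          return 1; // 8-bit quantized
        case ElementType::UNSIGNED8BIT: return 1; // raw 8-bit
        case ElementType::FLOAT:        return 4; // 32-bit float
    }
    return 0; // unreachable for the cases above
}
```

The point being that the switch would run once per input/output tensor, so a model mixing quantized and float tensors would still get correctly sized buffers.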
Appreciate any inputs.
Thanks,
Ram