Replies: 1 comment 1 reply
-
This section of the documentation might be exactly what you are looking for.
-
I have trained and exported my model like so:
Then I want to run inference on the GPU like so:
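(The code blocks from the original post were not captured in this export. As a rough sketch only — the function name, the `[0, 1]` normalisation, and the `(256, 256)` target size are assumptions, not the poster's actual code — image preparation before a `predict()` call often looks something like this:)

```python
import numpy as np

def prepare_image(img: np.ndarray, size=(256, 256)) -> np.ndarray:
    """Resize (nearest-neighbour), scale to [0, 1], and add a batch axis.

    img: H x W x C uint8 array. A real pipeline would usually resize with
    PIL or OpenCV; plain index-based nearest-neighbour keeps this
    self-contained.
    """
    h, w = img.shape[:2]
    rows = np.arange(size[0]) * h // size[0]   # source row for each output row
    cols = np.arange(size[1]) * w // size[1]   # source column for each output column
    resized = img[rows][:, cols]               # (size[0], size[1], C)
    scaled = resized.astype("float32") / 255.0  # must match training normalisation
    return scaled[np.newaxis, ...]             # (1, size[0], size[1], C)

# e.g. batch = prepare_image(raw_image); preds = model.predict(batch)
```

Whatever the exact code, the key point is that the normalisation applied here must match what the model saw during training.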
First, is this the correct way to prepare an image for inference?
And do I have to resize the image to (256, 256) as well before I call `predict()`, or is this done automatically by the model?