Set Up RunPod for NLP Inference
I'm seeking an expert to help me configure a RunPod instance. I have a Docker pod and a pre-trained 100 GB model stored outside the container, and I want to run inference against that model. The problem is that the container cannot access the model files on the host. (Budget: min $50 AUD, Jobs: Cloud Computing, Docker, Python, Ubuntu)
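For context, the usual fix for this kind of setup is to bind-mount the host directory holding the model into the container. A minimal sketch, assuming the model lives at `/workspace/model` on the RunPod host and the inference script is `infer.py` inside an image called `my-inference-image` (all three names are assumptions, not from the posting):

```shell
# Bind-mount the model directory (read-only) into the container at /models,
# then point the inference script at the mounted path.
docker run --gpus all \
  -v /workspace/model:/models:ro \
  my-inference-image \
  python infer.py --model-path /models
```

With a 100 GB model, copying the weights into the image is impractical, so a volume mount (or a RunPod network volume attached at a known path) is the standard approach.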
