Note: Unlike inference requests, Prompt Render API calls are processed through Portkey’s Control Plane services.
Example: Using Prompt Render output in a new request
Here’s how you can take the output from the Render API and use it to make a separate LLM call. This example uses the OpenAI SDK, but the same pattern works with other frameworks such as LangChain.
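As a minimal sketch of that flow: the snippet below takes a render response, pulls out the rendered messages and any model parameters, and builds the keyword arguments for an OpenAI-style chat call. The exact response shape shown here (a `data` object containing `messages` and optional fields like `model` and `temperature`) is an assumption for illustration; check the payload your render endpoint actually returns.

```python
# Reusing a Prompt Render response in a plain OpenAI-style chat call.
# NOTE: the "data" shape below is a hypothetical example, not a
# guaranteed contract of the render endpoint.

def build_chat_request(render_response: dict) -> dict:
    """Turn a render response into kwargs for chat.completions.create()."""
    data = render_response["data"]
    kwargs = {"messages": data["messages"]}
    # Carry over any model parameters the prompt template defined.
    for key in ("model", "temperature", "max_tokens"):
        if key in data:
            kwargs[key] = data[key]
    return kwargs

# Example render output (hypothetical values):
rendered = {
    "data": {
        "model": "gpt-4o",
        "temperature": 0.7,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Summarize our refund policy."},
        ],
    }
}

request_kwargs = build_chat_request(rendered)
print(request_kwargs["model"])  # gpt-4o

# With the OpenAI SDK you would then pass these through directly:
#   from openai import OpenAI
#   client = OpenAI()
#   completion = client.chat.completions.create(**request_kwargs)
```

Because the render step returns fully resolved messages, the follow-up call is an ordinary chat completion request and needs no prompt-template logic of its own.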
