HTTPS outcall to an LLM

I was wondering if anyone has experience with this. Given that LLMs are non-deterministic in their responses, we could end up in a situation where every node in the subnet gets a different response, each semantically correct but not byte-identical. How should I think about this? Thanks in advance.


Yes, @peterparker has created a very nice demo using Juno making an HTTPS outcall to the OpenAI API. You can find more here: Juno <> OpenAI demo


Correct, the responses were not deterministic, so I built a proxy that caches the response. Note that it was just an experiment.

In case it's useful, I recently listed the common technical requirements and proxy solutions for HTTPS outcalls in the Juno docs: https://juno.build/docs/guides/rust#technical-requirements
