This is a great upgrade, congrats!!
And just in time for WCHL!
I actually have some important feedback from the bootcamps and hackathons. Many teams stumble the moment they want to use OpenAI or any other external API, because they run into:
- the extra calls that HTTPS outcalls involve;
- the difficulty of handling the differing replies that come back from the API model.
I tell them to use the same AI Worker pattern, but for most of them it's a hassle they can't overcome within the duration of a hackathon, so they stay limited to the models and tools you provide.
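For context on the second pain point: an HTTPS outcall is performed by many replicas, and their responses have to agree byte-for-byte, so a transform step usually strips the non-deterministic fields (request IDs, timestamps, usage counters) from the reply. With an LLM API even the completion text itself can differ per replica, which is exactly why the worker pattern exists. A minimal sketch of such a normalizer (the field names assume an OpenAI-style JSON reply and are illustrative):

```python
import json

# Hypothetical transform step: strip fields that differ between replicas
# (request IDs, timestamps, usage counters) so that every replica sees an
# identical response body and consensus can succeed.
NON_DETERMINISTIC_KEYS = {"id", "created", "system_fingerprint", "usage"}

def normalize(body: str) -> str:
    data = json.loads(body)
    for key in NON_DETERMINISTIC_KEYS:
        data.pop(key, None)
    # sort_keys makes the serialized form identical across replicas
    return json.dumps(data, sort_keys=True)

# Two replicas receive replies that differ only in metadata:
reply_a = '{"id": "req-1", "created": 111, "choices": [{"text": "hi"}]}'
reply_b = '{"id": "req-2", "created": 222, "choices": [{"text": "hi"}]}'
assert normalize(reply_a) == normalize(reply_b)
```

Of course, if the model returns different completion text to each replica, no transform can reconcile them, and you are back to needing an off-chain worker.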
But for the future, and to solve this once and for all, I thought it would be great if there were an easy way to generalize / open up the LLM Canister and the AI Worker that fulfills its requests.
The LLM Canister is open source, but the AI Worker isn't, right? Could a version be developed that is distributed as a Docker image, so teams only need to add an OpenAI API key and deploy it to some off-chain cloud (AWS, DigitalOcean, Vercel, Supabase, etc.)?
It would only need to support the same methods/tools that the DFINITY models already support. If a team wants other tooling, they can fork the source code, extend it, and build a new Docker image.
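To make the idea concrete, here is a rough sketch of what the core of such a generalized worker could look like. Everything here is hypothetical: the worker would poll the LLM Canister for pending prompts (via an agent library such as ic-py, stubbed out here) and forward them to an OpenAI-compatible endpoint. Only the two pure helpers are shown:

```python
import json
import urllib.request

OPENAI_URL = "https://api.openai.com/v1/chat/completions"

def build_request(prompt: str, model: str, api_key: str) -> urllib.request.Request:
    """Translate a prompt pulled from the canister into an
    OpenAI chat-completions HTTP request."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return urllib.request.Request(
        OPENAI_URL,
        data=body.encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )

def extract_answer(response_body: str) -> str:
    """Pull the completion text out of an OpenAI-style JSON reply,
    ready to be written back to the LLM Canister."""
    return json.loads(response_body)["choices"][0]["message"]["content"]
```

A team deploying the image would then just set `OPENAI_API_KEY` (and, say, the canister ID) as environment variables; swapping in another provider would mean forking and changing these two functions.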
Could this be considered internally? Having the models is already great and serves maybe 50% of the cases, but for competitive startups that want to offer the best AI models out there, an API-friendly version would be great.
That way, I think we would satisfy 99% of the needs.
Just some food for thought!
P.S.: tagging @ielashi and @aespieux for visibility.