
New models, faster response and a new documentation site

Huy Tran · 2 min read

It's been a while since our last update. Over the past few months, we've been working hard to improve the product and the experience for our users, constantly shipping new updates, so it's time for a recap.

🚀 New models and faster response

We've rewritten the entire chat streaming backend to make it faster and more stable.

Our model selection has been updated with new models, including GPT-3.5 Turbo 16k and GPT-4. Here's the full list of models we support:

  • GPT-3.5 Turbo 16k: The new default model for all users, with a 16k-token context window, replacing the old 4k-token model. That means bigger and more complex diagrams.
  • GPT-4: The most capable model, very good at logical reasoning and creativity. It has a context window of 8k tokens.
  • GPT-4 Turbo 128k: GPT-4 with a context window of 128k tokens. This model is still in preview, so it may not be stable, and it will be rate limited.

📖 New documentation site

We've also released a new documentation site at docs.chatuml.com. This site will be the home for all the tutorials and guides for ChatUML.

🤫 One more thing...

A few months back, we celebrated our 5,000th user. As of today, we've just surpassed 150,000 users 🎉. We're so happy to see the community growing, and we're grateful for all the support from our users.

As a thank-you, we have a little gift for all users: use the code FRIENDS150 to get 30% off when purchasing any package. The code is valid until 11:59 PM Feb 29, 2024 (PST) and can be applied once per user.