Today, we shared dozens of new additions and improvements, and reduced pricing across many parts of our platform. These include:

  • New GPT-4 Turbo model that is more capable and cheaper, and supports a 128K context window
  • New Assistants API that makes it easier for developers to build their own assistive AI apps that have goals and can call models and tools
  • New multimodal capabilities in the platform, including vision, image creation (DALL·E 3), and text-to-speech (TTS)
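For developers, the new model is reachable through the same Chat Completions endpoint as earlier GPT-4 models. A minimal sketch of a request body, assuming the standard REST endpoint (`POST /v1/chat/completions`); the model identifier shown is a placeholder, so check the current model list for the exact name:

```python
import json

# Hypothetical Chat Completions request payload. The model id
# "gpt-4-turbo-preview" is an assumption for illustration; use
# whatever identifier the models endpoint currently reports.
payload = {
    "model": "gpt-4-turbo-preview",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize this announcement in one sentence."},
    ],
    "max_tokens": 256,
}

# Serialize to the JSON body that would be POSTed with an API key
# in the Authorization header.
body = json.dumps(payload)
```

The larger 128K context window changes nothing about the request shape; it simply raises how many tokens the `messages` array may contain.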

We'll begin rolling out new features to OpenAI customers starting at 1pm PT today.

