The global tech industry is witnessing a milestone as the much-anticipated GPT-4 API from OpenAI becomes widely accessible to all paying customers.
The GPT-4 API has been coveted by developers worldwide since its introduction in March, but until now it has only been available to select customers. These developers can now leverage the capabilities of GPT-4, widely heralded as the most capable model currently available.
GPT-4 for all
The GPT-4 API provides an 8K context window, which refers to the amount of text the model can “consider” or “remember” when generating a response. In practice, the model takes roughly the last 8,000 tokens of the conversation into account when generating its output (a token is a word fragment of a few characters, so 8,000 tokens corresponds to roughly 6,000 words of English text). This limit is crucial for maintaining consistency and coherence in the model’s responses.
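For developers who want to know how close a prompt is to that limit, token counts can be measured with OpenAI’s open-source tiktoken library. The snippet below is a minimal sketch; the sample string is just a placeholder.

```python
# Minimal sketch: counting tokens with OpenAI's open-source tiktoken library
# to estimate how much text fits in GPT-4's 8K context window.
import tiktoken

# Load the tokenizer used by GPT-4.
encoding = tiktoken.encoding_for_model("gpt-4")

text = "OpenAI's GPT-4 API is now generally available to paying customers."
tokens = encoding.encode(text)

print(f"{len(tokens)} tokens")  # well under the ~8,000-token context limit
```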
The GPT-4 API is accessible to existing API developers with a successful payment history. OpenAI plans to open access to new developers by the end of the month as compute availability allows, and then begin raising rate limits.
Additional API releases & developments
The company is also making the GPT-3.5 Turbo, DALL·E (image generation), and Whisper (audio) APIs generally available, suggesting an increased readiness of these models for production-scale use. Further, OpenAI is working towards enabling fine-tuning for GPT-4 and GPT-3.5 Turbo, a feature that developers can anticipate later this year.
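For a sense of what production use of these endpoints looks like, here is a hedged sketch using the official openai Python package (v1-style client); the audio file name, prompt, and image size are placeholders rather than anything from the announcement.

```python
# Hedged sketch of calling the newly generally available Whisper and DALL·E
# endpoints with the official `openai` Python package (v1-style client);
# the audio file name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Whisper: transcribe an audio file to text.
with open("meeting.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )
print(transcript.text)

# DALL·E: generate an image from a text prompt.
image = client.images.generate(
    prompt="a watercolor sketch of a robot reading the news",
    n=1,
    size="512x512",
)
print(image.data[0].url)
```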
Fine-tuning, in the context of AI models, refers to taking a pre-trained model (one that has already learned general patterns from a large dataset) and customizing or ‘tuning’ it on a more specific task or dataset.
This approach lets developers leverage the base model’s broad learning while tailoring its behavior to their specific requirements, improving accuracy and efficiency for their particular applications.
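As a rough illustration only, a fine-tuning workflow on the API side might look like the sketch below, using the openai Python package’s file-upload and fine-tuning endpoints; the JSONL file and base model name are assumptions, since fine-tuning for GPT-3.5 Turbo and GPT-4 had not yet shipped at the time of the announcement.

```python
# Illustrative sketch of a fine-tuning workflow with the `openai` Python
# package (v1-style client). The JSONL file and model name are assumptions;
# fine-tuning for gpt-3.5-turbo and gpt-4 was still upcoming at announcement time.
from openai import OpenAI

client = OpenAI()

# 1. Upload a JSONL file of training examples.
training_file = client.files.create(
    file=open("examples.jsonl", "rb"),
    purpose="fine-tune",
)

# 2. Start a fine-tuning job on top of a base model.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)

# 3. The job runs asynchronously; its id can be used to check status later.
print(job.id, job.status)
```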
Chat Completions
The rise of the chat-based interface used with GPT-4 has been dramatic. Since its introduction in March, the Chat Completions API has grown to account for 97% of OpenAI’s GPT API usage, effectively replacing the older freeform-prompt Completions API. The shift to a more structured, chat-based interface has proven to be a game-changer, offering greater flexibility and better results.
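For readers unfamiliar with the difference, Chat Completions takes a structured list of role-tagged messages rather than a single freeform prompt. The sketch below shows a minimal GPT-4 call with the openai Python package; the messages themselves are purely illustrative.

```python
# Minimal sketch of a Chat Completions call to GPT-4 with the `openai`
# Python package (v1-style client); the messages are purely illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",  # 8K-context model now generally available
    messages=[
        {"role": "system", "content": "You are a concise technical assistant."},
        {"role": "user", "content": "Summarize what an API context window is."},
    ],
)

print(response.choices[0].message.content)
```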
However, these improvements do not come without a price. OpenAI has announced a deprecation plan for older models of the Completions API.
As of Jan. 4, 2024, older completion models will be retired and replaced with new models. This is part of OpenAI’s increased investment in the Chat Completions API and its efforts to optimize compute capacity.
“While this API will remain accessible, we will label it as ‘legacy’ in our developer documentation starting today,” the company wrote.
Developers who wish to continue using their fine-tuned models beyond Jan. 4, 2024, will need to fine-tune replacements atop the new base GPT-3 models or the more recent models like gpt-3.5-turbo and gpt-4.
Embeddings models deprecated
OpenAI has also signaled the deprecation of older “embeddings” models in line with these developments. Users must migrate to “text-embedding-ada-002” by Jan. 4, 2024. OpenAI has reassured developers using the older models that it will cover the financial cost of re-embedding content with the new model.
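In practice, migrating means re-embedding existing content with text-embedding-ada-002. The sketch below shows what that call looks like with the openai Python package; the input strings are placeholders.

```python
# Sketch of re-embedding content with text-embedding-ada-002 using the
# `openai` Python package (v1-style client); the input strings are placeholders.
from openai import OpenAI

client = OpenAI()

documents = [
    "GPT-4 is now generally available.",
    "Older embeddings models are being retired.",
]

response = client.embeddings.create(
    model="text-embedding-ada-002",
    input=documents,
)

# One 1536-dimensional vector per input string.
vectors = [item.embedding for item in response.data]
print(len(vectors), len(vectors[0]))
```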
As OpenAI leads this shift, it also raises questions about the future of older models and the implications for developers and companies that rely on them. The pivot underlines the swift and relentless pace of AI innovation shaping industries worldwide.