OpenAI has introduced fine-tuning for GPT-3.5 Turbo, enabling artificial intelligence (AI) developers to improve the model's performance on specific tasks using their own data. The development has been met with both excitement and criticism from developers.
OpenAI explained that fine-tuning lets developers tailor GPT-3.5 Turbo's capabilities to their requirements. For example, a developer could fine-tune the model to generate custom code or to reliably summarize legal documents in German, using a dataset drawn from the client's business operations.
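As a rough illustration, a job like the German legal-summary example can be launched through OpenAI's Python SDK. This is a minimal sketch: the dataset file name is hypothetical, and the training data must already be in OpenAI's chat-formatted JSONL layout.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload a JSONL training file; each line holds one chat-formatted example:
# {"messages": [{"role": "user", ...}, {"role": "assistant", ...}]}
training_file = client.files.create(
    file=open("legal_summaries_de.jsonl", "rb"),  # hypothetical dataset
    purpose="fine-tune",
)

# Start the fine-tuning job against the GPT-3.5 Turbo base model
job = client.fine_tuning.jobs.create(
    model="gpt-3.5-turbo",
    training_file=training_file.id,
)
print(job.id, job.status)  # poll the job until it reports "succeeded"
```

Once the job completes, the resulting fine-tuned model ID (prefixed `ft:gpt-3.5-turbo`) can be used in place of the base model in chat completion calls.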
One early reaction on X captured the pricing concern:

"You can now fine-tune GPT-3.5-Turbo! Seems like inference is significantly more expensive (8x more) though. My guess is that anyone with the ability to deploy their own models won't be swayed by this." (https://t.co/p2LbSq4D2H)
The announcement has drawn a cautious response from developers. A comment attributed to X user Joshua Segeren notes that while fine-tuning for GPT-3.5 Turbo is intriguing, it is not a comprehensive fix: in his experience, improving prompts, using vector databases for semantic search, or switching to GPT-4 often yields better results than custom training, and setup and ongoing maintenance costs also have to be factored in.
The base GPT-3.5 Turbo models start at $0.0004 per 1,000 tokens (tokens are the basic units of text processed by large language models). Fine-tuned versions cost considerably more: $0.012 per 1,000 input tokens and $0.016 per 1,000 output tokens, plus a one-time training fee that scales with the volume of training data.
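For context, a back-of-the-envelope comparison under the rates quoted above; the workload size is an illustrative assumption, and the one-time training fee is excluded:

```python
# Rough cost comparison for a workload of 1M input + 1M output tokens,
# using the per-1K-token rates quoted in this article.
BASE_RATE = 0.0004                   # $/1K tokens, base GPT-3.5 Turbo (as quoted)
FT_INPUT, FT_OUTPUT = 0.012, 0.016   # $/1K tokens, fine-tuned model

tokens_in = tokens_out = 1_000_000
base_cost = (tokens_in + tokens_out) / 1000 * BASE_RATE
ft_cost = tokens_in / 1000 * FT_INPUT + tokens_out / 1000 * FT_OUTPUT
print(f"base: ${base_cost:.2f}, fine-tuned: ${ft_cost:.2f}")
# base: $0.80, fine-tuned: $28.00
```

At these quoted rates, inference on a fine-tuned model is many times more expensive than on the base model, which is the trade-off developers are weighing against the gains from custom training.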
The feature nonetheless holds significance for enterprises and developers looking to tailor model behavior to their own data.