GPT-4 is here: What you need to know

The highly anticipated GPT-4 has been released by OpenAI, and it brings a range of new capabilities, performance improvements, and pricing implications. We’ll dive into the key features of GPT-4, the pricing structure, and how you can get access to this powerful language model.

Key New Capabilities

  • Token limit increase: GPT-4 raises the token limit from roughly 4,000 tokens to an impressive 32,000 tokens in its largest variant, allowing for more comprehensive and meaningful conversations and analyses.

  • Multi-modal (Text + Image Capabilities): GPT-4 now supports multi-modal capabilities, seamlessly integrating text and image understanding to provide richer contextual understanding.

  • Improved reasoning capabilities: GPT-4 showcases significant improvements in its reasoning abilities, making it even more useful for complex tasks and problem-solving.

  • Improved safety: OpenAI has made significant strides in safety, implementing robust measures to mitigate risks associated with harmful and untruthful outputs.
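Before sending a large document to a model, it's worth checking whether it fits the context window. Here's a rough sketch using the common ~4 characters-per-token heuristic for English text (an approximation; a real tokenizer such as OpenAI's tiktoken library gives exact counts, and the limits below are the round figures cited in this post):

```python
# Rough sketch: check whether a prompt fits within a model's context window.
# Limits are the approximate figures from this post; ~4 chars/token is a
# crude heuristic, not a real tokenizer.

CONTEXT_LIMITS = {
    "gpt-3.5-turbo": 4_000,
    "gpt-4-32k": 32_000,
}

def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def fits_in_context(text: str, model: str, reserve_for_reply: int = 500) -> bool:
    """True if the text plus a reply budget fits inside the model's window."""
    return estimate_tokens(text) + reserve_for_reply <= CONTEXT_LIMITS[model]

document = "word " * 8_000  # ~40,000 characters, roughly 10,000 tokens

print(fits_in_context(document, "gpt-3.5-turbo"))  # False: too big for GPT-3.5
print(fits_in_context(document, "gpt-4-32k"))      # True: fits the 32k window
```

The practical upshot: whole reports, long transcripts, or entire codebases that previously had to be chunked can now be handled in a single GPT-4 request.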

Performance Improvements

OpenAI evaluated GPT-4 on a number of human test scenarios as well as ML benchmarks, and it showed consistently equal or significantly improved performance. Full results can be seen on the OpenAI website. As noted below, these benchmarks provide a general sense of performance, but OpenAI is pushing for community contributions to provide more realistic benchmarks in real-world scenarios.

Pricing implications

While not unexpected, GPT-4 is the most expensive model to date, costing as much as 60x more per 1,000 tokens than GPT-3.5 Turbo. This will require developers to think a little harder about whether GPT-4 is really needed to satisfy their use case, or whether GPT-3/GPT-3.5 Turbo will produce similar results for a tiny fraction of the cost.

GPT-4 pricing also introduces a more complex structure: prompt tokens are billed at a lower rate than completion tokens, which encourages longer prompts.
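To make the split-rate structure concrete, here's a small cost-estimation sketch. The rates below are the per-1,000-token prices published around launch (an assumption on my part; always check OpenAI's pricing page for current figures):

```python
# Sketch: compare per-request cost across models with split prompt/completion
# rates. Prices are $/1,000 tokens as published around launch (assumed; check
# OpenAI's pricing page for current rates).

PRICES = {  # model: (prompt $/1K tokens, completion $/1K tokens)
    "gpt-3.5-turbo": (0.002, 0.002),
    "gpt-4-8k":      (0.03,  0.06),
    "gpt-4-32k":     (0.06,  0.12),
}

def request_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Dollar cost of one request: tokens billed at their respective rates."""
    prompt_rate, completion_rate = PRICES[model]
    return (prompt_tokens / 1000) * prompt_rate + \
           (completion_tokens / 1000) * completion_rate

# Example: a 2,000-token prompt with a 500-token completion
cheap   = request_cost("gpt-3.5-turbo", 2000, 500)  # $0.005
premium = request_cost("gpt-4-32k", 2000, 500)      # $0.18
print(f"GPT-4 32k costs {premium / cheap:.0f}x more for this request")
```

Note how the ratio depends on the prompt/completion mix: completion-heavy workloads hit the top 60x multiple, while prompt-heavy ones come out somewhat cheaper relative to GPT-3.5 Turbo.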

GPT-3.5 Turbo Pricing

Introduction of Evals

One of the hidden difficulties for AI providers of generalized models is that it’s almost impossible for them to benchmark performance across all real-world use cases. To address this challenge, OpenAI has introduced Evals, which will help assess the effectiveness of GPT-4 and future models in real-life applications by allowing community users to submit evaluations within an approved framework managed on GitHub.
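At its core, an eval just runs a model over labeled samples and scores the outputs against ideal answers. Here's a minimal sketch of that idea; the sample shape loosely mirrors the JSONL style used in OpenAI's evals repository (an assumption on my part; see the repo on GitHub for the real spec), and `fake_model` is a stand-in for an actual model call:

```python
# Minimal sketch of an exact-match eval: run a model function over labeled
# samples and report the fraction of correct answers. The sample format is
# a loose approximation of the evals repo's JSONL style, not the real spec.

def exact_match_eval(samples, model_fn):
    """Fraction of samples where the model's output matches the ideal answer."""
    correct = sum(
        model_fn(sample["input"]).strip() == sample["ideal"].strip()
        for sample in samples
    )
    return correct / len(samples)

# Toy samples and a hypothetical stand-in "model" for illustration
samples = [
    {"input": "What is 2 + 2?", "ideal": "4"},
    {"input": "Capital of France?", "ideal": "Paris"},
]
fake_model = lambda prompt: {"What is 2 + 2?": "4",
                             "Capital of France?": "Lyon"}.get(prompt, "")

print(exact_match_eval(samples, fake_model))  # 0.5
```

By crowd-sourcing suites like this across many domains, OpenAI gets benchmark coverage of real-world use cases no single provider could assemble alone.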

Details on Evals

How to get access

GPT-4 is currently available to ChatGPT Plus subscribers, with API access rolling out gradually through a waitlist. OpenAI also impressed with a livestream release video highlighting some of the new capabilities; you can watch the full video here:

See the full GPT-4 release livestream


