There is no slowing the meteoric rise of generative AI, and OpenAI has just rolled out its most powerful model yet, GPT-4. What's different about the latest iteration?
Of the many Silicon Valley giants vying to own the generative AI space, Microsoft has struck gold with OpenAI’s ChatGPT.
Following ChatGPT's full-scale integration into Microsoft's Bing search engine, OpenAI has unveiled the fourth iteration of its AI language model, GPT-4, which is already attracting huge commercial interest.
Touted as 'more creative and collaborative than ever before,' the tech has already been picked up by several companies' development teams.
Morgan Stanley is using it to organise its wealth management data, for instance, while Stripe Inc is applying it to strengthen its fraud-detection defences. Even language-learning app Duolingo is using GPT-4 to help users practise real-world conversations and understand their mistakes.
Announcing GPT-4, a large multimodal model, with our best-ever results on capabilities and alignment: https://t.co/TwLFssyALF pic.twitter.com/lYWwPjZbSg
— OpenAI (@OpenAI) March 14, 2023
These are but a few early examples of how generative language AI is disrupting global job markets, but just how much better is GPT-4 than its predecessor?
OpenAI's announcement describes the distinction as 'subtle,' with the key upgrade being the new multimodal feature. This means the system can accept both text and image inputs; unlike OpenAI's image generator DALL-E, however, it interprets or solves the visual input and responds purely in text.
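To make the text-in-image-in, text-out flow concrete, here is a minimal sketch of a multimodal request, assuming the official OpenAI Python SDK and a placeholder image URL (the helper name and URL below are illustrative, not part of OpenAI's announcement):

```python
def build_multimodal_message(question: str, image_url: str) -> dict:
    """Combine a text prompt and an image reference into one user message,
    as the chat API expects for multimodal input."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": question},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }


message = build_multimodal_message(
    "What is unusual about this image?",
    "https://example.com/meme.png",  # placeholder URL, not a real image
)

# Sending the request requires an API key and a vision-capable GPT-4 model;
# the reply comes back as plain text, never as an image:
#
#   from openai import OpenAI
#   client = OpenAI()
#   reply = client.chat.completions.create(model="gpt-4", messages=[message])
#   print(reply.choices[0].message.content)
```

The design point is visible in the payload itself: images travel alongside text inside a single user message, but the model's output channel is text only.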
Whether the user wants a visual mathematics problem solved or the nuance of an abstract meme explained, GPT-4 navigates the prompts brilliantly. See the example below, and how eerily human-like the clarification is.