OpenAI has confirmed that it is hosting an event on May 13 to showcase new ChatGPT and GPT-4 features. In a post on social media platform X (formerly Twitter), the company said the event will be livestreamed on OpenAI’s website at 10AM PT (10:30 PM IST) to “demo some ChatGPT and GPT-4 updates”. According to reports, OpenAI will announce a new multimodal AI model that can talk back to users and even recognise objects.
We’ll be streaming live on https://t.co/OcO6MLUYGH at 10AM PT Monday, May 13 to demo some ChatGPT and GPT-4 updates.
— OpenAI (@OpenAI) May 10, 2024
Last week, Bloomberg reported that OpenAI is developing a feature for ChatGPT that can search the web and cite sources in its results. However, OpenAI CEO Sam Altman said in a post on X that the company is not planning to announce an AI-powered search engine at the event, nor is the next-generation GPT-5 model likely to be unveiled on May 13. “Not gpt-5, not a search engine, but we’ve been hard at work on some new stuff we think people will love! Feels like magic to me,” Altman said in his post.
not gpt-5, not a search engine, but we’ve been hard at work on some new stuff we think people will love! feels like magic to me.
monday 10am PT. https://t.co/nqftf6lRL1
— Sam Altman (@sama) May 10, 2024
OpenAI event: What to expect
According to a report by The Verge, citing The Information, OpenAI has been testing a new AI model with multimodal capabilities, meaning it can process information from different modalities, including images, videos, and text. The report says the new model offers faster, more accurate interpretation of images and audio than current-generation models. This would help customer service bots “better understand the intonation of callers’ voices or whether they’re being sarcastic.” Additionally, the model will reportedly be able to solve mathematical problems and translate “real-world signs.”
It is also likely that ChatGPT will gain new abilities, including a phone call feature. According to The Verge, ChatGPT contains code suggesting that users will soon be able to make phone calls using the AI-powered chatbot. There is also evidence that OpenAI has provisioned servers intended for real-time audio and video communication.
While the calling feature could be integrated into the ChatGPT smartphone app, the new multimodal AI model will not be GPT-5, which Altman has already ruled out in his post. It could instead be an improved GPT-4 model, similar to GPT-4 Turbo.