Source: https://venturebeat.com/ai/openai-spring-updates-event-where-to-watch-when-what-to-expect/
Today is the day.
As per OpenAI co-founder and CEO Sam Altman’s posts on X/Twitter on Friday, the company that ushered in the generative AI age is set to announce big updates to its signature chatbot ChatGPT and the underlying GPT-4 large language model (LLM) that powers it.
When and where will OpenAI’s Spring Update take place?
The event is set to kick off at 10 am Pacific / 1 pm Eastern Time and will be livestreamed on OpenAI’s website (openai.com) and its YouTube channel. Already, as of the time of this article’s posting, more than 5,000 viewers are eagerly awaiting the event’s start.
The time for the YouTube stream to start is actually earlier, at 9 am PT/12 pm ET, though the actual presentations and content are not expected to start till an hour later.
What to expect from OpenAI’s Spring Updates event
While reports and rumors had swirled for weeks — including one from Reuters on May 10 suggesting OpenAI would release a search engine or product to rival Google, which is hosting its own I/O conference this week, and upstart AI unicorn Perplexity — Altman put the kibosh on those on Friday, stating on X, “not gpt-5, not a search engine, but we’ve been hard at work on some new stuff we think people will love! feels like magic to me.”
The sentiment was echoed by OpenAI co-founder and president Greg Brockman in his own post on X, stating vaguely OpenAI would produce a “live demo of some new work.”
If not a search engine product, what could OpenAI launch instead?
Activity by OpenAI employees on X has hinted at a conversational audio/voice assistant reminiscent of the character Samantha in the 2013 sci-fi movie “Her,” voiced by Scarlett Johansson.
Altman “Liked” a post on X from self-described “College student, anarcho-capitalist, techno-optimist” Spencer Schiff that said: “Currently watching Her to prepare for Monday.”
Meanwhile, at least 10 different OpenAI researchers and engineers have posted cryptic updates on X stating how excited they are about one another’s upcoming presentations, as compiled in a handy video montage from AI influencer “@SmokeAwayyy.”
Among those OpenAI employees who have posted about their excitement for the event are: Aidan Clark, a researcher and former Google DeepMind employee; Mo Bavarian, a research scientist whose X account bio says they are working on optimization and architecture of LLMs at OpenAI; researcher Rapha Gontijo Lopes; technical staff member Chelsea Sierra Voss; Steven Heidel, described as working on fine-tuning at OpenAI; technical staff member Bogo Giertler; and technical staff member Javier “Javi” Soto.
Altman also answered questions from the public using his verified account on Reddit on Friday related to OpenAI’s “Model Spec,” a new document published last week showing how it wants its AI products to behave.
In the course of that discussion, Altman answered a question from user “ankle_biter50” asking: “Will you making this new model mean that we will have chatGPT 4 and the current DALL-E free?” with a side eye/looking emoji, indicating there may be some truth to this idea.
Adding to speculation that OpenAI will be releasing a voice assistant of some kind is an observation from user “@ananayarora” on X pointing out that, looking at the source code on its website, OpenAI has “webRTC servers in place” to allow “phone calls inside of chatGPT” (thanks to Eluna.AI for spotting it).
In addition, X user “@testingcatalog” posted that the ChatGPT iOS update appears to have been updated recently as well with a new conversational user interface.
OpenAI has offered a ChatGPT with Voice interface on its iOS and Android apps since December 2023, allowing for the chatbot to speak to users back and forth and take their voice queries through their smartphones, automatically attempting to detect when the user pauses or finishes speaking. It’s accessible in the mobile apps by clicking the headphones icon beside the input text bar.
OpenAI also gave ChatGPT the ability to “speak” responses to user prompts through AI generated audio voices back in March 2024, adding a new feature called “Read Aloud” to its iOS, Android, and web apps that can be accessed by clicking the small “speaker” icon below a response.
The company further showcased a demo of its voice cloning technology later that month, which can convincingly reproduce a human speaker’s voice on new typed text using only a 15-second recording of the original speaker. However, like its realistic AI video generator Sora, the OpenAI “Voice Engine” has not been publicly released, and remains accessible only to select third-party groups and testers, per OpenAI’s stated desire to avoid or limit misuse.
Presumably, a new audio conversational assistant from OpenAI would build upon these earlier demos, releases, and technologies, and would perhaps allow a human user to “talk” to ChatGPT in a more naturalistic and humanlike open-ended conversation, without waiting for the assistant to detect a pause or finished query.
Perhaps relatedly, X user “@alwaysaq00” found code on OpenAI’s website indicating the presence of a new GPT-4 Omni model, aka GPT-4o.
Whatever OpenAI announces today, it will be sure to turn heads and leave the AI and broader tech industry discussing the ramifications for hours and likely days and weeks to come. Will it live up to the already surging hype — despite not being GPT-5?
Stay tuned and watch the event along with us at 10 am PT/1 pm ET, and we’ll be reporting on it here live.