Sam Altman's return to OpenAI highlights urgent need for trust and diversity


Source: https://venturebeat.com/ai/sam-altmans-return-to-openai-highlights-urgent-need-for-trust-and-diversity/



OpenAI’s announcement last night apparently resolved the saga that has beset it for the last five days: It is bringing back Sam Altman as CEO, and it has agreed on three initial board members – and more is to come.

However, as more details emerge from sources about what set off the chaos at the company in the first place, it's clear the company needs to shore up a trust issue that may potentially bedevil Altman as a result of his recent actions at the company. It's also not clear how it intends to clean up remaining thorny governance issues, including its board structure and mandate, which have become confusing and even contradictory.

For enterprise decision makers who are watching this saga and wondering what it all means for them, and for the credibility of OpenAI going forward, it's worth looking at the details of how we got here. After doing so, here's where I've come out: The outcome, at least as it looks right now, heralds OpenAI's continued shift toward a more aggressive stance as a product-oriented business. I predict that OpenAI's position as a serious contender in providing full-service AI products for enterprises, a role that demands trust and optimal safety, may diminish. However, its language models, specifically ChatGPT and GPT-4, will likely remain highly popular among developers and continue to be used as APIs in a wide range of AI products.

More on that in a second, but first a look at the trust factor that hangs over the company, and how it needs to be dealt with.


The good news is that the company has made strong headway by appointing some very credible initial board members, Bret Taylor and Lawrence Summers, and putting some strong guardrails in place. The outgoing board has insisted that an investigation be conducted into Altman's leadership, has blocked Altman and his co-founder Greg Brockman's return to the board, and has insisted that new board members be strong enough to stand up to Altman, according to the New York Times.

Altman’s criticism of board member Helen Toner’s work on AI safety

One of the main spark points for the board’s wrath against Altman reportedly came in October, when Altman criticized one of the board members, Helen Toner, because he thought a paper she had written was critical of OpenAI, according to earlier reporting by the Times.

In the paper, Toner, a director of strategy at Georgetown University’s Center for Security and Emerging Technology, included a three-page section that was a detailed and earnest account of the way OpenAI and a major competitor, Anthropic, approached the release of their latest large language models (LLMs) in March of 2023. OpenAI chose to release its model, in contrast with Anthropic, which chose to delay its model, called Claude, because of concerns about safety.

The most critical paragraph (on page 31) in Toner’s paper carries some academic wording, but you’ll get the gist:  

Anthropic’s decision represents an alternate strategy for reducing “race-to-the-bottom” dynamics on AI safety. Where the GPT-4 system card acted as a costly signal of OpenAI’s emphasis on building safe systems, Anthropic’s decision to keep their product off the market was instead a costly signal of restraint. By delaying the release of Claude until another company put out a similarly capable product, Anthropic was showing its willingness to avoid exactly the kind of frantic corner-cutting that the release of ChatGPT appeared to spur.

After complaining to Toner about this, Altman messaged colleagues saying he had reprimanded her because it was dangerous to the company, especially at a time when the FTC was investigating OpenAI’s usage of data, according to a source quoted by the Times.

Toner then reportedly disagreed with the criticism, saying it was an academic paper that researched the complexity in the modern era of how companies and countries signal their intentions in the market. Senior OpenAI leaders then discussed whether Toner should be removed, but co-founder Ilya Sutskever, who was deeply concerned about the risks of AI technology, sided with other board members to instead oust Altman for not being “consistently candid in his communications with the board.”

All of this came after some previous board frustrations with Altman about his moving too quickly on the product side, with other accounts suggesting that the company’s recent DevDay was also a major frustration for the board.

Altman’s stand-off with Toner was not a good look, considering the company’s founding mission and board mandate, which was to create safe artificial general intelligence (AGI) to benefit “humanity, not OpenAI investors.”

This background helps to explain how the company came to its decision last night about the conditions of bringing Altman back. After days of back and forth, Toner and another board member, Tasha McCauley, agreed yesterday to step down from the board, the Times’ sources said, because they agreed the company needed a fresh start. The board members feared that if all of them stepped down, it would suggest the board was admitting error, even though the board members thought they had done the right thing.

A board primed for growth mission

So they decided to keep the remaining board member who had stood by the decision to oust Altman: Adam D’Angelo. D’Angelo did most of the negotiating on behalf of the board with outsiders, including Altman and the interim CEO until last night, Emmett Shear. The other two initial board members announced by the company, Taylor and Summers, have impressive credentials. Taylor is as Silicon Valley establishment as you can get: he sold a $50 million business to Facebook, where he served as CTO, also worked at Google, and later became co-chief executive of Salesforce. Lawrence Summers is a former U.S. Treasury secretary with a strong track record of steering the economy.

Which brings me back to the point about where this company is headed, or at least seems to be headed given the outcome so far: toward an awesome product company. You can’t really start with a more rock-star board than this, when it comes to growth orientation. D’Angelo, an early CTO of Facebook and co-founder of Quora, and Taylor both have stellar product chops.

Given the various cards each player had in this game, the outcome appears to have a certain logic to it, despite the appearance of a very messy process and apparent incompetence. 

Jettisoning the two members of the board who had most espoused a philosophy of effective altruism (EA) also appears to have been a necessary outcome here for OpenAI to proceed as a viable company. Even one of the most prominent backers of the EA movement, Skype co-founder Jaan Tallinn, recently questioned the viability of running companies based on the philosophy, which is also associated with a fear about the risks AI poses to humanity.

“The OpenAI governance crisis highlights the fragility of voluntary EA-motivated governance schemes,” Tallinn told Semafor. “So the world should not rely on such governance working as intended.”

Whether Tallinn is actually correct on this point isn’t exactly clear. As the example of Anthropic shows, it may be possible to run an EA-led company. But in OpenAI’s case, at least, there was enough friction that something needed to change.

Diversity required

In its statement last night, the company said: “We are collaborating to figure out the details. Thank you so much for your patience through this.” The deliberation is a good sign, as the next steps will require that the company put together an expanded board of directors that is as credible as the first three members – if this company expects to stay on its massive success trajectory. A reputation for fairness and thoughtfulness is critically important when it comes to the needs of AI safety. And diversity, of course: As a reminder, Summers was forced to resign as president of Harvard because of comments he made about the reasons for under-representation of women in science and engineering (including the possibility that there exists a “different availability of aptitude at the high end”).

Conclusion

We’ll see over the next few days how the company puts the remaining pieces together, but for now the company looks set to move toward a more established, for-profit, product direction. 

From our reporting over the last few days and months, though, it appears that OpenAI is headed in the direction of working at scale for hundreds of millions of people, with general-purpose LLMs that millions of developers will love, and which will be good at many tasks. But its LLMs won’t necessarily be capable, or trusted, to do the task-specific, well-governed, safe, unbiased, and fully orchestrated work that enterprise companies will need AI to do. There, many other companies will fill the void.

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.

