
This ongoing trend underscores the dynamic landscape of AI research and development. Instances of key figures departing to work elsewhere or voicing apprehensions have been highlighted, signaling a larger conversation within the tech industry about AI ethics and safety.
As OpenAI shifted its focus to GPT models, Weng transitioned to the applied AI research team and, later, to building safety systems for the startup.
While OpenAI’s safety systems unit now boasts more than 80 experts, concerns continue to arise over the company’s approach to safety amid its pursuit of highly advanced AI systems.
Weng has been serving as VP of research and safety since August, and she previously led OpenAI’s safety systems team.
In a post on X, Weng shared that “after 7 years at OpenAI, I feel ready for a fresh start and to explore new opportunities.” Weng said her last day will be November 15, but she did not disclose her future plans.
In announcing the difficult decision to depart OpenAI, Weng expressed pride in the Safety Systems team’s accomplishments and confidence in its continued success.
Regarding Weng’s departure, OpenAI said it is working on a transition plan to fill her role, expressing gratitude for her contributions and confidence in the team’s ability to maintain safety standards.
Several other executives have recently left OpenAI, with some joining competitors or pursuing their own ventures.
Another one of OpenAI’s top safety researchers, Lilian Weng, announced on Friday that she is leaving the startup. Weng’s exit adds to a series of departures of AI safety researchers, policy researchers, and executives from the company over the past year, with some critics claiming that OpenAI has been prioritizing commercial products over AI safety. Among those who recently left are Ilya Sutskever and Jan Leike, who were leaders of OpenAI’s now-dissolved Superalignment team.
Upon joining OpenAI in 2018, Weng initially worked on the robotics team, which developed a robot hand capable of solving a Rubik’s cube after two years of effort.