3 Sources
[1]
Sam Altman comes out swinging at The New York Times | TechCrunch
From the moment OpenAI CEO Sam Altman stepped onstage, it was clear this was not going to be a normal interview. Altman and his chief operating officer, Brad Lightcap, stood awkwardly toward the back of the stage at a jam-packed San Francisco venue that typically hosts jazz concerts. Hundreds of people filled steep theatre-style seating on Wednesday night to watch Kevin Roose, a columnist with The New York Times, and Platformer's Casey Newton record a live episode of their popular technology podcast, Hard Fork. Altman and Lightcap were the main event, but they'd walked out too early. Roose explained that he and Newton were planning to -- ideally, before OpenAI's executives were supposed to come out -- list off several headlines that had been written about OpenAI in the weeks leading up to the event. "This is more fun that we're out here for this," said Altman. Seconds later, the OpenAI CEO asked, "Are you going to talk about where you sue us because you don't like user privacy?" Within minutes of the program starting, Altman hijacked the conversation to talk about The New York Times lawsuit against OpenAI and its largest investor, Microsoft, in which the publisher alleges that Altman's company improperly used its articles to train large language models. Altman was particularly peeved about a recent development in the lawsuit, in which lawyers representing The New York Times asked OpenAI to retain consumer ChatGPT and API customer data. "The New York Times, one of the great institutions, truly, for a long time, is taking a position that we should have to preserve our users' logs even if they're chatting in private mode, even if they've asked us to delete them," said Altman. "Still love The New York Times, but that one we feel strongly about." 
For a few minutes, OpenAI's CEO pressed the podcasters to share their personal opinions about the New York Times lawsuit -- they demurred, noting that as journalists whose work appears in The New York Times, they are not involved in the lawsuit. Altman and Lightcap's brash entrance lasted only a few minutes, and the rest of the interview proceeded, seemingly, as planned. However, the flare-up felt indicative of the inflection point Silicon Valley seems to be approaching in its relationship with the media industry. In the last several years, multiple publishers have brought lawsuits against OpenAI, Anthropic, Google, and Meta for training their AI models on copyrighted works. At a high level, these lawsuits argue that AI models have the potential to devalue, and even replace, the copyrighted works produced by media institutions. But the tides may be turning in favor of the tech companies. Earlier this week, OpenAI competitor Anthropic received a major win in its legal battle against publishers. A federal judge ruled that Anthropic's use of books to train its AI models was legal in some circumstances, which could have broad implications for other publishers' lawsuits against OpenAI, Google, and Meta. Perhaps Altman and Lightcap felt emboldened by the industry win heading into their live interview with The New York Times journalists. But these days, OpenAI is fending off threats from every direction, and that became clear throughout the night. Mark Zuckerberg has recently been trying to recruit OpenAI's top talent by offering them $100 million compensation packages to join Meta's AI superintelligence lab, Altman revealed weeks ago on his brother's podcast. When asked whether the Meta CEO really believes in superintelligent AI systems, or if it's just a recruiting strategy, Lightcap quipped: "I think [Zuckerberg] believes he is superintelligent." 
Later, Roose asked Altman about OpenAI's relationship with Microsoft, which has reportedly been pushed to a boiling point in recent months as the partners negotiate a new contract. While Microsoft was once a major accelerant to OpenAI, the two are now competing in enterprise software and other domains. "In any deep partnership, there are points of tension and we certainly have those," said Altman. "We're both ambitious companies, so we do find some flashpoints, but I would expect that it is something that we find deep value in for both sides for a very long time to come." OpenAI's leadership today seems to spend a lot of time swatting down competitors and lawsuits. That may get in the way of OpenAI's ability to solve broader issues around AI, such as how to safely deploy highly intelligent AI systems at scale. At one point, Newton asked OpenAI's leaders how they were thinking about recent stories of mentally unstable people using ChatGPT to traverse dangerous rabbit holes, including to discuss conspiracy theories or suicide with the chatbot. Altman said OpenAI takes many steps to prevent these conversations, such as by cutting them off early, or directing users to professional services where they can get help. "We don't want to slide into the mistakes that I think the previous generation of tech companies made by not reacting quickly enough," said Altman. To a follow-up question, the OpenAI CEO added, "However, to users that are in a fragile enough mental place, that are on the edge of a psychotic break, we haven't yet figured out how a warning gets through."
[2]
OpenAI execs whine about the New York Times lawsuit and user privacy during live NYT event, get roasted by NYT journalist: 'It must be really hard when someone does something with your data you don't want them to'
OpenAI CEO Sam Altman and COO Brad Lightcap were the guests on a recent live episode of the New York Times' technology podcast, Hard Fork, and entered the San Francisco venue like a pair of WWE wrestlers. I'm not kidding: hosts Kevin Roose and Casey Newton began their introduction before Altman and Lightcap strode out early to take their seats. Roose told the crowd they'd been planning to list some recent NYT headlines about OpenAI to set the scene, and after Altman awkwardly insisted "this is more fun that we're out here for this" the chest-beating began. "Are you going to talk about where you sue us because you don't like user privacy?" Altman demanded, as if he were delivering some great zinger, which had the hosts and audience laughing and saying "woo!" like Ric Flair. Altman had clearly arrived in a bit of a mood about the NYT's ongoing lawsuit against OpenAI and Microsoft (the company's largest investor), and the fact that neither of the hosts is actually involved with that suit was immaterial. The NYT's lawsuit against OpenAI alleges that the AI company used the traditional media company's articles in the training of its large language models. Altman's spikiness relates to a new development in which the NYT's lawyers asked that OpenAI be compelled to retain consumer ChatGPT data and API customer data. "The New York Times, one of the great institutions for a long time, is taking a position that we should have to preserve our users' logs even if they're chatting in private mode, even if they've asked us to delete them," said Altman. "And the lawsuit we're having to fight out, but that thing, we think privacy and AI is an extremely important concept to get right for the future, still love you guys, still love The New York Times, but that one we feel strongly about." "Well thank you for your views and I'll just say it must be really hard when someone does something with your data you don't want them to," responds Roose to laughs from the entire room. 
"I don't know what that's like personally but maybe someone else does." "I was recently told by another guest on this stage that the singularity would be 'gentle'," adds co-host Newton to further laughs. And they're right of course... the dude whose technology is built on scraping as much content as possible is now an advocate for privacy. Altman tried to get the hosts involved in commenting on the lawsuit but they weren't having any of it. "I think people should read the relevant filings and make up their own minds," said Roose as Altman continually asked about the suit. Things somewhat simmered down after this, with the hosts asking about "that rascal Mark Zuckerberg" poaching OpenAI's employees in recent months, as Meta amps up its own investment in AI and the Zuck's belief in superintelligence. Asked if they think Zuckerberg's belief in such technology is sincere, Lightcap said simply: "I think [Zuckerberg] believes he is superintelligent." The rest of the interview sees the OpenAI execs go over some familiar ground, with Altman making his usual overblown claims about everyone now having a "PhD level intelligence" in their pocket. The reported tensions with Microsoft are hand-waved away by Altman as "points of tension" in a relationship that brings "deep value" to both. Towards the end of the chat, Newton asks the pair about recent stories of mentally unwell people using ChatGPT, whether that's users who think they've connected with god, people going into the weeds on conspiracy theories, or individuals with suicidal thoughts. "If conversations are going down a rabbit hole in this way we try to cut them off or suggest something different to the user," begins Altman, adding that ChatGPT will suggest professional services as an alternative. "We don't want to slide into the mistakes that I think the previous generation of tech companies made by not reacting quickly enough to a new thing, a psychological reaction." 
Asked whether ChatGPT should carry a warning to the effect that it is not in fact god, Altman says "the model will tell you things like that, then users will write us and say 'you changed this.'" But after the hosts follow up, he does admit that "to users that are in a fragile enough mental place, that are on the edge of a psychotic break, we haven't yet figured out how a warning gets through there." Lightcap then tries to spin towards the positives, claiming ChatGPT has "rehabilitated marriages" and for some people "it's the first time in their life where they've had something they can confide in... and it doesn't cost $1000 an hour." Then, a very relatable anecdote. "I was surfing in Costa Rica the other day," says Lightcap, when he got chatting to a local and mentioned he worked at OpenAI. "And he started crying... [he said] 'ChatGPT saved my marriage. I didn't know how to talk to my wife, it gave me tips to talk to my wife and I've learned that and we're on a much better path.' It sounds like a dumb and stupid story but it's not, I was there." Press X to doubt. "That's great, we're back to even," cracks Roose, "because a chatbot tried to break up my marriage."
[3]
Sam Altman confronts NYT lawsuit head-on
Sam Altman, OpenAI CEO, and Brad Lightcap, OpenAI COO, appeared at a live recording of the "Hard Fork" podcast in San Francisco, addressing the New York Times lawsuit against OpenAI and other industry challenges. Altman and Lightcap arrived on stage earlier than anticipated at the San Francisco venue, which typically hosts jazz concerts. Kevin Roose, a columnist for The New York Times, and Casey Newton of Platformer, the hosts of the "Hard Fork" podcast, had planned to discuss recent headlines concerning OpenAI before the executives joined them. Altman acknowledged their premature appearance, stating, "This is more fun that we're out here for this." Moments later, Altman initiated a direct query regarding The New York Times lawsuit, asking, "Are you going to talk about where you sue us because you don't like user privacy?" Within minutes of the podcast starting, Altman redirected the conversation to address the lawsuit filed by The New York Times against OpenAI and Microsoft, its primary investor. The lawsuit alleges that OpenAI improperly utilized the publisher's articles to train its large language models. Altman specifically expressed frustration over a recent development in the legal proceedings, where The New York Times' lawyers requested OpenAI to retain consumer ChatGPT and API customer data. Altman stated, "The New York Times, one of the great institutions, truly, for a long time, is taking a position that we should have to preserve our users' logs even if they're chatting in private mode, even if they've asked us to delete them." He concluded this point by adding, "Still love The New York Times, but that one we feel strongly about." For a period, OpenAI's CEO pressed the podcasters for their personal opinions regarding The New York Times lawsuit. Roose and Newton declined to offer an opinion, explaining that as journalists whose work is published by The New York Times, they are not involved in the legal dispute. 
The initial interaction involving Altman and Lightcap's direct engagement lasted only a few minutes, after which the interview proceeded as planned. This early exchange underscored a significant inflection point in the relationship between Silicon Valley and the media industry. Over recent years, several publishers have initiated lawsuits against prominent AI companies, including OpenAI, Anthropic, Google, and Meta, alleging the unauthorized training of their AI models using copyrighted works. These lawsuits generally contend that AI models possess the capacity to devalue, or even replace, copyrighted content produced by media organizations. However, recent legal developments suggest a potential shift in favor of technology companies. Earlier this week, Anthropic, a direct competitor to OpenAI, secured a significant legal victory in its ongoing dispute with publishers. A federal judge ruled that Anthropic's use of books for training its AI models was permissible under specific circumstances. This ruling could have broad implications for other similar lawsuits filed by publishers against OpenAI, Google, and Meta. This outcome may have contributed to Altman and Lightcap's assertive demeanor during their live interview with The New York Times journalists. OpenAI, however, continues to navigate challenges from various sources, a reality that became evident throughout the evening's discussion. Mark Zuckerberg has been actively attempting to recruit top talent from OpenAI, offering compensation packages valued at $100 million to entice them to join Meta's AI superintelligence laboratory. Altman had previously disclosed this recruitment strategy weeks prior during an appearance on his brother's podcast. When questioned by the hosts about whether the Meta CEO genuinely believes in superintelligent AI systems or if this is primarily a recruiting tactic, Lightcap responded, "I think [Zuckerberg] believes he is superintelligent." 
Later in the interview, Roose inquired about the nature of OpenAI's relationship with Microsoft. Reports have indicated increased tensions between the two companies in recent months as they engage in negotiations for a new contract. While Microsoft previously served as a significant accelerator for OpenAI's development, the two entities are now operating as competitors in the enterprise software sector and other domains. Altman acknowledged these dynamics, stating, "In any deep partnership, there are points of tension and we certainly have those." He elaborated further, explaining, "We're both ambitious companies, so we do find some flashpoints, but I would expect that it is something that we find deep value in for both sides for a very long time to come." OpenAI's leadership currently dedicates substantial effort to addressing competitive pressures and ongoing lawsuits. This focus potentially impacts the company's ability to concentrate on broader challenges associated with artificial intelligence, particularly the safe and scalable deployment of highly intelligent AI systems. At one point, Newton posed a question to the OpenAI leaders concerning recent reports of individuals experiencing mental instability utilizing ChatGPT, leading them into dangerous conversational patterns, including discussions about conspiracy theories or suicide with the chatbot. Altman affirmed that OpenAI implements multiple measures designed to prevent such conversations. These measures include prematurely terminating interactions or directing users to professional services where they can access appropriate assistance. Altman stated, "We don't want to slide into the mistakes that I think the previous generation of tech companies made by not reacting quickly enough." 
In response to a follow-up question, the OpenAI CEO acknowledged a remaining challenge, stating, "However, to users that are in a fragile enough mental place, that are on the edge of a psychotic break, we haven't yet figured out how a warning gets through."
Sam Altman and Brad Lightcap of OpenAI address the NYT lawsuit, user privacy concerns, and industry competition during a live recording of the "Hard Fork" podcast, highlighting tensions between AI companies and traditional media.
In a surprising turn of events, OpenAI CEO Sam Altman and COO Brad Lightcap made an unexpected early appearance at a live recording of the "Hard Fork" podcast in San Francisco. The event, hosted by New York Times columnist Kevin Roose and Platformer's Casey Newton, quickly became a platform for Altman to address the ongoing lawsuit between The New York Times and OpenAI [1][2].
Source: TechCrunch
Altman wasted no time in confronting the issue, asking, "Are you going to talk about where you sue us because you don't like user privacy?" [1]. He expressed particular frustration over a recent development in the lawsuit, where The New York Times' lawyers requested OpenAI to retain consumer ChatGPT and API customer data [1][3].
The New York Times lawsuit against OpenAI and Microsoft alleges that the AI company improperly used the publisher's articles to train its large language models [1][2]. Altman argued that the lawsuit's request for data retention goes against user privacy, stating, "The New York Times... is taking a position that we should have to preserve our users' logs even if they're chatting in private mode, even if they've asked us to delete them" [1][3].
In response to Altman's comments, podcast host Kevin Roose delivered a pointed retort: "Well thank you for your views and I'll just say it must be really hard when someone does something with your data you don't want them to" [2]. This exchange highlighted the irony of an AI company built on data scraping now advocating for user privacy.
The interview also touched on other challenges facing OpenAI, including competition from other tech giants. Altman revealed that Mark Zuckerberg has been attempting to recruit top talent from OpenAI with compensation packages valued at $100 million [1][3]. When asked about Zuckerberg's belief in superintelligent AI, Lightcap quipped, "I think [Zuckerberg] believes he is superintelligent" [1][2].
Source: pcgamer
The conversation also addressed OpenAI's evolving relationship with Microsoft, its largest investor. Reports have indicated increased tensions between the two companies as they negotiate a new contract and compete in various domains [1][3]. Altman acknowledged these challenges, stating, "In any deep partnership, there are points of tension and we certainly have those" [1].
Towards the end of the interview, the discussion shifted to the broader implications of AI technology, particularly its impact on mental health. Newton raised concerns about mentally unstable individuals using ChatGPT in potentially harmful ways [1][2][3].
Altman acknowledged the challenge, stating, "We don't want to slide into the mistakes that I think the previous generation of tech companies made by not reacting quickly enough" [1]. However, he admitted that for users "on the edge of a psychotic break, we haven't yet figured out how a warning gets through" [1][2].
The confrontational nature of the interview may have been influenced by recent legal developments favoring AI companies. Earlier in the week, OpenAI competitor Anthropic won a significant legal battle against publishers [1][3]. A federal judge ruled that Anthropic's use of books to train its AI models was legal under certain circumstances, potentially setting a precedent for similar lawsuits against OpenAI, Google, and Meta [3].
As AI technology continues to advance and integrate into various aspects of society, the tension between innovation, privacy, and ethical concerns remains at the forefront of public discourse. The confrontation between OpenAI executives and New York Times journalists underscores the complex challenges facing the AI industry as it navigates legal, ethical, and societal implications of its rapidly evolving technology.
Summarized by Navi