Curated by THEOUTPOST
On Tue, 18 Feb, 12:03 AM UTC
5 Sources
[1]
New York Times Encourages Staff to Create Headlines Using AI
The so-called "paper of record" is now encouraging staff to use generative AI tools to write headlines and summarize articles. As Semafor reports, the New York Times recently informed employees that they now have a whole suite of AI tools at their disposal to write search headlines -- the versions of headlines that appear on search engines like Google -- as well as code, social copy, quizzes, and more.

Along with models from Google, GitHub, and Amazon, NYT staff will also have access to Echo, a bespoke tool currently in beta that's designed to condense articles into shorter summaries. It's unclear whether these tools are the same ones the paper was experimenting with last year, when leaked data revealed that the NYT was already using AI to write headlines.

"Generative AI can assist our journalists in uncovering the truth and helping more people understand the world," the newspaper's new editorial guidelines for AI read, per internal documentation shared with Semafor. Those new guidelines -- which the paper declined to confirm on the record when asked by Semafor's Max Tani -- suggest that employees use the new suite of AI tools to make articles "tighter," write promoted social media posts, and summarize articles "in a concise, conversational voice" for newsletters.

Despite those example use cases, and others shared with staff by the company, the guidelines also warn employees not to use generative AI to write or revise articles, or to input copyrighted material from outside sources. Employees are also barred from using the technology to get around paywalls.

For months, a small internal pilot group of journalists, designers, and machine-learning experts has been, per a May 2024 announcement, "charged with leveraging generative artificial intelligence" in the NYT newsroom. As Tani notes, this new suite of tools is the result of that effort.
News of these updated AI guidelines and the introduction of this suite of tools comes more than a year after the paper announced it was suing OpenAI and Microsoft for copyright infringement -- a claim the larger tech firm scoffed at in counter-filings. Though OpenAI's non-ChatGPT API will also be accessible to NYT staffers, they will only be able to use it with approval from the newspaper's legal department.

Despite the cheery announcement, some of the paper's staff are less than thrilled about the higher-ups' full-throated endorsement of the technology. Employees at the NYT told Semafor that some of their colleagues may be reluctant to use the tools because they're concerned the technology might encourage laziness or a lack of creativity -- or, perhaps more importantly, produce the sort of inaccuracies that generative AI has become known for.

From the outside, it certainly sets an unsettling precedent for such a prestigious paper to embrace AI -- never mind the bizarre optics of the NYT leaning into the tech while it's still locked in a legal battle with OpenAI.
[2]
The New York Times has greenlit AI tools for product and edit staff | TechCrunch
The New York Times is now allowing its product and editorial teams to use AI tools, which might one day write social copy, SEO headlines, and code, reports Semafor. The news came to staff via an email in which the publication announced the debut of Echo, its new internal AI summary tool. The New York Times also shared a suite of AI products that staff could use to build web products or develop editorial ideas, alongside editorial guidelines for using AI tools.

The paper's editorial staff is encouraged to use AI tools to suggest edits, brainstorm interview questions, and help with research. At the same time, staff were warned not to use AI to draft or significantly revise an article, or to input confidential source information. The guidelines also suggest the Times might use AI to implement digitally voiced articles and translations into other languages.

Semafor reports that the Times said it would approve AI programs such as GitHub's Copilot programming assistant for coding, Google's Vertex AI for product development, NotebookLM, some Amazon AI products, and OpenAI's non-ChatGPT API through a business account.

The New York Times's embrace of AI tools comes as it is still embroiled in a lawsuit against OpenAI and Microsoft for allegedly violating copyright law by training generative AI on the publisher's content.
[3]
The New York Times is suing AI companies -- while quietly adopting AI tools
The New York Times has announced plans to implement AI tools for its editorial and product staff, stating that these internal tools could ultimately assist in generating social media copy, SEO headlines, and some coding tasks, reports Semafor. The initiative follows an email sent to newsroom staff revealing the launch of a new in-house AI tool named Echo.

The Times indicated that it is opening up AI training for newsroom personnel and shared a suite of AI applications available for staff use, including GitHub Copilot for coding, Google's Vertex AI for product development, NotebookLM, NYT's ChatExplorer, various Amazon AI products, and OpenAI's non-ChatGPT API -- though the latter requires approval from the company's legal department. The Echo tool is designed to let journalists summarize Times articles, briefings, and interactives.

Editorial staff are encouraged to use these AI tools for a variety of tasks, including creating SEO headlines, summaries, and audience promotions, suggesting edits, brainstorming questions, and conducting research on the Times's own documents and images. A mandatory training video shared with staff proposed using AI to generate interview questions for startup CEOs and suggested possible uses for developing news quizzes, social media posts, quote cards, and FAQs. The editorial guidelines shared with staff included specific prompts, such as: "How many times was AI mentioned in these episodes of Hard Fork?" and "Can you summarize this federal government report in layman's terms?"

However, the Times emphasized certain restrictions on AI usage, warning staff against drafting or significantly revising articles with AI, entering third-party copyrighted materials, bypassing paywalls, or publishing machine-generated images or videos without proper labeling and context.
Despite the enthusiasm the company has expressed about the potential benefits of generative AI, such as improving accessibility through features like digitally voiced articles and translations, some employees voiced skepticism. Concerns included the potential for AI to encourage lazy or unoriginal work, as well as fears it might generate inaccuracies. Tensions remain between AI companies and some Times staff, especially after the CEO of the AI company Perplexity suggested using AI tools to replace workers during a previous strike by tech employees at the Times.

Currently, the New York Times is embroiled in a legal dispute with OpenAI, accusing the company of unauthorized use of its content for training purposes, which the Times claims constitutes significant copyright infringement. Microsoft, OpenAI's largest investor, has publicly stated that the Times is attempting to hinder technological innovation.
[4]
The New York Times approves AI tools to assist journalists
In a nutshell: The New York Times is giving its editorial and product staff the green light to use select generative AI tools to enhance their work and make their jobs easier. However, just because the tools are available doesn't mean they will be adopted en masse.

In documents and videos seen by Semafor, The Times outlined how staffers should and shouldn't use artificial intelligence. For example, employees are encouraged to use tools like GitHub Copilot for code creation, Google Vertex AI to help with product development, and certain AI tools from Amazon to craft quizzes, social copy, and FAQ entries. NYT journalists are also permitted to use AI to help tighten up paragraphs, create summaries of articles for inclusion in newsletters, suggest edits, and brainstorm search-optimized headlines. The publication even created its own AI-based summarization tool, Echo, to help condense content.

The guidelines note that the publication views AI not as a magical solution but, like previous advances, as a powerful tool to be used in service of its mission. Language translation and digitally voiced articles could make The Times more accessible than ever, and in the future, generative AI may be used in ways we haven't yet conceived.

The Times has installed guardrails to help prevent misuse. In addition to requiring a mandatory training video, the paper prohibits staffers from using AI tools to draft or significantly revise articles. What's more, staff members aren't allowed to use AI-generated images or videos in stories and should watch for suggestions that could inadvertently reveal protected sources.

Not everyone is sold on the concept. According to Semafor, some employees expressed concern that using AI could inspire laziness or generate inaccurate information that would hinder the creative process. As such, The Times doesn't expect universal adoption out of the gate.
Generative AI in journalism can be a slippery slope, but it should be noted that The Times' guidelines are in line with standard industry practices and largely mirror our own ethics policy on the subject.
[5]
The New York Times adopts AI tools in the newsroom
Staff were reportedly sent new editorial guidelines detailing permitted uses for Echo and other AI tools, which encourage newsroom employees to use them to suggest edits and revisions for their work, and generate summaries, promotional copy for social media, and SEO headlines. Other examples mentioned in a mandatory training video shared with staff include using AI to develop news quizzes, quote cards, and FAQs, or suggesting what questions reporters should ask a start-up's CEO during an interview. There are restrictions, however -- the company told editorial staff that AI shouldn't be used to draft or significantly alter an article, circumvent paywalls, input third-party copyrighted materials, or publish AI-generated images or videos without explicit labeling.
The New York Times introduces AI tools for its editorial and product staff, sparking discussions about the role of AI in journalism and raising questions about the newspaper's ongoing lawsuit against OpenAI.
The New York Times, one of the world's most prestigious newspapers, has taken a significant step towards integrating artificial intelligence into its newsroom operations. According to recent reports, the publication has introduced a suite of AI tools for its editorial and product staff, marking a new era in the intersection of journalism and technology [1][2].
At the heart of this initiative is Echo, a proprietary AI tool developed by the Times. Echo is designed to condense articles into shorter summaries, potentially revolutionizing how content is presented to readers [1]. In addition to Echo, the Times is providing access to various other AI tools, including:

- GitHub Copilot for coding assistance
- Google's Vertex AI for product development
- NotebookLM
- NYT's ChatExplorer
- Selected Amazon AI products
- OpenAI's non-ChatGPT API, subject to approval from the legal department
The Times has outlined specific guidelines for the use of these AI tools. Staff are encouraged to use them for:

- Suggesting edits and tightening paragraphs
- Generating SEO headlines, summaries, and promotional social copy
- Brainstorming interview questions
- Conducting research on the Times's own documents and images
- Developing news quizzes, quote cards, and FAQs
However, the guidelines strictly prohibit using AI to draft or significantly revise articles, input copyrighted material from outside sources, or bypass paywalls [1][5].
The adoption of AI tools is seen as a way to enhance the Times' journalistic capabilities. Potential future applications include:

- Digitally voiced articles
- Translations of articles into other languages
Despite the enthusiasm from management, some staff members have expressed concerns. These include:

- Worries that AI could encourage laziness or a lack of creativity
- Fears that the tools might introduce the inaccuracies generative AI has become known for
Interestingly, this embrace of AI technology comes while the New York Times is engaged in a lawsuit against OpenAI and Microsoft. The newspaper accuses these companies of copyright infringement, alleging unauthorized use of its content for AI training [2][3]. This situation has created a paradoxical stance, with the Times adopting AI tools while simultaneously fighting against certain AI practices in court.
The New York Times' move to incorporate AI into its newsroom operations could set a precedent for the journalism industry. As one of the most influential newspapers globally, its approach to AI integration may inspire other publications to follow suit, potentially reshaping the landscape of modern journalism [4][5].