Curated by THEOUTPOST
On Sat, 27 Jul, 12:05 AM UTC
2 Sources
[1]
Microsoft, OpenAI Chase Google in AI Search as Senate Passes AI Deepfakes Bill
Member of the board, UCLA Daily Bruin Alumni Network; advisory board, Center for Ethical Leadership in the Media

Ask anyone who has more than a passing knowledge of how generative AI chatbots like ChatGPT work and they'll tell you that you shouldn't use these tools the way you would a search engine, which can point you to original sources and lists of articles, websites, videos and images related to the topic you're seeking out. That's because chatbots are more akin to summarization engines -- they use the vast amounts of training data they've slurped up from wherever to provide you with an answer to a question or prompt (with the caveat that their training data might not include the latest or most reliable sources, and that some or all of the answer might be made up, since chatbots have a problem with hallucinations). Those limitations aside, AI companies think users will be most interested in a service that not only provides AI-generated answers to your prompts but also works like a search engine, providing links to the hopefully relevant and reliable sources those answers are based on. That's the reason Google, in May, made a big splash about how it's building AI features into all its services, including adding AI-powered summaries to its popular, profitable and market-dominant search engine. It's also the reason Microsoft last week announced an experiment with AI summaries in its Bing search engine, and why OpenAI, maker of ChatGPT, unveiled a prototype called SearchGPT that it says aims to answer questions based on more timely information and will include links to relevant sources. After adding gen AI-powered chat answers to Bing last February, Microsoft announced a new experience called Bing generative search, which it says uses gen AI technology to create results pages with summarized answers that are "easy to read and understand, with links and sources that show where it came from," according to a company blog post.
Regular search results will continue to be "prominently displayed on the page like always," Microsoft added. "By combining the power of generative AI and large language models (LLMs) with the search results page, Bing's generative search creates a bespoke and dynamic response to a user's query," Microsoft says. As an example, if you search for "What is a spaghetti western?" with Bing, you'll now get an "AI-generated experience that dives into the film subgenre, including its history and origins, top examples and more." Just one thing: You might not get that bespoke AI experience, since it will only be available for a "small percentage" of user queries at first, CNET's Ian Sherr reports. OpenAI's SearchGPT will also only be available on a limited basis. You'll need to sign up for the waitlist for the product, which the company eventually expects to tie into ChatGPT, CNET's Gael Cooper reports. OpenAI said SearchGPT, which it calls a "temporary prototype," will be limited to 10,000 users and publishers at launch. "Getting answers on the web can take a lot of effort, often requiring multiple attempts to get relevant results," OpenAI said in a blog post. "By enhancing the conversational capabilities of our models with real-time information from the web, finding what you're looking for can be faster and easier." The company says it's testing SearchGPT and will be working with publishers to get feedback. Reminder: some publishers have agreed to license their content to OpenAI and help train its LLM, while other publishers, including The New York Times, have sued the San Francisco startup for slurping up their copyrighted stories without permission, compensation or attribution in ChatGPT's answers. OpenAI tacitly acknowledged the attribution issue, saying in its blog post that it's working with "publishers by prominently citing and linking to them in searches. 
Responses have clear, in-line, named attribution and links so users know where information is coming from and can quickly engage with even more results in a sidebar with source links." Google, meanwhile, isn't standing still as its tech rivals come after its cash cow. Shortly after debuting summaries called AI Overviews on its search results page, Google scaled back the feature after users started getting misleading, incorrect and flat-out bizarre answers from the gen AI search tool. (In response to a question about daily rock intake, it recommended we eat "at least one small rock per day." The source of that advice: a 2021 story in The Onion.) Last week Google announced a few updates to its gen AI service, Gemini, including offering links to related content within Gemini's responses. "We've also heard that people enjoy exploring new topics with Gemini as their go-to research partner. So we're now displaying links to related content for fact-seeking prompts in Gemini -- continuing our work to reduce hallucinations and making it easier to explore websites that help you learn more," the company said in a blog post. To help tackle the hallucination problem -- which Google defines in part as a problem with the training data (though it won't tell us what's in its training data) -- the company also announced a double-check feature in Gemini that "verifies responses by using Google Search to highlight which statements are corroborated or contradicted on the web. For information that is contradicted, you can easily dive deeper to verify the information yourself." What are the takeaways from the AI search war so far? Well, there's the simple fact that gen AI is going to become a bigger part of the tech services and tools we use every day, whether you want AI-generated summaries or not. 
And while we'll start seeing attribution, in the form of links, to the sources these AI services are citing, we may need to fact-check those sources more than we did in the good old days of search to determine whether the information is reputable and fact-based, given the hallucination problem. Or not. Either way, the search for new ways to add AI to search is on. Here are the other doings in AI worth your attention. Sam Altman, CEO of OpenAI, penned an essay in The Washington Post last week urging the US to lead a global coalition of democratic societies that will work together on policies and investments to counter the threat from authoritarian regimes, including China, that are touting a different, darker vision for how AI may be deployed. "Who will control the future of AI?" Altman writes. "Will it be one in which the United States and allied nations advance a global AI that spreads the technology's benefits and opens access to it, or an authoritarian one, in which nations or movements that don't share our values use AI to cement and expand their power?" Altman says the public and technology sectors in the US and like-minded countries need to get "four big things right to ensure the creation of a world shaped by a democratic vision for AI." His call to action comes after China showed off technology at the World Artificial Intelligence Conference in Shanghai this month that "rival[s] American systems" and is already in the hands of consumers and developers, according to The New York Times. Those four things include crafting "robust security measures," including cyber defense and data security innovations, to "prevent hackers from stealing key intellectual property such as model weights and AI training data." 
He also calls on the US to invest more in infrastructure, including broadband networks, data centers and power plants to run AI systems, and to develop commercial policies and "rules of the road for what sorts of chips, AI training data and other code -- some of which is so sensitive that it may need to remain in the United States -- can be housed in the data centers that countries around the world are racing to build to localize AI information." Last, Altman asks that the US think "creatively" about new models for developing and deploying AI, including making sure there's a role in AI development for the "global south and other nations who have been historically left behind." There's a lot more in his essay, with Altman telling Axios in an interview that he's not seeing the "level of seriousness" and sense of urgency from Washington or the states. "The future continues to come at us fast," he told the news outlet. I'm sure we can all agree these issues merit serious discussion and that the US has an important role to play in the future of AI, given the investments and innovation coming from US companies like OpenAI, Google, Meta, Microsoft, Apple and Anthropic. But I hope Altman will also answer questions about how OpenAI is handling issues around AI safety and security, given the exit of some high-profile executives working on AI risks and safety, and reports by CNBC and The Information that the company's top safety executive was reassigned to a job focused on AI reasoning instead of leading a team that was "tasked with tracking, evaluating, forecasting and helping protect against catastrophic risks" related to AI. Five US senators -- four Democrats and one Republican -- also want Altman to answer questions about how OpenAI is handling safety, according to a July 22 letter sent to him and obtained by The Washington Post. 
Given the staff exits, and a letter penned by current and former employees of OpenAI and other AI companies warning of an AI apocalypse, the group asked that Altman provide information "about the steps that the company is taking to meet its public commitments on safety, how the company is internally evaluating its progress on those commitments, and on the company's identification and mitigation of cybersecurity threats." The senators have asked OpenAI to respond to 12 questions by Aug. 13. "Given OpenAI's position as a leading AI company, it is important that the public can trust in the safety and security of its systems. This includes the integrity of the company's governance structure and safety testing, its employment practices, its fidelity to its public promises and mission, and its cybersecurity policies." Stay tuned. The Defiance Act, a federal bill that will "allow victims of nonconsensual sexually explicit deepfakes to sue people who create, share and receive them," has unanimously passed the Senate and now moves to the House for a vote, NBC News reported last week. The Defiance Act, short for the Disrupt Explicit Forged Images and Non-Consensual Edits Act of 2024, is the first nationwide protection for "survivors of nonconsensual deepfake pornography," Rep. Alexandria Ocasio-Cortez, who is sponsoring the bill in the House of Representatives and who has been a victim of deepfake images, said in a statement. "Over 90% of all deepfake videos made are nonconsensual sexually explicit images, and women are the targets 9 times out of 10," she said. NBC noted that the production of nonconsensual sexually explicit deepfakes has "skyrocketed" in the past year. 
While female celebrities, influencers and public figures were among the first targeted, including some horrific deepfake photos of singer Taylor Swift, "cases have also sprung up at middle and high schools around the world, with teen girls frequently being victimized by their male classmates," NBC noted. In case you don't know, deepfakes, added NBC, "typically refer to digitally manipulated images that falsely depict someone saying or doing something. They often take the form of sexually explicit photos and videos spread online. The material frequently merges a victim's face with a body in a pornographic video." Senate Judiciary Chair Dick Durbin, who introduced the bill along with Senator Lindsey Graham, said in a post on X after the vote that "Current laws don't apply to deepfakes, leaving women and girls who suffer from this image-based sexual abuse without a legal remedy. It's time to give victims their day in court and the tools they need to fight back." You don't need a degree in computer science to go after one of the new jobs expected to be created by the AI boom, writes CNET's Lisa Lacy. She noted that from December 2022 to September 2023, applications for AI jobs in the US surged 19% on LinkedIn. While there will definitely be opportunities for AI software engineers to build and protect AI systems, companies are also seeking out AI ethicists, people whose job it is to evaluate "AI's legal, moral and social implications to implement guidelines for AI's development and use in an organization, as well as its potential impact on the outside world," she writes. As for the jobs that will be most affected, bank tellers, travel agents, customer service reps, writers (ouch) and marketers may be among the most displaced as companies turn to AI tools to handle that work. 
No one knows for sure how jobs of the future will be affected by AI, so Lacy offers up a quick history lesson on how the evolution of the fashion industry served as a driver for new kinds of jobs. "Prior to the Industrial Revolution, people typically had two outfits -- one for work and one for church. However, following the advent of industrial spinning and fabric making after the Civil War, clothing started to become about more than just staying warm. It became more common for individuals to have closets full of clothes, which led to variety, which spurred style and creativity, which helped develop fields like design, advertising, modeling and magazine publishing." When it comes to AI, I'm hoping we'll see a resurgence in jobs for fact checkers. ChatGPT backs me up here, telling me that "the adoption of AI technology does have the potential to lead to a resurgence in jobs for fact checkers. While AI can automate many tasks and processes, there are limitations to what AI can effectively fact-check, especially when it comes to context, nuance, and subjective content." (As a reminder, this column was entirely researched, written, copy edited and fact checked by humans.) While AI has the potential to boost human creativity and free us from the drudgery of repetitive tasks, the AI industry continues to cope with some serious growing pains. Here are three items that caught my eye this past week. AIs are mathematically challenged. While gen AI chatbots and tools have shown some prowess in writing poetry, wedding vows, thank you notes, report summaries and answers to your questions (or prompts), they have a hard time doing math, reports The New York Times. The reason: AI chatbots -- which have been described as autocomplete on steroids because they provide answers based on identifying what they see as patterns in billions of bits of training data -- weren't really designed to do math. 
"They are fine-tuned for determining probabilities, not doing rules-based calculations," writes NYT reporter Steve Lohr. "Likelihood is not accuracy, and language is more flexible, and forgiving, than math." The takeaway, he adds, may just be that "the world's smartest computer scientists, it seems, have created artificial intelligence that is more liberal arts major than numbers whiz." Video game actors to strike. A year after Hollywood writers and actors ended their strikes with studios and the entertainment industry by wresting some concessions related to the rollout of AI, video game voice actors and animation artists are "going on strike for the first time since 2017" after nearly two years of negotiations "with Activision, Epic Games, and other big publishers and studios over higher pay, better safety measures, and protections from new generative AI technologies," reports Kotaku.com. According to the Associated Press, the Screen Actors Guild-American Federation of Television and Radio Artists said that while there has been progress in negotiations over wages and job safety, studios won't negotiate on gen AI in video game contracts. "Without guardrails, game companies could train AI to replicate an actor's voice, or create a digital replica of their likeness without consent or fair compensation," the union told The AP. The video game contract covers over 2,500 "off-camera (voiceover) performers, on-camera (motion capture, stunt) performers, stunt coordinators, singers, dancers, puppeteers, and background performers," The AP said, citing SAG-AFTRA. Blocking bots is not a copyright fix. The CEO of iFixit, a site that offers repair information on "hundreds of devices and thousands of parts," tweeted last week that a web scraper from Anthropic, the AI company behind Claude, had "hit" iFixit servers "a million times in 24 hours." "You're not only taking our content without paying," wrote CEO Kyle Wiens, "you're tying up our devops resources. Not cool." 
Wiens noted that under iFixit's terms of service, use of the company's content for training an AI LLM, like Claude, is "expressly forbidden." He added that if Anthropic wanted to "have a conversation about licensing our content for commercial use, we're right here." 404 Media asked Anthropic about the iFixit scraping and was referred to a blog post saying that it's up to website owners to "specifically block Anthropic's web crawler, called ClaudeBot." Anthropic basically says in its post that anything made public on the internet is up for grabs "as per industry standard... Our crawling should not be intrusive or disruptive. We aim for minimal disruption by being thoughtful about how quickly we crawl the same domains and respecting Crawl-delay where appropriate." It then offers up code that website owners, like publishers, can use to "limit crawling activity." I don't know about you, but I think hitting a website a million times in a day is intrusive and disruptive. And I don't know who decided the industry standard is that you have to protect yourself against content scrapers rather than assume they can't just take your content. (FYI, CNET blocks bots that scrape our content to train their LLMs.) I'll give the final word to 404 Media: "Across the board, AI companies almost never respect terms of service, which is interesting because many of them have very long terms of service agreements themselves that sometimes restrict what users can do. In a paper published last week...researchers at the Data Provenance Initiative found that many websites have requested that their content not be scraped in their terms of service, but that often does not do anything."
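For context, the "code" Anthropic points site owners to is the kind of directive that goes in a robots.txt file at a site's root, the standard mechanism well-behaved crawlers check before fetching pages. The snippet below is a minimal sketch based on the ClaudeBot user-agent name and the Crawl-delay directive mentioned above; Anthropic's own documentation may recommend different or additional rules, and crawlers that ignore robots.txt won't be stopped by it.

```text
# robots.txt -- minimal sketch (assumed directives; check the
# crawler operator's documentation for current recommendations)

# Block Anthropic's crawler from the entire site:
User-agent: ClaudeBot
Disallow: /

# Alternative: allow crawling but request a pause between fetches.
# Crawl-delay is a nonstandard extension that some crawlers honor.
# User-agent: ClaudeBot
# Crawl-delay: 10
```

Note that this is purely opt-out and voluntary on the crawler's side, which is exactly the asymmetry Wiens and 404 Media are objecting to.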
[2]
New AI models flood the market even as AI takes fire from regulators, actors and researchers - SiliconANGLE
Vice President Kamala Harris' presumed ascension to the Democratic presidential nomination following President Joe Biden's withdrawal this week certainly scrambled the race, but it also set everyone in tech wondering what a Harris administration might mean. Bottom line: Nobody knows -- and of course there's no guarantee she will win the election come November, despite some reenergized supporters in the industry. But it's likely that there will be some changes in tech policy if she wins. She seems likely to take a middle ground on artificial intelligence, which is to say more or less in line with Biden's policies. On other key issues, it's not so clear, even if her California and Bay Area bona fides suggest she is more amenable to policies that encourage innovation. No doubt we'll hear a lot more very soon. It was another huge week for generative AI, as the march of AI models accelerated thanks to new ones from, well, just about everyone in the market. Big fundings also continued, with more than a half-billion dollars each to Cohere and even a Chinese company, Baichuan. Moreover, AI is boosting the earnings of everything from chipmakers such as Texas Instruments and SK Hynix to enterprise stalwarts such as IBM and SAP, but it didn't help Alphabet enough given slowing ad growth. Speaking of IPOs, OneStream had a successful one this week, but will it be enough to restart offerings? In any case, next week will provide a huge read on tech sentiment going forward, with earnings reports due from Microsoft, AMD, Samsung, Arm, Qualcomm, Meta, Intel, Amazon, Apple, Cloudflare and more. And next Tuesday and Wednesday, July 30 and 31, we'll be running our own editorial event, Supercloud 7: Get Ready for the Next Data Platform. 
Free online and on-demand afterwards, it will feature high-octane speakers such as Databricks CEO Ali Ghodsi, Snowflake co-founder and President of Products Benoit Dageville, data mesh guru and Nextdata CEO Zhamak Dehghani, VAST Data CEO Renen Hallak and many more from Google, Microsoft, Walmart and a raft of startups. Register here.

SiliconANGLE and theCUBE Research analysts John Furrier and Dave Vellante discuss this and other news in more detail on theCUBE Pod, now out on YouTube. And don't miss Vellante's weekly deep dive, Breaking Analysis, out this weekend.

Here's the big news of the week from SiliconANGLE and beyond:

Alibaba-backed Chinese AI startup Baichuan raises $691M
AI model developer startup Cohere raises $500M at $5.5B valuation
IntelePeer raises $140M to automate customer service with generative AI
AI call center startup Level AI raises $39.4M to boost innovation
Lakera AI raises $20M to ward off malicious prompts on generative AI models
Splight raises $12M to unlock the potential of renewable energy sources with AI
AI trust startup Vijil raises $6M to prevent AI agents saying the wrong things
Chekable bags funding from NEC to streamline patent applications with generative AI

New services, and a whole lot of new models:

OpenAI debuts AI-powered SearchGPT search engine
Google DeepMind unveils AI models for solving advanced mathematical problems
Meta introduces Llama 3.1, its biggest and best open-source AI model to date
Google's free Gemini chatbot gets 1.5 Flash update, making responses faster and smarter
Nvidia works with Accenture to pioneer custom Llama large language models
Stability AI introduces Stable Video 4D, its new AI model for 3D video generation
Microsoft debuts AI-powered summaries in Bing search engine
Researchers develop new technique for squeezing full-fat AI models into PCs and smartphones
AWS launches generative AI-powered feature for Connect Contact Lens to help agents in call centers
Iterative debuts DataChain for curating and processing unstructured data with AI models
Dialpad releases enhanced 'Ai Sales' platform to streamline sales processes
Researchers find that AI-generated web content could make LLMs less accurate
Reddit blocks Bing, several other search engines from indexing its platform on AI training concerns
In latest AI training drama, Runway accused of using publicly available YouTube videos
Video game actors to go on strike following breakdown in AI talks with game developers
FTC will launch probe into AI-driven surveillance pricing

There's more AI and big data news on SiliconANGLE.

Alphabet beats the Street but slowing ad growth drags on stock price
Increased AI bookings help IBM deliver solid earnings and revenue beat, pushing its stock higher
SAP posts solid results and raises profit target for 2025
NXP Semiconductor's stock falls on weak outlook
Tesla shares drop 6% on mixed results as price-cutting drives down automotive revenue
Texas Instruments stock rises after it posts earnings beat
SK Hynix second-quarter profit surges on AI chip sales (per the Wall Street Journal)
Juniper disappoints with earnings that fall well short of forecasts
OneStream shares pop 34% on Nasdaq debut in positive sign for tech IPOs
Legal software startups Clio and Harvey raise nine-figure funding rounds
QA Wolf raises $36M in fresh funding to ease application reliability testing
Intel says patch is coming to fix processor instability problem plaguing gamers for months
Putting pressure on Google, Apple Maps is now available on the web
Wiz reportedly ends $23B acquisition talks with Google
Compliance automation and trust management startup Vanta raises $150M
Chainguard nabs $140M to secure enterprise applications' open-source components
Unified security remediation platform startup Dazz raises $50M to enhance capabilities
Linx Security raises $33M for its identity security platform
Heeler Security raises $8.5M to advance real-time application security platform
AI-powered cloud risk resolution startup ZEST Security launches with $5M in funding
CrowdStrike reveals cause of faulty update that led to Windows crashes
But at least people got a $10 Uber Eats gift card out of it... 🙄
Microsoft reveals only 8.5M Windows computers were affected by CrowdStrike outage
US grand jury indicts North Korean hacker for role in Andariel cyberattacks
Stolen internal documents from Pentagon contractor Leidos leaked online
NetRise study warns that network equipment vulnerabilities far exceed previous estimates
Cisco Talos: Business email compromise and ransomware topped threats in second quarter
Now-patched Telegram for Android vulnerability exposed users to malicious videos
New PlugX RAT campaign distributed through USB drives targets Steam users
New Nozomi Networks security sensor provides extended visibility into industrial control systems
New Secure Code Warrior agent enhances secure-by-design approach for application security teams

More cybersecurity news here.

Google no longer plans to end support for third-party cookies in Chrome
EU finds that Meta's ad-free tier may be in breach of consumer protection rules
White House task force releases first-of-its-kind online safety report aimed at protecting kids
EU reportedly set to fine Meta over allegedly anticompetitive Marketplace Facebook integration
Electric autonomous tractor company Monarch Tractor raises $133M
Warehouse robotics automation firm Mytra launches with $78M to change how materials move
Caldera raises $15M to help developers deploy their own scaling for blockchain networks
Data security firm Bedrock Security has a new CEO: Bruno Kurtic, co-founder and former chief strategy officer of Sumo Logic.

July 30: Supercloud 7: Get Ready for the Next Data Platform: Our latest Supercloud editorial event will look at the foundation for the new AI era, with lots of executives, practitioners, analysts and entrepreneurs from startups to established companies. Here are some of the topics we'll be exploring. And get up to speed with our special section ahead of the event.

July 29: F5 Networks
July 30: Microsoft, AMD, Samsung, Arista, Informatica, Commvault, Freshworks and PayPal
July 31: Arm, Qualcomm, Meta, Tenable and Western Digital
Aug. 1: Intel, Amazon, Apple, Cloudflare, Mobileye, Twilio, Altair, Block, Coinbase and Snap
As tech giants race to integrate AI into search engines, the US Senate passes a bill on AI deepfakes. Meanwhile, new AI models flood the market amid growing concerns from regulators, actors, and researchers.
In a rapidly evolving technological landscape, Microsoft and OpenAI are intensifying their efforts to challenge Google's dominance in the search engine market. The focus of this competition is the integration of advanced artificial intelligence capabilities into search functionalities. Microsoft's Bing search engine, powered by OpenAI's ChatGPT technology, is at the forefront of this AI-driven transformation [1].
Google, not to be outdone, has responded with its own chatbot, Gemini (formerly Bard), and is incorporating AI features into its search results. This AI arms race is reshaping the way users interact with search engines, promising more intuitive and conversational search experiences.
As AI technology advances, concerns about its potential misuse have prompted legislative action. The US Senate has passed a bill addressing the issue of AI-generated deepfakes [1]. This legislation aims to combat the spread of misleading and potentially harmful AI-created content, highlighting the growing need for regulatory frameworks in the AI space.
The AI industry is experiencing a surge in new model releases, with various companies and organizations introducing their own AI solutions. This flood of new models is diversifying the AI landscape and offering users a wider range of options for different applications [2].
As AI technology continues to advance and proliferate, it faces increasing scrutiny from various stakeholders:
Regulators: Government bodies are grappling with the need to establish guidelines and regulations for AI development and deployment.
Actors: The entertainment industry, particularly performers, has raised concerns about the potential impact of AI on the profession, including issues related to likeness rights and job security.
Researchers: AI experts and academics are calling for responsible development practices and emphasizing the importance of addressing potential risks associated with advanced AI systems [2].
The rapid advancements in AI technology present both opportunities and challenges. While companies like Microsoft, OpenAI, and Google push the boundaries of what's possible with AI-powered search and other applications, there is a growing recognition of the need for responsible development and deployment.
The passage of the AI deepfakes bill by the US Senate underscores the importance of addressing potential misuse of AI technology. As the AI landscape continues to evolve, finding the right balance between innovation and regulation will be crucial for harnessing the benefits of AI while mitigating its risks.
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved