Curated by THEOUTPOST
On Tue, 17 Sept, 12:04 AM UTC
7 Sources
[1]
Omnipresent AI cameras will ensure good behavior, says Larry Ellison
"We're going to have supervision," says billionaire Oracle co-founder Ellison. On Thursday, Oracle co-founder Larry Ellison shared his vision for an AI-powered surveillance future during a company financial meeting, reports Business Insider. During an investor Q&A, Ellison described a world where artificial intelligence systems would constantly monitor citizens through an extensive network of cameras and drones, stating this would ensure both police and citizens don't break the law. Further Reading Ellison, who briefly became the world's second-wealthiest person last week when his net worth surpassed Jeff Bezos' for a short time, outlined a scenario where AI models would analyze footage from security cameras, police body cams, doorbell cameras, and vehicle dash cams. "Citizens will be on their best behavior because we are constantly recording and reporting everything that's going on," Ellison said, describing what he sees as the benefits from automated oversight from AI and automated alerts for when crime takes place. "We're going to have supervision," he continued. "Every police officer is going to be supervised at all times, and if there's a problem, AI will report the problem and report it to the appropriate person." The 80-year-old billionaire also predicted that AI-controlled drones would replace police vehicles in high-speed pursuits. "You just have a drone follow the car," he explained. "It's very simple in the age of autonomous drones." Ellison co-founded Oracle in 1977 and served as CEO until he stepped down in 2014. He currently serves as the company's executive chairman and CTO. Sounds familiar While Ellison attempted to paint his prediction of universal public surveillance in a positive light, his remarks raise significant questions about privacy, civil liberties, and the potential for abuse in a world of ubiquitous AI monitoring. 
Ellison's vision bears more than a passing resemblance to the cautionary world portrayed in George Orwell's prescient novel 1984. In Orwell's fiction, the totalitarian government of Oceania uses ubiquitous "telescreens" to monitor citizens constantly, creating a society where privacy no longer exists and independent thought becomes nearly impossible.

But Orwell's famous phrase "Big Brother is watching you" would take on new meaning in Ellison's tech-driven scenario, where AI systems, rather than human watchers, would serve as the ever-vigilant eyes of authority. Once considered a sci-fi trope, such automated systems are already becoming a reality: similar automated CCTV surveillance has been trialed on the London Underground and at the 2024 Olympics.

China has been using automated systems (including AI) to surveil its citizens for years. In 2022, Reuters reported that Chinese firms had developed AI software to sort data collected on residents through a network of surveillance cameras deployed across cities and rural areas as part of China's "Sharp Eyes" campaign from 2015 to 2020. This "one person, one file" technology reportedly organizes the collected data on individual Chinese citizens, leading to what The Economic Times called a "road to digital totalitarianism."

Begging for GPUs

Ellison's predicted AI-driven surveillance would rely on powerful hardware, but shortages of AI-acceleration components like GPUs could slow it down. During the same Q&A last week, Ellison recounted a dinner with Elon Musk and NVIDIA CEO Jensen Huang that he characterized as "me and Elon begging Jensen for GPUs," highlighting the intense demand for the chips. Ellison claimed they pleaded with Huang, saying, "Please take our money... we need you to take more of our money."

Surveillance isn't the only application of AI that Ellison is excited about.
As Big Tech companies race to inject AI models into what seems like every conceivable application, whether good or bad, Oracle has been no exception, recently launching several AI initiatives, like a partnership with Musk's SpaceX to bring AI to farming. The billionaire predicted during the meeting that over the next five years, companies will invest upwards of $100 billion in building and training AI models, emphasizing the "astronomical" scale of the AI race.
[2]
Larry Ellison predicts rise of the modern surveillance state where 'citizens will be on their best behavior'
George Orwell's 1984 warned of a future where Big Brother watches every move. Today, modern technology is making that vision a reality, and Oracle founder Larry Ellison -- the world's second-richest person -- sees a growing opportunity for his company to help authorities analyze real-time data from millions of surveillance cameras.

"Citizens will be on their best behavior, because we're constantly recording and reporting everything that is going on," Ellison said in an hour-long Q&A during Oracle's Financial Analyst Meeting last week. Fortune reached out to Oracle for clarification, but officials did not respond.

The world Ellison described sounds eerily similar to China's social credit system, which controls citizens' behavior through a network of cameras using some of the world's most advanced facial recognition software to surveil the populace. Nevertheless, his prediction may already be here in a sense. Nowadays, it's commonplace for Americans to whip out their smartphones and film fellow citizens before uploading the footage to social media. Some confrontations go viral, with outraged users often attempting to doxx individuals and even coordinating pressure campaigns to get them fired.

This risk may not even be confined to those caught acting poorly in public. Elon Musk, for example, is being sued for defamation by Jewish student Ben Brody after Musk claimed on social media that Brody was a neo-Nazi (Musk's attempt to have the case dismissed on the grounds of constitutionally protected free speech was overruled by a court in May).

In Ellison's view, however, citizens and state security organs like the police will be on an equal playing field: he expects law enforcement officers will also be subject to constant surveillance. "Every police officer is going to be supervised at all times, and if there's a problem, AI will report that problem," he said, in comments cited by TechCrunch.
Whether this kind of 24/7 surveillance of everyone and everything means law enforcement will automatically be on their best behavior is, however, debatable. George Floyd's murderer, former Minneapolis police officer Derek Chauvin, was convicted largely because he was filmed in broad daylight for more than nine minutes as he knelt on Floyd's neck. While the footage was shot close enough to Chauvin to suggest he was aware of being filmed, he may not have known for certain.

That uncertainty doesn't apply to Miami-Dade officer Danny Torres, who was suspended from duty after footage from his own colleagues' bodycams showed he used unnecessary and excessive force when arresting NFL player Tyreek Hill for speeding. The Miami-Dade police department apologized, saying Torres's actions "clearly do not meet the standard we expect from law enforcement."
[3]
Oracle's Larry Ellison Predicts Police Officers Will Be Supervised 'At All Times' And Citizens Will Be On Their Best Behavior Thanks To Constant AI-Powered Surveillance - Oracle (NYSE:ORCL)
Speaking at an Oracle Corp. ORCL financial analysts meeting, Larry Ellison, co-founder and CTO of the tech giant, predicted that artificial intelligence will eventually power extensive law enforcement surveillance systems.

What Happened: "Every police officer is going to be supervised at all times, and if there's a problem, AI will report that problem and report it to the appropriate person," said Ellison. He added that citizens would be on their best behavior due to constant recording and reporting. Ellison believes that AI-driven continuous surveillance could significantly reduce crime rates. However, a Washington Post report notes that police data in the U.S. has historical biases, which could lead AI models to suggest higher criminal activity in certain areas, creating racially and socioeconomically biased feedback loops.

Why It Matters: The conversation around AI's role in law enforcement is part of a broader dialogue on AI's impact on society. Recently, OpenAI launched its 'o1' model, a significant step toward human-like AI that can answer complex questions faster than a human. This advancement highlights both the potential and the limitations of AI technology.

Microsoft Corp. co-founder Bill Gates also discussed AI's implications for the workforce in a podcast episode with OpenAI CEO Sam Altman, questioning whether AI could support blue-collar jobs. Recent studies show that 37% of companies have replaced staff with AI technology, and 44% predict AI will lead to layoffs in 2024. Additionally, AI's influence extends to personal finance, with nearly 40% of Americans using AI for financial planning, according to a recent Ipsos poll.
This story was generated using Benzinga Neuro and edited by Kaustubh Bagalkote.
[4]
Billionaire Drools That "Citizens Will Be on Their Best Behavior" Under Constant AI Surveillance
"Citizens will be on their best behavior because we are constantly recording and reporting everything that's going on." If it were up to Larry Ellison, the exorbitantly rich cofounder of software outfit Oracle, all of us will soon be smiling for the camera -- constantly. Not for a cheery photograph, but to appease our super-invasive, if not totally omnipresent, algorithmic overseers. As Business Insider reports, the tech centibillionaire glibly predicts that the wonders of AI will bring about a new paradigm of supercharged surveillance, guaranteeing that the proles -- excuse us, "citizens" -- all behave and stay in line. "We're going to have supervision," Ellison said this week at an Oracle financial analysts meeting, per BI. "Every police officer is going to be supervised at all times, and if there's a problem, AI will report that problem and report it to the appropriate person." "Citizens will be on their best behavior," he added, "because we are constantly recording and reporting everything that's going on." Of course, many of these surveillance apparatuses -- security cameras, bodycams -- are already in place. The novel dystopian development would be that AIs would be deployed to monitor these feeds constantly -- which already happens to some extent in experimental forms, but not as pervasively as Ellison envisions -- so those poor, outnumbered Feds at intelligence agencies everywhere have a little backup. Hell, it might even give patrol cops a run for their money, according to Ellison. Why have them engage in a risky car chase, for example, when you can get an AI drone to tail a suspect instead? "You just have a drone follow the car," Ellison said, per BI. "It's very simple in the age of autonomous drones." This is all very rich coming from a guy who's, well, very rich. 
Depending on whose estimate you go by, Ellison's net worth is north of $200 billion, making him the second-wealthiest person in the world behind Elon Musk (another tech bigwig who, it's worth mentioning, is currently profiting off government surveillance).

Under Ellison's stewardship, Oracle has been positioning itself as another leader in the AI race and has quickly integrated the tech into its cloud computing services. It's not a stretch to say that the Austin-based corporation will want to be part of that royal "we" Ellison is so fond of using that will oversee the "citizens."

Here's something worth noting, though: in 2022, Oracle was sued for allegedly running a "worldwide surveillance machine" by collecting billions of people's personal information and pawning it off to third parties. It settled the case in July, agreeing to pay $115 million. Make of that what you will.
[5]
Larry Ellison's AI-Powered Surveillance Dystopia Is Already Here
"Citizens will be on their best behavior, because we're constantly recording and reporting everything that's going on." There's a comment that's become very popular on social media whenever a new, horrifying surveillance practice is revealed: "1984 was supposed to be a warning, not an instruction manual!" This sentiment has become a bit tiresome, in part because saying it doesn't really mean anything, and our real world has long since surpassed George Orwell's dystopian nightmare in a few ways. But invoking 1984 as warning, not instruction manual feels appropriate here, with Oracle CEO Larry Ellison, the fifth-richest person in the world, pitching his exciting vision for an always-on, 1984-style, AI-powered surveillance fever dream to an audience of investors. In the remarks, which were first reported by Business Insider, Ellison said police body cameras, car cameras, drones, and other cameras will be always on and streaming to Oracle data centers, where AI will constantly be monitoring the feeds. "The police would be on their best behavior, because we're constantly watching and recording everything that's going on," Ellison said. "Citizens will be on their best behavior, because we're constantly recording and reporting everything that's going on. It's unimpeachable. The cars have cameras on them." Ellison's entire remarks are worth reading, because what he is pitching is a comprehensive surveillance apparatus that touches most parts of being in public. More importantly, every idea he is pitching currently exists in some form, and each has massive privacy, bias, legal, or societal issues that have prevented them from being the game-changing technology that somehow makes us all safer. Ellison: "Securing schools: We think we can absolutely lock down schools so that dramatically reduce the case of anyone being on campus that doesn't belong on campus, and immediately alert someone, use AI cameras to immediately recognize that." 
The idea that schools can be made safe with technology (or armed teachers, or more police) rather than, say, making guns harder to access, has become a cash cow for surveillance tech companies in the age of near-constant school shootings. Many schools have begun to implement AI-powered weapon detectors, which are notoriously inaccurate and have, for example, flagged notebooks as "weapons," missed actual weapons, and led to what one administrator called "the least safe day" because of mass confusion associated with the scanners. Students are also being monitored in the hallways, in the bathroom, on social media, and on their school-issued devices. It's unclear that any of this has made schools any safer. Axon, meanwhile, pitched the idea of taser-equipped drones that patrol schools, an idea that was quickly shelved after widespread public outrage and the resignation of its own ethics board over the proposal.

Ellison: "We completely redesigned body cameras. Our body cameras cost $70, normal body camera costs, I don't know, $7,000. Our body cameras are simply lenses, two lenses attached to a vest attached to the smartphone you're wearing. We take the video of the police officer. And the camera is always on. You don't turn it on and off. [A police officer can say], 'Oracle, I need two minutes to take a bathroom break,' and we'll turn it off. The truth is, we don't really turn it off. What we do is, we record it, so no one can see it, so no one can get into that recording without a court order. So you get the privacy you requested, but court order, we will -- a judge will order, that so-called bathroom break. Something comes up, I'm going to lunch with my friends. 'Oracle, I need an hour of privacy for lunch with my friends.' God bless. We won't listen in, unless there's a court order. But we transmit the video back to headquarters, and AI is constantly monitoring video."

Ellison is correct that police turning off their body cameras is a well-documented problem.
Intuitively, you would also think that body cameras would improve police behavior. But the evidence for this is actually not that strong. Academics have shown that wearing a body camera does not make a statistical or consistent difference in police behavior, in part because there are many cases of police being filmed committing acts of brutality where the officer in question does not face severe consequences. Public access to body camera footage is also incredibly uneven; public records laws differ in each state on whether that footage can be obtained by journalists and police accountability organizations.

Ellison is proposing a situation here where the footage would be held and analyzed not by a public police department but by Oracle and Oracle's AI systems. "We won't listen in, unless there's a court order" means, of course, that it is listening in, and has all sorts of implications for who can access this sort of footage, when, and under what circumstances.

Ellison: "Remember this terrible case in Memphis where the five police officers basically beat to death another citizen in Memphis? Well, that can't happen because it'd be on TV at headquarters. Everyone would see it. Your body cameras would be transmitting that. The police would be on their best behavior, because we're constantly watching and recording everything that's going on. Citizens will be on their best behavior, because we're constantly recording and reporting everything that's going on. It's unimpeachable. The cars have cameras on them. I think we have a squad car someplace. Those applications -- we're using AI to monitor the video. So that altercation that occurred in Memphis, the chief of police would be immediately notified. It's not people that are looking at those cameras, it's AI that's looking at the camera, [saying] 'No, no no, you can't do this. That's an event.' An alarm is going to go off and we're going to have supervision. Every police officer is going to be supervised at all times.
And if there's a problem, AI will report it to the appropriate person, whether it's the sheriff or the chief or whoever we need to take control of the situation."

AI-powered and connected smart cameras are already in use in the United States and across the world. So are automated license plate readers, AI-powered gunshot-detecting microphones, connected home security cameras, facial recognition tech, etc. Tesla footage is being subpoenaed and used in police investigations. Crime still exists, and false positives and cases of mistaken identity are common across most of these technologies.

Companies like PredPol, ShotSpotter, Flock, Fusus, Axon, and many others have been pitching the idea of predictive and instantly reactive policing as a deterrent for many years. These technologies are anything but "unimpeachable." There have been numerous incidents of Black men being misidentified by facial recognition technology, instances of police automatically responding to a supposed gunshot that was not a gunshot, and cases of over-policing for minor crimes or minor disturbances. In South Africa, for example, AI-connected smart surveillance cameras have been accused of creating a "new Apartheid" because of bias within these systems.

Ellison: "We have drones. A drone gets out there way faster than a police car. You shouldn't have high-speed chases with cars. You just have the drone follow a car, it's very simple. A new generation of autonomous drones. A forest fire -- the drone spots a forest fire and then the drone drops down and looks around to see if there's a human being near that heat bloom, and someone else either had an unattended campfire that caught fire, or it's arson. We can do all of that. It's all done autonomously with AI."

This system is called "drones as first responders," it also already exists, and there are problems with this, too.
Drones are being used to automatically surveil homeless encampments, house parties, teens being loud, and a person "bouncing a ball against a garage." AI and drone tech to prevent forest fires is also something that has had millions of dollars poured into it, with underwhelming results so far.

This is all to say that Larry Ellison's surveillance fever dream isn't actually a fever dream: all of these technologies already exist in some form, and many of them have not measurably driven down crime, solved the complicated societal, political, and economic problems that lead to crime in the first place, or changed the power structures that protect police at every turn. These technologies have collectively cost billions of dollars, made surveillance companies very rich, and created new privacy problems. Big Brother is here, and, surprise, having AI "supervision" has not created a crimeless utopia.
[6]
Oracle cloud AI will enable mass surveillance, says Ellison
AI is on the verge of ushering in a new era of mass surveillance, says Oracle cofounder Larry Ellison, and his juggernaut is rip-roaring ready to serve as the technological backbone for such AI applications.

Ellison made the comments near the end of an hour-long chat at the Oracle financial analyst meeting last week, during a question-and-answer session in which he painted Oracle as the AI infrastructure player to beat in light of its recent deals with AWS and Microsoft. Many companies, Ellison touted, build AI models at Oracle because of its "unique networking architecture," which dates back to the database era.

"AI is hot, and databases are not," he said, making Oracle's part of the puzzle less sexy, but no less important, at least according to the man himself - AI systems have to have well-organized data, or else they won't be that valuable. The fact that some of the biggest names in cloud computing (and Elon Musk's Grok) have turned to Oracle to run their AI infrastructure means it's clear that Oracle is doing something right, claimed now-CTO Ellison. "If Elon and Satya [Nadella] want to pick us, that's a good sign - we have tech that's valuable and differentiated," Ellison said.

One of the ideal uses of that differentiated offering? Maximizing AI's public security capabilities. Combining the might of Oracle Cloud Infrastructure and the capabilities of advanced AI, Ellison predicted a world of constant accountability for the Americans of tomorrow, where AI keeps everyone on their best behavior.

"The police will be on their best behavior because we're constantly watching and recording everything that's going on," Ellison told analysts. He described police body cameras that were constantly on, with no ability for officers to disable the feed to Oracle. Even requesting privacy for a bathroom break or a meal only meant sections of recording would require a subpoena to view - not that the video feed was ever stopped.
AI would be trained to monitor officer feeds for anything untoward, which Ellison said could prevent abuse of police power and save lives. "Every police officer is going to be supervised at all times," Ellison explained. "If there's a problem, AI will report that problem to the appropriate person."

But Oracle doesn't just want a hand in keeping the cops accountable. "Citizens will be on their best behavior because we're constantly recording and reporting," Ellison added, though it's not clear what he sees as the source of those recordings - police body cams or publicly placed security cameras. "There are so many opportunities to exploit AI," he said.

The Oracle CTO also suggested that drones could be used to pursue suspects instead of relying on patrol vehicle chases, and that satellite imagery of farms could be analyzed by AI to forecast crop yields and suggest ways to improve field conditions. Whatever it is, Ellison wants Oracle's share of that pie to keep growing, regardless of the potential privacy implications.

We reached out to Oracle for clarification on some of Ellison's statements, but haven't heard back. ®
[7]
Oracle CEO Larry Ellison says that AI will someday track your every move | TechCrunch
Speaking at an Oracle financial analysts meeting, Oracle cofounder and CTO Larry Ellison said he expects AI to one day power massive law enforcement surveillance networks. "We're going to have supervision," he said. "Every police officer is going to be supervised at all times, and if there's a problem, AI will report that problem and report it to the appropriate person. Citizens will be on their best behavior because we are constantly recording and reporting everything that's going on."

Ellison believes that continuous surveillance, driven by AI, could greatly reduce crime. But the evidence doesn't necessarily support his assertion. As The Washington Post notes, police data in the U.S. is historically biased, and feeding it into an AI model could lead it to suggest there is more criminal activity in certain areas, creating racially and socioeconomically biased feedback loops. In 2019, the LAPD suspended its crime prediction program after an audit showed it resulted in subjecting Black and Latino people to more surveillance.
Oracle co-founder Larry Ellison predicts a future where AI cameras will monitor both police and citizens, sparking debates on privacy and the potential for a surveillance state.
Larry Ellison, the co-founder of Oracle, has sparked controversy with his recent predictions about the future of surveillance and artificial intelligence. Speaking during Oracle's financial analyst meeting, Ellison outlined a vision where AI-powered cameras would be omnipresent, monitoring both law enforcement officers and citizens alike [1].
One of Ellison's key points was the idea that police officers would be under constant surveillance. He argued that this would lead to improved behavior and accountability, stating, "The policeman knows they're being watched all the time" [2]. This concept aims to address concerns about police misconduct and ensure that law enforcement officers adhere to proper protocols at all times.
Ellison's vision extends beyond law enforcement to include widespread surveillance of citizens. He suggested that AI cameras would be able to detect and prevent crimes, as well as monitor behavior in public spaces. "There'll be cameras everywhere... If someone's following you, you'll know it immediately," Ellison claimed [3].
The billionaire's predictions have raised significant concerns among privacy advocates and civil liberties groups. Critics argue that such pervasive surveillance could lead to a dystopian future where personal privacy is virtually non-existent [4]. There are fears that this technology could be misused for oppressive purposes or lead to a chilling effect on free speech and public behavior.
It's worth noting that Oracle is not merely speculating about these technologies. The company has been actively developing AI-powered surveillance products, such as always-on police body cameras, and pitching them to law enforcement agencies and governments [5]. This raises questions about a potential conflict of interest in Ellison's promotion of widespread surveillance.
The implementation of such extensive AI-powered surveillance systems would have far-reaching ethical and social implications. While proponents argue that it could lead to reduced crime rates and increased public safety, others worry about the erosion of civil liberties and the potential for abuse by those in power. The debate surrounding this technology highlights the ongoing tension between security and privacy in the digital age.
While Ellison's vision may seem futuristic, many of the technologies he describes are already in development or early stages of implementation. However, challenges remain in terms of data processing, storage, and the accuracy of AI in interpreting complex human behaviors. There are also concerns about the potential for bias in AI systems, which could lead to unfair targeting of certain communities.
Oracle co-founder Larry Ellison proposes consolidating all national data into a single database for AI analysis, sparking debates on efficiency, privacy, and surveillance.
5 Sources
Oracle's Chairman Larry Ellison announces ambitious plans to eliminate passwords and enhance cybersecurity using AI-enabled systems at CloudWorld 2024, promising superior protection against cyber threats.
2 Sources
Oracle co-founder Larry Ellison momentarily became the world's second-richest person, surpassing Jeff Bezos. This shift in wealth rankings highlights Oracle's recent success and the dynamic nature of global billionaire standings.
3 Sources
Oracle's Larry Ellison momentarily overtakes Jeff Bezos as the world's second-richest person, while Bill Gates drops in rankings. The AI surge propels tech billionaires' fortunes, reshaping the global wealth landscape.
4 Sources
Larry Ellison reveals a high-stakes dinner where he and Elon Musk implored Nvidia's CEO Jensen Huang for more GPUs. The encounter highlights the intense demand for AI chips in Silicon Valley's race for artificial intelligence dominance.
6 Sources
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved