Curated by THEOUTPOST
On Mon, 9 Sept, 4:02 PM UTC
2 Sources
[1]
7 AI Tricks for Work Productivity That You Should Try and 1 You Shouldn't
I've been covering technology and mobile for 12 years, first as a telecommunications reporter and assistant editor at ZDNet in Australia, then as CNET's West Coast head of breaking news, and now in the Thought Leadership team.

There are a lot of things that artificial intelligence can't do well. But one of its best uses is as a productivity tool, one that can help you power through some of the more mundane work tasks faster each day. Here are some of the best ways to incorporate AI into your professional life without compromising the quality of your work. Remember: always double-check the info any AI tool spits out at you. Even when all you've asked it to do is summarize a document, AI has been known to hallucinate and make up details that aren't accurate.

Faced with an eye-wateringly long (and dull) work document or a 300-slide presentation you simply don't have time to read all the way through? Microsoft's in-browser AI tool, Copilot, can help. Copilot is free, accessible as an app and in Microsoft's Edge browser. Once you've loaded the web page you want summarized, click the Copilot icon in the upper-right corner of the browser and type in what you'd like Copilot to focus on when creating notes. You can also ask Copilot to pull out specific information you're looking for from a lengthy presentation or document. Just remember to double-check the accuracy of Copilot's summaries. Check out our step-by-step guide on using Copilot to take notes.

If you just sat through another one of those Zoom meetings that could have been an email, and you tuned out just a little, Zoom itself now has AI tools to remind you what just happened. Zoom's AI Companion is included in all of the video call company's paid plans -- prices start at $13.32 per month -- and can be toggled on so that it takes notes in real time during your meetings and shares action items with you.
During live meetings, hosts can enable Meeting Summary, which can be shared via team chat. AI Companion can answer your questions about the meeting, like catching you up on a discussion if you join late and giving you a recording at the end that's been divided into "chapters" according to each topic discussed. Here's what you need to know about using Zoom AI Companion.

Stuck on the wording for a request or response? Why not try neurodivergent-focused AI platform goblin.tools, which is currently free, without paywalls, and offers translation into 13 languages. The "Formalizer" tab lets you "Turn the spicy thoughts into classy ones, or vice versa." So if you've hastily written a somewhat irritated response to a coworker or client, you can use the AI tool's drop-down menu to choose between helpful settings like more professional, more polite and less snarky, with a "spiciness level" meter offering three chili pepper settings for how strongly you want your message to come across. The AI tool will reword your message for you in no time, and you might just be in a better mood afterwards, too. Read more on CNET's experiment with softening professional emails using AI.

If you need a project manager to keep your big projects streamlined and moving forward, but you're struggling to switch between big-picture ideas and daily planning mode while keeping other people's workflows focused and on track, AI could be the answer. Otter.ai can help you stay focused and organized. Its Meeting GenAI tool is an assistant across Zoom, Microsoft Teams and Google Meet, recording, transcribing and "remembering" what was said in every meeting (just double-check it got all the details right). It also integrates with your other work apps, letting you connect your Otter.ai account with any Slack, Zoom, Dropbox, Google, Microsoft, Salesforce and HubSpot accounts you use. It can then assign tasks across those platforms, send reminders and insights, and post to Slack.
You can get free, Pro ($10/month) and Business ($20/month) versions of Otter.ai. Here's everything we learned while testing Otter.ai as a project manager.

Not sure about your wording and don't have an in-house editor to help you out? Grammarly, available in free, Premium ($12/month) and Business ($15/month) versions, can be toggled on for multiple apps on your PC, including Google Docs, email and Instagram. If you're using Instagram for business, for example, you can use Grammarly's caption generator to come up with something in seconds, or use it to edit what you've written in a Google Doc or an email. You can also customize the tone and formality of the suggestions it offers for your writing. Here's what to know about using Grammarly AI for editing your work.

If you do your best work in Google Docs and need a super quick summary of a long one, Google Gemini has you covered -- though you will need a paid subscription to use the feature. All you have to do is open a document, highlight the text you want summarized, click Help me write and then click Summarize. (You can also select options like Tone, Bulletize, Elaborate, Shorten or Rephrase.) Make sure to double-check that Gemini understood your document and what was important -- and use the "Help docs improve" feature to provide feedback on how accurate (or not) Google's AI tool was. Here's our step-by-step guide to summarizing Google Docs with AI, and you can also read CNET's full hands-on review of Gemini here, as well as of competitor AI tools Perplexity, Copilot and Claude.

If you're on the hook to present during a meeting, AI tool goblin.tools can help you polish up a rough presentation. First things first, you can use its Estimator tool to get an approximate sense of how long creating a presentation will take you, then stick your draft copy into its Formalizer tool. Scroll through the drop-down menu to choose between 14 different tones, like "more passionate" or "easier to read."
If you find yourself missing a few details that might be asked about during your presentation, you can also use the Professor tool to give you a crash course on a topic you're hazy about -- goblin.tools has access to OpenAI's GPT models and their training data, and gives you simple explanations of whatever subject you choose. Lastly, you can paste your resulting presentation into the Judge tool, which provides feedback on how your presentation comes across. Just make sure you read over and refine what the AI suggests to be sure it makes sense to humans. Here's our full guide on using AI to make a work presentation.

We tried creating a business logo using text-to-image tool Midjourney. But if you've ever used an AI image generator, you'll know that what we found was a little too surreal to align with the concept of a business logo. You can access Midjourney through Discord, with subscriptions to the AI tool starting at $10 a month. To use it, you just have to write a description of what you want your logo or image to look like and then generate. If it does happen to come back with an image you like, you can right-click to save it. Midjourney probably has a better shot at replacing stock photos -- we don't think designers have anything to worry about when it comes to being replaced in the logo business any time soon.
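Several of the tools above (Copilot, Gemini, Otter.ai) lean on document summarization. Those products use generative language models, but the basic idea of surfacing a document's most representative sentences can be illustrated with a minimal extractive sketch. To be clear, this is a toy word-frequency heuristic for illustration only, not how any of these commercial tools actually work:

```python
import re
from collections import Counter

def extractive_summary(text, num_sentences=2):
    """Pick the sentences whose words occur most often across the text.

    A deliberately simple frequency heuristic: sentences full of the
    document's recurring vocabulary are treated as its main points.
    """
    # Split on sentence-ending punctuation followed by whitespace.
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    # Count how often each word appears in the whole document.
    freq = Counter(re.findall(r'[a-z]+', text.lower()))

    def score(sentence):
        # Average document-wide frequency of this sentence's words.
        tokens = re.findall(r'[a-z]+', sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    top = sorted(sentences, key=score, reverse=True)[:num_sentences]
    # Emit the chosen sentences in their original order.
    return ' '.join(s for s in sentences if s in top)
```

Running it on a short memo about AI tools keeps the on-topic sentences and drops the off-topic one. Generative summarizers rewrite rather than extract, which is exactly why the article keeps urging you to double-check their output against the source.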
[2]
Writers Say No to AI-Generated Novels, Bill Gates Says We Should Get Familiar With AI
Member of the board, UCLA Daily Bruin Alumni Network; advisory board, Center for Ethical Leadership in the Media

Is an AI tool capable of creating art? Or should we only think of it as "art," as in an "artistic replacement tool"? That was the topic of conversation last week after NaNoWriMo, the nonprofit behind National Novel Writing Month, didn't ban the use of AI from its speed-writing challenge. (Since its beginnings in 1999, National Novel Writing Month has encouraged writers to produce a 50,000-word original novel during the month of November.) Instead, NaNoWriMo noted in a blog post (later revised) that while some writers "stand staunchly against AI for themselves," it believed "that to categorically condemn AI would be to ignore classist and ableist issues surrounding the use of the technology, and that questions around the use of AI tie to questions around privilege."

Some people took exception to the fact that the group had an AI sponsor, suggesting that its AI stance was informed more by its backers than by how generative AI writing tools might affect creative work. The controversy led one member of NaNoWriMo's board to resign (his letter was entitled NaNoHellNo) and prompted the group to update its online statement to note that some bad actors are doing harm to writers and acting unethically. The dustup also led to stories in publications including The Washington Post, Wired, CNET and The Atlantic, whose columnist welcomed the "robot overlords" to the annual speed-writing contest.

The discussion about whether novelists should use AI to help write novels isn't new. Japanese author Rie Kudan won an award earlier this year for a novel she wrote with help from ChatGPT. But the NaNoWriMo kerfuffle unfolded a few days after The New Yorker featured an essay on AI and art by noted sci-fi author Ted Chiang.
Chiang, whose work includes the short story on which the movie Arrival is based, said that though gen AI may improve to the point that it can successfully write a credible novel or produce a realistic painting, that doesn't really count as what he calls real art. Why? Because unlike humans, the AI isn't really trying to communicate anything to us through its output (other than that it can produce output). I'll just note three quotes from his essay that are worth considering:

"We are all products of what has come before us, but it's by living our lives in interaction with others that we bring meaning into the world," Chiang wrote. "That is something that an auto-complete algorithm can never do, and don't let anyone tell you otherwise."

"Art is something that results from making a lot of choices," he added. "To oversimplify, we can imagine that a ten-thousand-word short story requires something on the order of ten thousand choices. If an AI generates a ten-thousand-word story based on your prompt, it has to fill in for all of the choices that you are not making."

His takeaway: "The task that generative AI has been most successful at is lowering our expectations, both of the things we read and of ourselves when we write anything for others to read. It is a fundamentally dehumanizing technology because it treats us as less than what we are: creators and apprehenders of meaning. It reduces the amount of intention in the world."

Chiang's essay and the NaNoWriMo row have spurred numerous discussions about how, when, if and why we should be using AI tools, and what to think of the output. As for me, I agree that the act of writing, whether it's a novel or an email, should always be intentional. I wonder if AI's promise of helping us fast-track tasks -- a "let's just get the writing out of the way" mentality -- might mean it's time for us to reconsider why we're doing such tasks in the first place. Here are the other doings in AI worth your attention.
In a blog post last week, YouTube announced two new AI tools that it says will help content creators spot deepfakes on its popular video platform. The tools are part of its ongoing effort to develop AI "guardrails" aimed at protecting creators' rights. "We believe AI should enhance human creativity, not replace it," the company wrote. First, YouTube said it's developed "synthetic-singing identification technology" that will allow people to "automatically detect and manage AI-generated content on YouTube that simulates their singing voices." That tech expands YouTube's Content ID system, which is used to identify copyright-protected content. A pilot program with the new tech for checking singing voices is scheduled to launch in early 2025. Second, the company said it's "actively developing" new technology that will allow people to "detect and manage AI-generated content showing their faces on YouTube." There's no date yet on when that'll be released.

Time released its second annual list of the 100 most influential people in AI. The list features company executives, researchers, policymakers and government officials, and influencers. The magazine also noted that 91 of the people on the 2024 list "were not on last year's, an indication of just how quickly this field is changing." I agree. Among the 40 CEOs and founders are Sam Altman of OpenAI, Jensen Huang of Nvidia, Steve Huffman of Reddit, Microsoft's Satya Nadella, Sundar Pichai of Google and Meta's Mark Zuckerberg. If that seems like a lot of guys, it is. But Time also singled out Daphne Koller, CEO of Insitro, and said the "list features dozens of women leaders in AI," including US Secretary of Commerce Gina Raimondo; Cohere for AI research head Sara Hooker; US AI Safety Institute Director Elizabeth Kelly; Khan Academy chief learning officer Kristen DiCerbo; and UK Competition and Markets Authority CEO Sarah Cardell.
The youngest person on the list, 15-year-old Francesca Mani, is a high schooler "who started a campaign against sexualized deepfakes after she and her friends were victims of fake AI images." What a reason to be on the list. The oldest person is "77-year-old Andrew Yao, a renowned computer scientist who is shaping a new generation of AI minds at colleges across China," Time said. I thought it was laudable that Time recognized "creatives" who are "interrogating the influence of AI on society." That list includes actor Scarlett Johansson, who accused OpenAI of co-opting her voice for its Voice Mode feature in the latest version of ChatGPT. Johansson should rightly get credit for highlighting the concerns humans have about their intellectual property -- including their face and voice -- being copied without permission by AI. Time explains how it picked this year's notables here.

Though Bill Gates didn't make Time's list of AI notables, the Microsoft co-founder does have a few thoughts about AI, based on his decades of experience introducing the world to new technologies -- starting with the personal computer. In an interview with CNET's Kourtnee Jackson, Gates, who's starring in a new Netflix documentary series called "What's Next? The Future With Bill Gates," said that today's tech experts don't really know how AI will affect jobs and society in the future. But what he does know is that all of us should be working with AI tools now, given where the technology is headed. "Whether you're an illustrator or a coder or a support person or somebody who is in the health care system -- the ability to work well with AI and take advantage of it is now more important than understanding Excel or the internet," Gates said. He recognizes that we, as a society, should be discussing how humans will live in an AI-powered world, and should probably set some limits, even as AI is being considered as a way to help cope with shortages of teachers, doctors and other professionals.
"We don't want to watch robots play baseball, and so where is the boundary where you say, 'OK, whatever the machines can do is great and these other things are perhaps very social activities, intimate things, where we keep those jobs'?" he said. "That's not for technologists to understand better than anyone else." As for AI and misinformation, especially as the US draws near to the 2024 presidential election, Gates told Jackson he doesn't have a solid answer. "The US is a tough one because we have the notion of the First Amendment and what are the exceptions, like yelling 'fire' in a theater," he said. "I do think over time, with things like deepfakes, most of the time you're online you're going to want to be in an environment where the people are truly identified. That is, they're connected to a real-world identity that you trust, instead of just people saying whatever they want." What's Next? The Future With Bill Gates is set to air on Netflix on Sept. 18.

Publishers, including Time, say they're working with AI companies and looking to AI tools to help identify new business models and opportunities. Add The New York Times and The Washington Post to that list. The Times introduced its first regular use of AI in a new Connections Bot that helps people who play the Connections game get insight into how they solved each day's puzzle. Beyond helping folks understand the notoriously difficult game, the Times told subscribers, the move "also has a larger significance: It includes the first English text generated by AI that The Times newsroom will regularly publish." Here's how the Connections Bot works, according to CNET's Gael Cooper: "The bot will compare your game with that of other players and give you a score of up to 99. To get the perfect 99, you need to win without any mistakes and solve the hardest categories first -- so purple first, blue second, green third and yellow last," Cooper said.
"Once you receive your skill score, the bot uses AI to try and read your mind and determine what you were thinking when you guessed wrong."

As for The Washington Post, Axios reported that the newspaper "published its first-ever story built on the work of a new AI tool called Haystacker that allows journalists to sift through large data sets -- video, photo or text -- to find newsworthy trends or patterns." Haystacker, which took more than a year to build, will be used by the Post's in-house journalists, the paper's CTO, Vineet Khosla, told Axios. "Asked whether the Post would ever license Haystacker to other newsrooms, Khosla said that's not the company's focus right now," Axios said. This isn't the first AI tool deployed by the Post. In July, it announced an AI chatbot that answers readers' questions about climate issues, with the answers pulled from Washington Post articles. It also "debuted a new article summary product that summarizes a given article using generative AI," Axios reported. "Khosla said the company will ramp up its investment in the summary product as the election draws nearer." And back in 2016, the paper created a robot reporter called Heliograf that it then used to write more than 850 short news articles and alerts about politics and sports. If you want a summary of some of the initiatives that newsrooms have launched with AI, check out the Associated Press' April report on how the tech is already in use.

Younger employees may be more willing to learn new technologies and test new things, because they're often closest to the work. But senior executives shouldn't rely on them to provide insight into how to adopt and use gen AI technology, because they might not know enough about the business -- or the fast-evolving technology -- to be able to adequately assess the risks.
That's the conclusion of a new paper co-authored by researchers at Harvard Business School, the MIT Sloan School of Management, Wharton and The Boston Consulting Group. The paper is entitled Don't Expect Juniors to Teach Senior Professionals to Use Generative AI. "The literature on communities of practice demonstrates that a proven way for senior professionals to upskill themselves in the use of new technologies that undermine existing expertise is to learn from junior professionals," says the 29-page paper. The literature "notes that juniors may be better able than seniors to engage in real-time experimentation close to the work itself, and may be more willing to learn innovative methods that conflict with traditional identities and norms." But after talking with 78 such "junior consultants" about working with OpenAI's GPT-4 (which powers ChatGPT), the researchers found that "such juniors may fail to be a source of expertise in the use of emerging technologies."

Why? "The current literature on junior professionals being a source of expertise in the use of new technologies for more senior members" hasn't looked at "contexts where the juniors themselves are not technical experts, and where technology is so new and rapidly changing that juniors have had no formal training on how to use the technology, have had no experience with using it in the work setting, and have had little experience with using it outside of the work setting," the paper says. "In such contexts, it seems unreasonable to expect that juniors should have a deep level of technical understanding of the technology."

The report details all the things junior staffers -- and presumably senior staffers -- don't know about gen AI. The TL;DR: Businesses should spend more time learning about gen AI, its opportunities and risks, and the changes they may need to make to "system design in addition to human routines" before moving ahead, since no one, junior or senior, has all the answers yet.
(Like Bill Gates said.)
As AI technology advances, it offers new tools for enhancing work productivity. However, its application in creative fields like novel writing raises concerns among authors. This story explores the potential benefits and controversies surrounding AI in various industries.
Artificial Intelligence (AI) is revolutionizing the way we work, offering a range of tools to enhance productivity across various industries. From email management to content creation, AI-powered solutions are streamlining tasks and saving valuable time for professionals [1].
One notable AI application is in email organization. Tools like SaneBox use machine learning algorithms to prioritize important messages and filter out distractions, helping users manage their inboxes more efficiently. Another productivity booster is the use of AI-powered writing assistants, such as Grammarly, which not only correct grammar and spelling but also suggest improvements in style and tone [1].
AI is also making waves in project management and scheduling. Applications like Motion AI can analyze tasks, deadlines, and team members' schedules to optimize workflow and automate the creation of daily schedules. This level of intelligent automation allows professionals to focus on high-value work rather than getting bogged down in administrative tasks [1].
While AI shows promise in enhancing productivity, its application in creative fields has sparked controversy. The literary world, in particular, has voiced strong opposition to the use of AI in novel writing. The Science Fiction and Fantasy Writers Association (SFWA) recently announced that it would not accept AI-generated works for its Nebula Awards, a prestigious recognition in the genre [2].
This decision reflects growing concerns among authors about the potential impact of AI on their craft and livelihoods. Many worry that AI-generated content could flood the market, potentially devaluing human-created works and making it harder for writers to earn a living. The SFWA's stance sends a clear message about the importance of human creativity and originality in literature [2].
Despite the concerns in creative industries, proponents of AI argue that we should embrace the technology and learn to work alongside it. Bill Gates, co-founder of Microsoft, suggests that people should familiarize themselves with AI tools like ChatGPT. He believes that AI will play a significant role in various fields, including education and healthcare [2].
The challenge lies in finding a balance between leveraging AI's capabilities and preserving the unique value of human creativity. While AI can enhance productivity and assist in certain tasks, it's crucial to recognize the irreplaceable nature of human imagination, emotional depth, and lived experiences in creative endeavors [2].
As AI continues to evolve, its integration into various aspects of work seems inevitable. However, the key to successful adoption lies in using AI as a tool to augment human capabilities rather than replace them entirely. In fields like data analysis, customer service, and routine task automation, AI can significantly boost efficiency and allow humans to focus on more complex, creative, and strategic work [1].
The ongoing debate surrounding AI in creative industries highlights the need for careful consideration of ethical and practical implications. As we move forward, it will be crucial to develop guidelines and best practices for AI use that protect human creativity while harnessing the technology's potential to enhance productivity and innovation across all sectors.
© 2024 TheOutpost.AI All rights reserved