— By completely rethinking the way that most Artificial Intelligence (AI) systems protect against attacks, researchers at EPFL's School of Engineering have developed a new training approach to ensure that machine learning models, particularly deep neural networks, consistently perform as intended, significantly enhancing their reliability.
— Its conclusion: it brings many compelling benefits, including long-term ones.
— ChatGPT and other AI systems are prone to "hallucinate" information, making it up on the spot. Researchers writing on The Conversation say Meta's CICERO AI is one of the most disturbing examples of how deceptive AI can be. This model was designed to play Diplomacy, and Meta says that it was built to be "largely honest and helpful". In fact, CICERO even went so far as to premeditate deception, working with one human player to trick another into leaving themselves open to invasion.
— Jobs held by managers, professionals, and technicians exhibit minimal vulnerability to automation, with only a small fraction of tasks at high risk and around a quarter at medium risk.
High-income countries face the potential automation of about 5.5% of total employment, whereas low-income countries encounter automation risks affecting a mere 0.4% of employment. But the potential for augmentation appears nearly equal across countries, implying that with the right policies, AI could offer substantial benefits to developing nations.
— USA tops the charts. As of 2022, California and Texas boasted the highest demand for AI workers, followed by New York, Virginia, Ohio, Florida, Georgia, and Illinois.
— While AI has triggered a billion-dollar arms race in the US, those who are actually doing the brunt of the work are often going unnoticed, underpaid, or ignored altogether.
— When it came to age, only 11% of 18- to 29-year-olds got a high score (more than 16 headlines correct), while 36% got a low score (10 headlines or fewer correct). By contrast, 36% of those 65 or older got a high score, while just 9% of older adults got a low score.
Additionally, the longer someone spent online for fun each day, the greater their susceptibility to misinformation, according to the MIST. Some 30% of those spending 0–2 recreation hours online each day got a high score, compared to just 15% of those spending 9 or more hours online.
The survey also analyzed the channels through which respondents receive their news. The "legacy media" came out top. For example, over 50% of those who got their news from the Associated Press, NPR, or newer outlets such as Axios achieved high scores.
Social media had the news audiences most susceptible to misinformation. Some 53% of those who got news from Snapchat received low scores, with just 4% getting high scores. Truth Social was a close second, followed by WhatsApp, TikTok and Instagram.
— Most newsrooms we surveyed have already experimented with generative AI technologies like ChatGPT, but not necessarily to create content. The use cases we've learned about are quite diverse: code writing, summaries, enhancing headlines and SEO. One respondent said they were using ChatGPT as a 'banter buddy': "Imagine having a trusted companion in ChatGPT, ready to engage in lively banter and brainstorming sessions," they said.
— Cyber attacks primarily target users in Asia-Pacific. India alone accounted for 12,632 compromised ChatGPT accounts.
— The Asia-Pacific region accounted for nearly 41,000 compromised accounts between June 2022 and May 2023, followed by Europe with almost 17,000. Surprisingly, North America ranked fifth, with approximately 4,700 affected accounts.
— Some of the intended use cases for AI in Pakistan include predicting the weather, agriculture supply chain optimization and health services transformation.
— Between March 6 and April 28, a dummy portfolio of 38 stocks gained 4.9% while 10 leading investment funds clocked an average loss of 0.8%, according to an experiment conducted by financial comparison site finder.com.
— Smart digital technologies are already widely used in agriculture in high-income countries. For instance, AI is being used in robotic milking systems in places such as Braz, Austria, to decide which cow should be milked when, with little supervision from the farmer. In low-income countries, on the other hand, AI is mainly limited to small-scale smart farming and satellite imagery processing at the level of smallholder farming. But looking ahead, we can anticipate wider uses if an appropriate business model can be found.
In Senegal, chatbots are being used instead of customer care operators as clients and investors alike want to reduce costs. In the Philippines, some of its 1.2 million business-process-management jobs — to a large extent customer care for global clients — could be replaced through robotic process automation.
"ITC's SME Trade Academy is already experimenting with tools such as Synthesia to produce videos with human-like avatars in multiple languages and accents. We need to help our beneficiaries leverage this technology as well, for instance the tailor in Burundi who is using ChatGPT to draft marketing materials like brochures and website content."
— In customer support, generative AI, including GPT-3+ and other large language models, is transforming conversational chatbots into ones that feel natural, are more accurate, and are better able to sense and react to tone and emotions.
In business insights, generative AI now allows business users to ask questions in natural language. The AI can convert these into SQL queries, run them against internal databases, and return the answer as a structured narrative — all within minutes. The advantage here isn't just efficiency — it's the speed of decision-making and the ability for business users to interrogate the data more directly and interactively.
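The pipeline described above can be sketched in a few lines. This is a minimal illustration only: the `question_to_sql` function here is a hard-coded stand-in for the generative-AI translation step, and the table and figures are invented sample data.

```python
import sqlite3

# Build a tiny in-memory "internal database" of monthly sales (sample data).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, month TEXT, revenue REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("EMEA", "2023-01", 120.0), ("EMEA", "2023-02", 135.0),
     ("APAC", "2023-01", 90.0), ("APAC", "2023-02", 110.0)],
)

def question_to_sql(question: str) -> str:
    """Stand-in for the generative-AI step: in a real system, a language
    model would translate the natural-language question into SQL."""
    if "total revenue by region" in question.lower():
        return ("SELECT region, SUM(revenue) FROM sales "
                "GROUP BY region ORDER BY region")
    raise ValueError("question not understood")

def answer(question: str) -> str:
    sql = question_to_sql(question)          # natural language -> SQL
    rows = conn.execute(sql).fetchall()      # run against the database
    # Return the result as a short structured narrative.
    parts = [f"{region} earned {total:.1f}" for region, total in rows]
    return "Total revenue by region: " + "; ".join(parts) + "."

print(answer("What is the total revenue by region?"))
# → Total revenue by region: APAC earned 200.0; EMEA earned 255.0.
```

In production, the hard-coded branch would be replaced by a model call, typically with guardrails such as read-only database credentials and validation of the generated SQL before execution.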
Third, in programming automation, large language models are highly accurate in multiple languages — including programming languages. Software developers are reducing the time to write code and associated documentation by almost 50%. For example, the Microsoft Power Automate program — a tool for robotic process automation — can now be programmed using natural language to automate tasks and workflows in a more intuitive and user-friendly manner. Not only is this more efficient than getting large teams of programmers and testers involved, but it also reduces the time and iterations needed to get automation up and running.
— Enterprises should set up a small, focused group tasked with experimenting with generative AI and reengineering core business processes. This group should report to the highest levels in the organization and be charged with figuring out how to disrupt current processes and business models. Because generative AI has the potential to disrupt existing ways of doing things, it requires a sharp focus and clear sponsorship.
— Jobs, in the order of how easy it will be for ChatGPT to replace them, are:
A recent World Economic Forum report said that 85 million jobs would be displaced by 2025 as a direct consequence of the tug-of-war between humans and machines.
— In a previous case a U.S. circuit court ruled: "Google's commercial nature and profit motivation do not justify denial of fair use" for digitizing the texts of published and copyrighted books. But "the works produced by a generative AI system may serve as a significant market substitute — this weighs against fair use."
"The answer is not clear at this point whether a generative AI system may use input data that is protected by copyright law. On one hand, new works are being created from the input data with minimal display of any copyrighted works. "
— Is ChatGPT taking my job someday? ChatGPT claims humans have little to worry about, but I'm not so sure.
— Herzog opened his speech with an intro written by ChatGPT. He then ended the speech with a ChatGPT-generated "inspirational quote" about AI.
— Judge Juan Manuel Padilla Garcia, a judge in the First Circuit Court in Cartagena, Colombia, confirmed that he used the AI tool to "pose legal questions about the case and included its responses in his decision." He said: "What we are really looking for is to optimize the time spent drafting judgments after corroborating the information provided by AI."
— 5: The number of days that passed after the launch of ChatGPT in November 2022 for the generative AI model to reach more than one million users.
— When I asked if it had ever been to a restaurant I knew would have loads written about it online, ChatGPT wrote back: "As an AI, I do not have physical presence and therefore I haven't been to any physical location."
But when I asked it to write a restaurant review of Alinea, it didn't hesitate: "— is a truly unique dining experience. The food is meticulously crafted and visually stunning, with an emphasis on creativity and innovation. The flavors are bold and complex, and the dishes are expertly paired with a wide selection of wines. The service is impeccable, with attentive and knowledgeable staff who are happy to explain the dishes and the techniques used to prepare them. The atmosphere is elegant and intimate, making it the perfect spot for a special occasion. Overall, Alinea is a must-visit for anyone interested in cutting-edge culinary experiences."
While basically coherent, and not too far off from some Yelp reviews, the review has an almost complete lack of specifics. Replace the restaurant's name and it could apply to any number of fine-dining restaurants in Chicago.
"ChatGPT can get more specific, but I found it also became less accurate. When I asked it to write a detailed restaurant review of Alinea, mentioning every single course, it started spitting out names of dishes I don't think ever appeared on the restaurant's menu."
— He notes ChatGPT's "comical failures": from being horrible at math, to pretty blatantly plagiarizing, to not-so-comical — and really just all-around horrifying — outcomes like one image-generating AI's failure to protect against the creation of non-consensual pornography, generative AIs have a lot of harmful downsides.
— Almost three in ten companies (28%) say AI is now a regular part of their activities, with a further 39% saying they have been conducting experiments in this area.
— A whopping 98% of human resources leaders say they'll rely at least partly on software programs, or algorithms, to decide whom to cut if they have to conduct layoffs in an anticipated recession in 2023, according to a recent survey by Capterra, which helps small businesses choose software. That's up from just 2% of large companies that turned to Big Data in the Great Recession of 2007-09, according to Capterra, a unit of tech research giant Gartner. In November, Capterra surveyed 300 HR managers at mostly larger firms as well as some small to mid-size businesses.
— A not-yet-peer-reviewed paper on ChatGPT's ability to pass the United States Medical Licensing Exam (USMLE) lists 11 researchers affiliated with the healthcare startup Ansible Health — and ChatGPT itself, raising eyebrows amongst experts.
— Research shows that in many European countries and the U.S., the growth of information and computerized technologies was accompanied by a significant expansion of professional and managerial occupations and a decline of low-skilled jobs.
— Students who use ChatGPT can further refine their paper by using other programs like Grammarly, which corrects spelling and grammar mistakes and assesses style and tone. And of course, students can also rewrite passages in their own voice. "Change some words, run it through another checking system," says a professor. "There's really no way to win this fight right now."
Another says there are ethical ways for educators to use the technology in class — for example, by comparing the AI's writing to a student's and analyzing the strengths and weaknesses of using the tool.
She says a class could use ChatGPT to write an essay on the plot of "Hamlet" and then analyze the quality of what it spits out. They could also question the references used and whether the analysis was accurate. She says that would allow students to think critically on the effectiveness of the tool.
She also notes that ChatGPT does not produce a bibliography to go with the essays it writes — a red flag for professors trying to determine if a bot was involved. However, she adds that there are online tools that can generate footnotes and sourcing for an existing paper.
— "Back in 2014, the Cambridge Analytica scandal showed that it was possible to influence citizens' votes by presenting them with fabricated information. Since then, the game Candy Crush has shown that it is possible to create enough of an addiction to then charge for continued playing. Meta has indicated that it has the ability to modify its algorithms to improve the wellbeing of its users. And the voluntary spreading of misinformation has convinced 1 in 10 French people that the Earth is flat."
— ChatGPT Pros: Open source platform. It can be used to build complex conversational applications. Easy-to-use APIs. ChatGPT Cons: Lacks the advanced features of other popular AI tools on the market. Limited support for languages other than English.
— The paper was way too coherent and well-structured. "I had [the student] rewrite the paper," the Northern Michigan philosophy professor told Futurism. "That's what I almost always do in plagiarism cases. I want the students to actually learn the material, and the only way they can do that is by actually completing the assignment," he said. "I fail students for an assignment or a class only if they are repeat offenders."
— Test it out and see if you can write a better job description faster than ChatGPT can. I bet you can't, because ChatGPT will give it to you in under two seconds.
— Within the first week of its launch, more than 30,000 people used the tool, per NPR's Emma Bowman. On Twitter, it has garnered more than 7 million views.
— OpenAI, the company behind the first GPT and its subsequent versions, added guardrails to help ChatGPT avoid problematic answers when users ask the chatbot to, for example, say a slur or commit crimes. Users, however, found it extremely easy to get around this by rephrasing their questions or simply asking the program to ignore its guardrails, which prompted responses with questionable — and sometimes outright discriminatory — language.
— Among other things, ChatGPT can build websites and apps for you. You can even ask for code in a specific language. It can also solve errors, explain the code and even explain the test cases. It literally can explain each and every line of code. And that's what makes you more productive.