— The boss insisted that the journalists were fired not because of AI but because listenership "was close to zero".
— OpenAI named Dr. Aaron "Ronnie" Chatterji as its first chief economist. In this role, he will oversee research on a range of AI-related topics, including how AI innovations may impact the global economy and how the workforce can utilize AI tools both now and in the future, according to OpenAI's press release.
— Hackers are spoofing Google email and phone numbers to steal your Gmail credentials.
— Microsoft Copilot offered scientifically accurate information only 54% of the time. The research also suggested that 42% of the answers generated could lead to "serious harm" and, in 22% of extreme cases, even death.
— Many companies are cutting costs, which affects those at the very bottom of the AI supply chain, who are often highly vulnerable: data labelers. Earlier this year, nearly 100 data labelers and AI workers from Kenya who do work for companies like Facebook, Scale AI and OpenAI published an open letter to United States President Joe Biden in which they said: "Our working conditions amount to modern day slavery."
— Self-driving cars, for example, rely on labeled video footage to distinguish pedestrians from road signs. Large language models such as ChatGPT rely on labeled text to understand human language. Tech giants like Meta, Google, OpenAI and Microsoft outsource much of this work to data labeling factories in countries such as the Philippines, Kenya, India, Pakistan, Venezuela, and Colombia. China is becoming another global hub for data labeling.
— Outsourcing companies that facilitate this work include Scale AI, iMerit, and Samasource. These are very large companies in their own right. For example, Scale AI, which is headquartered in California, is now worth US$14 billion. An hourly rate for AI data labelers in Venezuela ranges from 90 US cents to US$2. In comparison, in the United States, this rate is between US$10 and US$25 per hour.
— Many data labelers work in overcrowded and dusty environments which pose a serious risk to their health. They also often work as independent contractors, lacking access to protections such as health care or compensation. The mental toll of data labeling work is also significant, with repetitive tasks, strict deadlines and rigid quality controls. Data labelers are also sometimes asked to read and label hate speech or other abusive language or material, which has been proven to have negative psychological effects.
— One answer: companies can apply a human rights-centred design, deliberation and oversight approach to the entire AI supply chain. They must adopt fair wage policies, ensuring data labelers receive living wages that reflect the value of their contributions. Clear payment systems and recourse mechanisms will ensure workers are treated fairly. Instead of busting unions, as Scale AI did in Kenya in 2024, companies should also support the formation of digital labour unions or cooperatives. This will give workers a voice to advocate for better working conditions.
— Mackenzie W. Mathis, a professor at EPFL, teaches mice to play video games while recording their brain activity and behaviour during the process, and critically, she also develops the computer algorithms necessary to analyse the data obtained.
— Mackenzie W. Mathis was born in 1984 in California and grew up in the Central Valley, bordered by the Sierra Nevada mountains, where she became an accomplished horse rider. She has maintained her passion for animals and their motor skills ever since.
— In a series of blind assessments, the generative AI summaries of real government documents scored a dire 47 percent on aggregate based on the trial's rubric, and were decisively outdone by the human-made summaries, which scored 81 percent.
— AI Adoption Rates: According to a recent report by McKinsey, 50% of companies have adopted AI in at least one business function. AI is increasingly accessible, with small businesses catching up quickly in areas like customer service and marketing.
— AI in Small Businesses: The 2023 AI Index Report from Stanford University found that 18% of small businesses in the U.S. are currently using AI, with most using it to enhance customer interactions and personalize marketing efforts.
— The first and most common reason was that industry stakeholders often misunderstand or miscommunicate what problem needs to be solved using AI and what the technology is capable of achieving. Some executives fail to comprehend how the tech can be applied to their business, what resources are required to implement it, and how long the process will take.
— Perplexity AI focuses on providing in-depth answers to complex questions, while Gigabrain AI excels at delivering concise and actionable insights. The Grok 2 model has become more accessible, now available with an X Premium subscription. By fine-tuning GPT-4, you can improve its API performance and tailor it to meet the unique requirements of your applications. The ElevenLabs Reader app, an innovative AI reading tool, is now available worldwide for free. With the ElevenLabs Reader app, you can easily convert text to speech, adjust reading speeds, and enjoy a seamless reading experience powered by AI.
— That's 250 credits a month for images and 250 for text, at 5 credits a day. Unlimited use is $3 a month. You're not hit with watermarks or terrible quality, as is often the case with other free AI tools.
— "The text-based AI tools are where Hamster really shines. I'm most impressed by Hamster AI's extra AI tools, such as the coding generator, the DnD AI tools, and the educational AI tools."
— The purpose of these biocomputers, according to FinalSpark, is to develop a highly efficient, low-energy solution to the ballooning costs associated with developing artificial intelligence models. The company says it could be as much as 100,000 times more efficient to use computers made of organic material to train AI than it is to use traditional silicon-based technology. The technology can be viewed live online.
— Artificial intelligence at the Monte Rosa hut, at an altitude of almost 2,900 metres, uses the expected number of guests, the battery level and the weather forecast to calculate in advance whether and how much heating, electricity or hot water will be required. Thanks to his experience, hut warden Kilian Emmenegger could assess all of this himself. But unlike the AI, he doesn't recalculate every 15 minutes. The software has reduced the hut's energy consumption by 7%.
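A rough idea of what such a 15-minute planning loop might look like, as a Python sketch. The inputs (expected guests, battery level, weather forecast) come straight from the article, but every name, threshold and per-guest figure below is a hypothetical stand-in; the hut's actual software is not public.

```python
# Hypothetical sketch of a planning step like the one described above.
# All thresholds and per-guest figures are assumptions, not the hut's real values.
from dataclasses import dataclass

@dataclass
class HutState:
    expected_guests: int       # bookings for the coming night
    battery_charge: float      # state of charge, 0.0-1.0 (assumed 100 kWh battery)
    solar_forecast_kwh: float  # expected PV yield derived from the weather forecast

def plan_next_interval(state: HutState) -> dict:
    """Decide heating and hot-water targets for the next 15 minutes."""
    # Hot water scales with the number of guests expected to shower.
    hot_water_litres = state.expected_guests * 8  # assumed 8 L per guest

    # Heat only when the battery plus forecast solar yield can cover the load.
    energy_budget_kwh = state.battery_charge * 100 + state.solar_forecast_kwh
    heating_on = energy_budget_kwh > 40 and state.expected_guests > 0  # assumed threshold

    return {"hot_water_litres": hot_water_litres, "heating_on": heating_on}

# Re-run every 15 minutes with fresh inputs, unlike a once-a-day human estimate.
print(plan_next_interval(HutState(expected_guests=120,
                                  battery_charge=0.6,
                                  solar_forecast_kwh=35.0)))
```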
— The new building was estimated to cost three to four times as much as a normal hut would have cost. After just two years, more solar panels had to be installed because more visitors came than expected. The guests want to charge their cellphones and take a hot shower. There is 5G reception on the glacier. The Monte Rosa hut is profitable: the amenities have paid off. "For 120 guests per night, burning wood for fuel is not enough," says Emmenegger.
— Previous research showed that the chatbot could scrape a pass in the United States Medical Licensing Exam (USMLE), a finding hailed by its authors as "a notable milestone in AI maturation." But in the new study, published on 31 July in the journal PLOS ONE, scientists cautioned against relying on the chatbot for complex medical cases that require human discernment. Scientists asked the artificial intelligence (AI) chatbot to assess 150 case studies from the medical website Medscape and found that GPT-3.5 (which powered ChatGPT when it launched in 2022) only gave a correct diagnosis 49% of the time.
— For every question asked, Coral AI provides a reference with a clickable page number.
— Only a little more than 4,000 of 360,000 job cuts in the U.S. were attributed to AI.
— A report from global consulting firm McKinsey estimated that nearly half of all work performed today by humans could be automated. A research memo from investment bank Goldman Sachs projected that two-thirds of jobs today will be exposed to some form of automation and one-quarter could be wholly substituted with generative AI. An even more recent analysis from researchers at the Massachusetts Institute of Technology suggests only a quarter of jobs where vision is required will be capable of full automation.
— The tool would allow those at risk to modify their lifestyles or start new drug treatments at an early stage when they are most effective. It would also prevent inappropriate treatment of people with cognitive problems likely to be caused by other conditions, such as anxiety and depression. The tool's prediction was more than 80% accurate, three times better than existing clinical methods.
— A Ukrainian tech startup has created a new AI-powered drone that can identify and target things based on visual cues, such as specific uniforms. These drones can operate together in swarms and communicate with each other. The startup says the drones can make quick decisions on their own. But to avoid mistakes, such as hitting friendly targets, they need human approval before taking action.
— The researchers drew a random sample of 195 exchanges between physicians and patients from Reddit's online forum r/AskDocs, a subreddit with approximately 474,000 members, where users can post medical questions and verified healthcare professional volunteers submit answers. They took care not to repeat questions and answers from the same physician and the same patient.
— Although different types of healthcare professionals respond in this forum, this study solely focused on answers given by physicians. This was because the study authors expected the responses of physicians to be of better quality than answers given by other types of healthcare professionals. If a physician provided multiple responses, only the first was included in the analysis.
— ChatGPT was tasked with generating answers to the same questions. The original question, the physician's response, and ChatGPT's response were presented to a team of three evaluators, who were blinded to the source of each response. The evaluators assessed which response was better, rated the quality of the information, and evaluated the empathy or bedside manner displayed.
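The blinding step in such a study is simple to implement: present each question with both answers in random order and keep the answer key away from the raters. A minimal Python sketch with illustrative data, not the study's actual code:

```python
# Sketch of blinded pairwise evaluation: raters see "answer_a" and "answer_b"
# without knowing which came from the physician and which from ChatGPT.
import random

def blind_pair(question: str, physician: str, chatgpt: str, rng: random.Random):
    """Return the two answers in random order plus the hidden answer key."""
    answers = [("physician", physician), ("chatgpt", chatgpt)]
    rng.shuffle(answers)
    shown = {"question": question,
             "answer_a": answers[0][1],
             "answer_b": answers[1][1]}
    key = {"answer_a": answers[0][0], "answer_b": answers[1][0]}  # kept from raters
    return shown, key

rng = random.Random(42)  # fixed seed so the blinding is reproducible
shown, key = blind_pair("Is a fever of 39°C dangerous for an adult?",
                        "See a doctor if it lasts more than three days.",
                        "Fever is the body's natural response to infection...",
                        rng)
print(shown)
print(key)
```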
— A study of 14 million research papers reveals a sudden and dramatic change that occurred soon after ChatGPT appeared. To the researchers' surprise, an even bigger change occurred in 2024, with an increase in words like delves, crucial, important and potential. Curiously, these are not words related to the scientific content of a paper but to writing style. "The unprecedented increase in excess style words in 2024 allows us to use them as markers of ChatGPT usage."
— The data suggests that at least 10 per cent of the papers on PubMed in 2024 were influenced in this way. "With ~1.5 million papers being currently indexed in PubMed per year, this means that LLMs assist in writing at least 150 thousand papers per year," conclude the researchers. The team observed that AI assistance was more common in papers from countries where English is not the first language.
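The underlying method amounts to comparing marker-word frequencies against a pre-ChatGPT baseline. A minimal Python sketch of that idea, using an illustrative marker list and toy corpora rather than the study's 14 million papers:

```python
# Sketch of the "excess style word" idea: measure how much more often marker
# words appear in a year's abstracts than in a pre-ChatGPT baseline corpus.
from collections import Counter
import re

MARKERS = {"delves", "crucial", "important", "potential"}  # illustrative subset

def word_freq(abstracts: list[str]) -> Counter:
    """Marker-word frequency per 10,000 words across a corpus."""
    counts, total = Counter(), 0
    for text in abstracts:
        words = re.findall(r"[a-z]+", text.lower())
        counts.update(w for w in words if w in MARKERS)
        total += len(words)
    return Counter({w: 10_000 * c / max(total, 1) for w, c in counts.items()})

def excess_usage(baseline: list[str], current: list[str]) -> dict:
    """Frequency gap per marker word; large positive gaps flag LLM-style prose."""
    base, now = word_freq(baseline), word_freq(current)
    return {w: now[w] - base[w] for w in MARKERS}

print(excess_usage(
    ["We study tumour growth in mice."],
    ["This crucial study delves into the potential of tumour growth models."],
))
```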
— Nieman Lab ran an experiment to see whether ChatGPT would provide correct links to articles from news publications it pays millions of dollars to. It turns out that it does not: instead, ChatGPT confidently makes up entire URLs that lead to 404 error pages because they simply do not exist.
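The check itself is easy to reproduce in principle: take the URLs a chatbot returns and see whether they resolve. A standard-library Python sketch, with a deliberately fictional example URL:

```python
# Sketch of a link-validity check for chatbot-supplied URLs.
import urllib.request
import urllib.error

def is_live(url: str) -> bool:
    """Return True if the URL answers with a non-404 status."""
    req = urllib.request.Request(url, method="HEAD",
                                 headers={"User-Agent": "link-check/0.1"})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status != 404
    except urllib.error.HTTPError as e:
        return e.code != 404
    except urllib.error.URLError:
        return False  # DNS failure, timeout, refused connection, etc.

# A chatbot-supplied URL would go here; this one is intentionally made up.
print(is_live("https://example.com/this-article-does-not-exist"))
```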
— Answers submitted for many undergraduate psychology modules went undetected in 94 per cent of cases and, on average, earned higher grades than real student submissions. AI did particularly well in first- and second-year modules but struggled more in the final-year module.
— e.g. Apple is building ChatGPT into Siri
— Alice has 2 brothers and she also has 2 sisters. How many sisters does Alice's brother have? (The correct answer is three: Alice's two sisters plus Alice herself.) LEO's wrong answer: "Alice's brother has 2 sisters, just like Alice. This is because they share the same set of siblings."
— Copilot's wrong answer: "Alice's brothers have two sisters, one of whom is Alice herself."
— ChatGPT's unhelpful answer: "Your usage limit has been reached. Please upgrade your plan to continue using." (not true)
— From May 22-24, researchers asked the chatbots five questions in 10 EU languages, including how a user would register to vote if they live abroad, what to do to send a vote by mail and when the results of the European Parliament elections would be out. "When you ask [AI chatbots] something for which they didn't have a lot of material and for which you don't find a lot of information on the Internet, they just invent something." Researchers found more than one in three of their answers to be partially or completely incorrect. In contrast to Google's and Microsoft's bots, the ChatGPT-4 models rarely refused to answer questions, leading to higher rates of incorrect or partially correct answers than their chatbot competitors.