Latest News

Last updated 13 Apr, 06:10 PM

BBC News

Southport killer's parents failed in 'moral duty' to report son - Failing to appreciate the danger the killer posed led to "catastrophic consequences", an inquiry finds.

Five key failures of killer's parents and agencies ahead of Southport attack - Inquiry Chair Sir Adrian Fulford said the Southport attack could have been prevented if authorities and the killer's parents had acted more quickly.

EasyJet passengers describe EU border 'nightmare' after flight leaves without them - Airlines warn of further disruption due to the introduction of a new EU digital border control system.

Henry Zeffman: Starmer seeks closer ties with EU - and doesn't mind reopening Brexit divisions - Keir Starmer's approach has provoked anger from the Conservatives and Reform UK.

Italian PM condemns ally Trump over 'unacceptable' Pope criticism - Italy's prime minister and the US president are close allies, but Trump has refused to apologise to the "very weak" Pope Leo XIV.

The Register

Attention, gamers: The FAA wants YOU to be an air traffic controller - GG noob, who cleared you to land? The Federal Aviation Administration continues to face an air traffic controller shortage, and it's hoping that a new demographic of potential applicants can fill the ranks: video gamers. …

WARNING: Oracle's AI obsession could mean higher prices and worse support - Advisers say fewer staff could mean slower answers and tougher renewals. Oracle customers have been warned to watch for changes in support and pricing as Larry Ellison’s company makes huge datacenter spending commitments to support its AI ambitions. …

Claude Code cache confusion as Anthropic tweaks defaults, but quotas still drain - Dev reports suggest long sessions now burn through usage much faster. Anthropic last month reduced the TTL (time to live) for the Claude Code prompt cache from one hour to five minutes for many requests, but said this should not increase costs, despite users reporting faster-depleting quotas. …
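Why a shorter cache TTL could translate into faster quota drain: a minimal toy model, not Anthropic's actual billing. It assumes (hypothetically) that re-writing an expired prompt cache costs more than reading a warm one, so any session whose pauses exceed the TTL keeps re-paying the write cost.

```python
# Toy model only — not Anthropic's implementation. The cost constants are
# hypothetical relative units, chosen to show the shape of the effect:
# a cache write is billed at a premium over a cache hit.

CACHE_WRITE_COST = 1.25   # hypothetical relative cost of (re)writing the cache
CACHE_READ_COST = 0.10    # hypothetical relative cost of a warm cache hit

def session_cost(gaps_between_requests, ttl):
    """Total relative cost of a session whose requests are separated by
    the given gaps (in seconds), under a prompt cache with the given TTL."""
    cost = CACHE_WRITE_COST           # the first request always writes the cache
    for gap in gaps_between_requests:
        if gap <= ttl:
            cost += CACHE_READ_COST   # cache still warm: cheap read
        else:
            cost += CACHE_WRITE_COST  # cache expired: pay the write again
    return cost

# A session with five follow-up requests, each after a ten-minute pause:
gaps = [600] * 5
one_hour = session_cost(gaps, ttl=3600)  # every follow-up is a cheap hit
five_min = session_cost(gaps, ttl=300)   # every follow-up re-writes the cache
```

Under these made-up numbers the five-minute TTL makes the same session several times more expensive than the one-hour TTL, which is consistent with the pattern users describe: quotas deplete faster even though per-request pricing is unchanged.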

Notepad sheds Copilot from toolbar as Microsoft gives subtlety a try - AI gubbins still there, just tucked under 'Writing Tools'. Copilot is on its way out of Notepad, but a return to the basic text editor is not on the cards. …

Booking.com warns reservation data may have checked out with intruders - Travel giant says names, contact details, dates, and hotel messages potentially exposed. Booking.com is warning customers that their reservation details may have been exposed to unknown attackers, in the latest reminder that the travel giant still can't quite keep a lid on the data flowing through its platform. …

New Scientist - Home

Chernobyl at 40: The man with the most dangerous job on Earth - Ever since the Chernobyl nuclear reactor exploded in 1986, scientists have needed to monitor radioactive conditions inside. That job currently falls to Anatoly Doroshenko, who explains the dangers and importance of his work to New Scientist

Collapse of key ocean current may release billions of tonnes of carbon - If the Atlantic Meridional Overturning Circulation shut down, the knock-on effects could release hundreds of billions of tonnes of CO2, raising global temperatures even further

Chernobyl at 40: The past, present and future of a nuclear disaster - Forty years ago, the catastrophic explosion at Chernobyl sent plumes of radioactive waste into the atmosphere. Now, New Scientist has gained exclusive access to learn how vital work to decontaminate the site has been derailed by Russia’s full-scale invasion of Ukraine

We urgently need to prepare for quantum computers breaking encryption - The maths problems that secure your online bank transactions and emails may soon be undermined by quantum technology. It’s imperative we act now, before it’s too late

The secret project to settle controversial maths proof with a computer - Working in secret for more than two years, a group of mathematicians has set out to resolve one of the longest and most bitter battles in modern mathematics

Hacker News

Nothing Ever Happens: Polymarket bot that always buys No on non-sports markets

The Future of Everything Is Lies, I Guess: Safety

Building a CLI for All of Cloudflare

Servo is now available on crates.io

Make Tmux Pretty and Usable (2024)

Slashdot

Booking.com Hit By Data Breach - Booking.com says hackers accessed customer reservation data in a breach that may have exposed booking details, names, email addresses, phone numbers, addresses, and messages shared with accommodations. PCMag reports: On Sunday, users reported receiving emails from Booking.com, warning them that "unauthorized third parties may have been able to access certain booking information associated with your reservation." The email suggests the hackers have already exploited customer information. "We recently noticed suspicious activity affecting a number of reservations, and we immediately took action to contain the issue," Booking.com wrote. "Based on the findings of our investigation to date, accessed information could include booking details and name(s), emails, addresses, phone numbers associated with the booking, and anything that you may have shared with the accommodation." Amsterdam-based Booking.com has now generated new PINs for customer reservations to prevent hackers from accessing them. Still, the incident risks exposing affected customers to potential phishing scams. The Australian Broadcasting Corporation and several Reddit users say they received scam messages from accounts posing as Booking.com. Read more of this story at Slashdot.

Mark Zuckerberg Is Reportedly Building an AI Clone To Replace Him In Meetings - According to the Financial Times, Meta is developing an AI avatar of Mark Zuckerberg that could interact with employees using his voice, image, mannerisms, and public statements, "so that employees might feel more connected to the founder through interactions with it." The Verge reports: Meta may start allowing creators to make AI avatars of themselves if the experiment with Zuckerberg succeeds, according to the Financial Times. [...] Zuckerberg is involved in training the AI avatar, the Financial Times reports, and has also started spending five to 10 hours per week coding on Meta's other AI projects and participating in technical reviews.

Maine Set To Become First State With Data Center Ban - Maine is on track to become the first U.S. state to impose a temporary statewide ban on new data center construction. "Lawmakers in Maine greenlit the text of a bill this week to block data centers from being built in the state until November 2027," reports CNBC. "The measure, which is expected to get final passage in the next few days, also creates a council to suggest potential guardrails for data centers to ensure they don't lead to higher energy prices or other complications for Maine residents." From the report: Maine's bill has a few steps to go through before becoming law, notably whether Gov. Janet Mills will exercise her veto power. Mills asked lawmakers to include an exemption for several areas of the state where data center construction could continue. However, an amendment to do so was struck down in the House, 29 to 115. Complicating Mills' decision is her campaign to become Maine's next senator. Mills is facing off against Graham Platner, an oyster farmer, in a high-profile Democratic primary. Platner is leading Mills in most recent polls by double digits.

Californians Sue Over AI Tool That Records Doctor Visits - An anonymous reader quotes a report from Ars Technica: Several Californians sued Sutter Health and MemorialCare this week over allegations that an AI transcription tool was used to record them without their consent, in violation of state and federal law. The proposed class-action lawsuit, filed on Wednesday in federal court in San Francisco, states that, within the past six months, the plaintiffs received medical care at various Sutter and MemorialCare facilities. During those visits, medical staff used Abridge AI. According to the complaint, this system "captured and processed their confidential physician-patient communications. Plaintiffs did not receive clear notice that their medical conversations would be recorded by an artificial intelligence platform, transmitted outside the clinical setting, or processed through third-party systems." The complaint adds that these recordings "contained individually identifiable medical information, including but not limited to medical histories, symptoms, diagnoses, medications, treatment discussions, and other sensitive health disclosures communicated during confidential medical consultations." In recent years, Abridge's software and AI service have been rapidly deployed across major health care providers nationwide, including Kaiser Permanente, the Mayo Clinic, Duke Health, and many more. When activated, the software captures, transcribes, and summarizes conversations between patients and doctors, and it turns them into clinical notes. Sutter Health began partnering with Abridge two years ago. Sutter spokesperson Liz Madison said the company is aware of the lawsuit. "We take patient privacy seriously and are committed to protecting the security of our patients' information," Madison said. "Technology used in our clinical settings is carefully evaluated and implemented in accordance with applicable laws and regulations."

Will Some Programmers Become 'AI Babysitters'? - Will some programmers become "AI babysitters"? asks long-time Slashdot reader theodp. They share some thoughts from a founding member of Code.org and former Director of Education at Google: "AI may allow anyone to generate code, but only a computer scientist can maintain a system," explained Google.org Global Head Maggie Johnson in a LinkedIn post. So "As AI-generated code becomes more accurate and ubiquitous, the role of the computer scientist shifts from author to technical auditor or expert. "While large language models can generate functional code in milliseconds, they lack the contextual judgment and specialized knowledge to ensure that the output is safe, efficient, and integrates correctly within a larger system without a person's oversight. [...] The human-in-the-loop must possess the technical depth to recognize when a piece of code is sub-optimal or dangerous in a production environment. [...] We need computer scientists to perform forensics, tracing the logic of an AI-generated module to identify logical fallacies or security loopholes. Modern CS education should prepare students to verify and secure these black-box outputs." The NY Times reports that companies are already struggling to find engineers to review the explosion of AI-written code.