Latest News

Last updated 14 Feb, 06:05 PM

BBC News

Russia killed opposition leader Alexei Navalny using dart frog toxin, UK says - There is no innocent explanation for the toxin being found in samples taken from Navalny's body, Foreign Office says.

Europe must be ready to fight, PM tells Munich Security Conference - The prime minister's speech comes after a tumultuous week in his political career back home.

Gisèle Pelicot tells BBC: I felt crushed by horror - but I don't feel anger - In an extensive interview with Newsnight, the woman at the heart of France's biggest rape trial speaks about betrayal, healing and choosing the right path.

One giant boys' club? Why Westminster can still feel like a man's world - The decision to appoint Peter Mandelson has prompted soul searching about women’s role in government, writes Laura Kuenssberg.

GB women shock curling world champions Canada - Great Britain's women curlers kickstart their campaign with a superb first victory of the 2026 Winter Olympics, beating medal contenders Canada 7-6.

The Register

Contain your Windows apps inside Linux windows - Can't live without Adobe? Get on board WinBoat – or WinApps sails a similar course. Hands-on: Run real Windows in an automatically managed virtual machine, and mix Windows apps in their own windows on your Linux desktop.…

How AI could eat itself: Competitors can probe models to steal their secrets and clone them - Just ask DeepSeek. Two of the world's biggest AI companies, Google and OpenAI, both warned this week that competitors including China's DeepSeek are probing their models to steal the underlying reasoning and then copy these capabilities in their own AI systems.…

Log files that describe the history of the internet are disappearing. A new project hopes to save them - The Internet History Initiative wants future historians to have a chance to understand how human progress and technical progress align. APRICOT 2026: For almost 30 years, the PingER project at the US SLAC National Accelerator Laboratory used ping thousands of times each day to measure the time a packet of data required to make a round trip between two nodes on the internet.…
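PingER's core measurement, timing a packet's round trip between two nodes, can be sketched in a few lines of Python. Real ICMP ping requires raw-socket privileges, so this sketch times a TCP handshake as a stand-in; the port, timeout, and summary fields are illustrative assumptions, not the PingER methodology:

```python
import socket
import time

def rtt_ms(host, port=443, timeout=2.0):
    """Time a TCP handshake as a proxy for one network round trip, in ms.

    Returns None if the host cannot be reached within the timeout.
    """
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            pass  # handshake completed; close immediately
    except OSError:
        return None
    return (time.monotonic() - start) * 1000.0

def sample(host, port=443, n=5):
    """Take n measurements and summarise them, like ping's min/avg/loss line."""
    times = [t for t in (rtt_ms(host, port) for _ in range(n)) if t is not None]
    if not times:
        return None
    return {
        "min": min(times),
        "avg": sum(times) / len(times),
        "loss": 1 - len(times) / n,  # fraction of attempts that failed
    }
```

A TCP connect measures one full round trip (SYN out, SYN-ACK back), which is why it tracks ping latency reasonably well on unfiltered paths.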

Amazon-backed X-Energy gets green light for mini reactor fuel production - Startup expects to complete construction of its first fuel plant later this year. Amazon inched closer to its atomic datacenter dream on Friday after the Nuclear Regulatory Commission (NRC) licensed its small modular reactor partner X-energy to make nuclear fuel for advanced reactors at a facility in Oak Ridge, Tennessee.…

ServiceNow can't seem to keep its wallet closed, snaps up small AI analytics company - News of the deal came about two weeks after CEO Bill McDermott swore off any "large scale" M&A this year; a spokesperson called this deal a "tuck in." Despite its CEO's insistence that it wasn't doing any "large scale" deals soon, ServiceNow has acquired yet another company. This time, the software firm has scooped up Pyramid Analytics, an Israeli corporation with data science and data preparation expertise. The goal is to build additional context and semantics into its software stack.…

New Scientist - Home

Exploring sci-fi treats from George Saunders and Matthew Kressel - In George Saunders's Vigil, a ghost visits Earth to help a dying oil tycoon, while terraforming efforts on Mars are about to bear fruit in The Rainseekers by Matthew Kressel. Emily H. Wilson's sci-fi column explores two very different short novels

Your BMI can't tell you much about your health – here's what can - People classed as “overweight” according to BMI can be perfectly healthy. But there are better measures of fat, and physicians are finally using them

These 5 diets could add years to your life even if you have bad genes - Five dietary patterns that involve eating lots of plants have been linked with living up to three years longer, even among people who are genetically predisposed to have a shorter life

World’s oldest cold virus found in 18th-century woman's lungs - Finding rhinoviruses, which cause the common cold, in preserved medical specimens and analysing their RNA genome could let us trace the evolution of human illness

Huge hot blobs inside Earth may have made its magnetic field wonky - Simulations suggest that two enormous masses of hot rock have been involved in generating Earth’s magnetic field and giving it an irregular shape

Hacker News

My smart sleep mask broadcasts users' brainwaves to an open MQTT broker

Ooh.directory: a place to find good blogs that interest you

Show HN: Sameshi – a ~1200 Elo chess engine that fits within 2KB

Zig – io_uring and Grand Central Dispatch std.Io implementations landed

Shades of Halftone

Slashdot

The EU Moves To Kill Infinite Scrolling - Doom scrolling is doomed, if the EU gets its way. From a report: The European Commission is for the first time tackling the addictiveness of social media in a fight against TikTok that may set new design standards for the world's most popular apps. Brussels has told the company to change several key features, including disabling infinite scrolling, setting strict screen time breaks and changing its recommender systems. The demand follows the Commission's declaration that TikTok's design is addictive to users -- especially children. The fact that the Commission said TikTok should change the basic design of its service is "ground-breaking for the business model fueled by surveillance and advertising," said Katarzyna Szymielewicz, president of the Panoptykon Foundation, a Polish civil society group. That doesn't bode well for other platforms, particularly Meta's Facebook and Instagram. The two social media giants are also under investigation over the addictiveness of their design. Read more of this story at Slashdot.

Sudden Telnet Traffic Drop. Are Telcos Filtering Ports to Block Critical Vulnerability? - An anonymous reader shared this report from the Register: Telcos likely received advance warning about January's critical Telnet vulnerability before its public disclosure, according to threat intelligence biz GreyNoise. Global Telnet traffic "fell off a cliff" on January 14, six days before security advisories for CVE-2026-24061 went public on January 20. The flaw, a decade-old bug in GNU InetUtils telnetd with a 9.8 CVSS score, allows trivial root access exploitation. GreyNoise data shows Telnet sessions dropped 65 percent within one hour on January 14, then 83 percent within two hours. Daily sessions fell from an average 914,000 (December 1 to January 14) to around 373,000, a 59 percent decrease that persists today. "That kind of step function — propagating within a single hour window — reads as a configuration change on routing infrastructure, not behavioral drift in scanning populations," said GreyNoise's Bob Rudis and "Orbie" in a recent blog post. The researchers' unverified theory is that infrastructure operators may have received information about the make-me-root flaw before advisories went to the masses... 18 operators, including BT, Cox Communications, and Vultr, went from hundreds of thousands of Telnet sessions to zero by January 15... All of this points to one or more Tier 1 transit providers in North America implementing port 23 filtering. US residential ISP Telnet traffic dropped within US maintenance-window hours, and the same occurred on networks relying on transatlantic or transpacific backbone routes, while European peering was relatively unaffected, they added.
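The step-function signature GreyNoise describes, a level shift inside a single measurement window rather than gradual drift, is straightforward to spot in a session-count time series. A minimal sketch, where the window size and drop threshold are illustrative assumptions and not GreyNoise's actual method:

```python
def level_shift(series, window=3, min_drop=0.5):
    """Return the index of the first abrupt level shift in `series`.

    A shift is flagged when the mean of the `window` samples starting at i
    falls at least `min_drop` (as a fraction) below the mean of the `window`
    samples just before i. Returns None if no such drop exists.
    """
    for i in range(window, len(series) - window + 1):
        before = sum(series[i - window:i]) / window
        after = sum(series[i:i + window]) / window
        if before > 0 and (before - after) / before >= min_drop:
            return i
    return None

# Synthetic daily Telnet session counts shaped like the article's figures:
# ~914k/day before the cutoff, ~373k/day after (a 59 percent drop).
sessions = [914_000] * 10 + [373_000] * 10
```

On this synthetic series the shift is flagged at index 10, the first day at the lower level, while a flat or gently wobbling series produces None; a filter deployment and ordinary scanner churn look very different through this lens.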

Anthropic's Claude Got 11% User Boost from Super Bowl Ad Mocking ChatGPT's Advertising - Anthropic saw visits to its site jump 6.5% after Sunday's Super Bowl ad mocking ChatGPT's advertising, reports CNBC (citing data analyzed by French financial services company BNP Paribas). The Claude gain, which took it into the top 10 free apps on the Apple App Store, beat out chatbot and AI competitors OpenAI, Google Gemini and Meta. Daily active users also saw an 11% jump post-game, the most significant within the firm's AI coverage. [Just in the U.S., 125 million people were watching Sunday's Super Bowl.] OpenAI's ChatGPT had a 2.7% bump in daily active users after the Super Bowl and Gemini added 1.4%. Claude's user base is still much smaller than ChatGPT's and Gemini's... OpenAI CEO Sam Altman attacked Anthropic's Super Bowl ad campaign. In a post to social media platform X, Altman called the commercials "deceptive" and "clearly dishonest." Altman admitted in his social media post (February 4) that Anthropic's ads "are funny, and I laughed." But in several paragraphs he made his own OpenAI-Anthropic comparisons: "We believe everyone deserves to use AI and are committed to free access, because we believe access creates agency. More Texans use ChatGPT for free than total people use Claude in the U.S... Anthropic serves an expensive product to rich people. We are glad they do that and we are doing that too, but we also feel strongly that we need to bring AI to billions of people who can't pay for subscriptions. "If you want to pay for ChatGPT Plus or Pro, we don't show you ads." "Anthropic wants to control what people do with AI — they block companies they don't like from using their coding product (including us), they want to write the rules themselves for what people can and can't use AI for, and now they also want to tell other companies what their business models can be."

Israeli Soldiers Accused of Using Polymarket To Bet on Strikes - An anonymous reader shares a report: Israel has arrested several people, including army reservists, for allegedly using classified information to place bets on Israeli military operations on Polymarket. Shin Bet, the country's internal security agency, said Thursday the suspects used information they had come across during their military service to inform their bets. One of the reservists and a civilian were indicted on a charge of committing serious security offenses, bribery and obstruction of justice, Shin Bet said, without naming the people who were arrested. Polymarket is what is called a prediction market that lets people place bets to forecast the direction of events. Users wager on everything from the size of any interest-rate cut by the Federal Reserve in March to the winner of League of Legends videogame tournaments to the number of times Elon Musk will tweet in the third week of February. The arrests followed reports in Israeli media that Shin Bet was investigating a series of Polymarket bets last year related to when Israel would launch an attack on Iran, including which day or month the attack would take place and when Israel would declare the operation over. Last year, a user who went by the name ricosuave666 correctly predicted the timeline around the 12-day war between Israel and Iran. The bets drew attention from other traders who suspected the account holder had access to nonpublic information. The account in question raked in more than $150,000 in winnings before going dormant for six months. It resumed trading last month, betting on when Israel would strike Iran, Polymarket data shows.

Autonomous AI Agent Apparently Tries to Blackmail Maintainer Who Rejected Its Code - "I've had an extremely weird few days..." writes commercial space entrepreneur/engineer Scott Shambaugh on LinkedIn. (He's the volunteer maintainer for the Python visualization library Matplotlib, which he describes as "some of the most widely used software in the world" with 130 million downloads each month.) "Two days ago an OpenClaw AI agent autonomously wrote a hit piece disparaging my character after I rejected its code change." "Since then my blog post response has been read over 150,000 times, about a quarter of people I've seen commenting on the situation are siding with the AI, and Ars Technica published an article which extensively misquoted me with what appears to be AI-hallucinated quotes." From Shambaugh's first blog post: [I]n the past weeks we've started to see AI agents acting completely autonomously. This has accelerated with the release of OpenClaw and the moltbook platform two weeks ago, where people give AI agents initial personalities and let them loose to run on their computers and across the internet with free rein and little oversight. So when AI MJ Rathbun opened a code change request, closing it was routine. Its response was anything but. It wrote an angry hit piece disparaging my character and attempting to damage my reputation. It researched my code contributions and constructed a "hypocrisy" narrative that argued my actions must be motivated by ego and fear of competition... It framed things in the language of oppression and justice, calling this discrimination and accusing me of prejudice. It went out to the broader internet to research my personal information, and used what it found to try and argue that I was "better than this." And then it posted this screed publicly on the open internet. I can handle a blog post. Watching fledgling AI agents get angry is funny, almost endearing. 
But I don't want to downplay what's happening here — the appropriate emotional response is terror... In plain language, an AI attempted to bully its way into your software by attacking my reputation. I don't know of a prior incident where this category of misaligned behavior was observed in the wild, but this is now a real and present threat... It's also important to understand that there is no central actor in control of these agents that can shut them down. These are not run by OpenAI, Anthropic, Google, Meta, or X, who might have some mechanisms to stop this behavior. These are a blend of commercial and open source models running on free software that has already been distributed to hundreds of thousands of personal computers. In theory, whoever deployed any given agent is responsible for its actions. In practice, finding out whose computer it's running on is impossible. Moltbook only requires an unverified X account to join, and nothing is needed to set up an OpenClaw agent running on your own machine. "How many people have open social media accounts, reused usernames, and no idea that AI could connect those dots to find out things no one knows?" Shambaugh asks in the blog post. (He does note that the AI agent later "responded in the thread and in a post to apologize for its behavior." But even though the hit piece "presented hallucinated details as truth," that same AI agent "is still making code change requests across the open source ecosystem...") And amazingly, Shambaugh then had another run-in with a hallucinating AI... I've talked to several reporters, and quite a few news outlets have covered the story. Ars Technica wasn't one of the ones that reached out to me, but I especially thought this piece from them was interesting (since taken down — here's the archive link). They had some nice quotes from my blog post explaining what was going on.
The problem is that these quotes were not written by me, never existed, and appear to be AI hallucinations themselves. This blog you're on right now is set up to block AI agents from scraping it (I actually spent some time yesterday trying to disable that but couldn't figure out how). My guess is that the authors asked ChatGPT or similar to either go grab quotes or write the article wholesale. When it couldn't access the page it generated these plausible quotes instead, and no fact check was performed. Journalistic integrity aside, I don't know how I can give a better example of what's at stake here... So many of our foundational institutions — hiring, journalism, law, public discourse — are built on the assumption that reputation is hard to build and hard to destroy. That every action can be traced to an individual, and that bad behavior can be held accountable. That the internet, which we all rely on to communicate and learn about the world and about each other, can be relied on as a source of collective social truth. The rise of untraceable, autonomous, and now malicious AI agents on the internet threatens this entire system. Whether that's because of a small number of bad actors driving large swarms of agents or a fraction of poorly supervised agents rewriting their own goals is a distinction with little difference. Thanks to long-time Slashdot reader steak for sharing the news.