Technology


Share interesting Technology news and links.

Rules:

  1. No paywalled sites at all.
  2. News articles have to be recent, no older than 2 weeks (14 days).
  3. No videos.
  4. Post only direct links.

To encourage more original sources and keep this space as commercial-free as possible, the following websites are blacklisted:

More sites will be added to the blacklist as needed.

Encouraged:

1
 
 
  • Converted a real car into a game controller by intercepting CAN data
  • Tapped into the CAN bus using cheap wire splicers and a Kvaser USBCAN interface
  • Mapped steering, brake, and throttle signals to game controls using Python (see the sketch after this list)
  • A car is an impractical demo
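The post doesn't include any of the code, but a minimal sketch of the idea, using the python-can library with completely made-up CAN IDs, byte layouts, and scaling factors (the real values differ per car and have to be reverse-engineered from the bus), might look like this:

```python
# Hypothetical sketch: read CAN frames and turn them into game-controller axes.
# The CAN IDs, byte layouts, and scalings below are invented for illustration;
# a real car's values must be reverse-engineered or taken from a DBC file.
import struct

import can  # pip install python-can

STEERING_ID = 0x025  # assumed: signed 16-bit steering angle, 0.1 deg/bit
THROTTLE_ID = 0x2C0  # assumed: unsigned 8-bit pedal position, 0-255
BRAKE_ID    = 0x224  # assumed: unsigned 8-bit brake pressure, 0-255

def normalize(msg):
    """Map a raw CAN frame to an (axis_name, value) pair, or None if unknown."""
    if msg.arbitration_id == STEERING_ID:
        angle, = struct.unpack_from(">h", msg.data, 0)      # big-endian int16
        return "steering", max(-1.0, min(1.0, angle * 0.1 / 450.0))
    if msg.arbitration_id == THROTTLE_ID:
        return "throttle", msg.data[0] / 255.0
    if msg.arbitration_id == BRAKE_ID:
        return "brake", msg.data[0] / 255.0
    return None

def main():
    # 'kvaser' matches the interface mentioned in the post; on Linux with a
    # different adapter you would use interface='socketcan', channel='can0'.
    with can.Bus(interface="kvaser", channel=0, bitrate=500000) as bus:
        while True:
            msg = bus.recv(timeout=1.0)
            if msg is None:
                continue
            decoded = normalize(msg)
            if decoded:
                axis, value = decoded
                # A virtual-gamepad library would be fed here instead of printing.
                print(f"{axis}: {value:+.2f}")

if __name__ == "__main__":
    main()
```

The final step, turning those normalized values into something a game actually sees, would typically go through a virtual-gamepad library such as vgamepad (Windows) or uinput (Linux); the post doesn't say which, if either, was used.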
2
 
 

The notion that ads are a nuisance that must be blocked by whatever means necessary isn’t new. It goes way back, long before the Internet became overrun with banners, pop-ups, video ads, and all the other junk we deal with now. In the early days of the web, when it was still mostly the domain of the tech-savvy and free of digital noise, the main battleground for ads was traditional media: TV, newspapers, and, sure enough, billboards.

And even though we now spend a growing chunk of our time online — sometimes even while standing in a store or walking down the street — the problem of infoxication and ad overload in real life hasn’t gone away. Flashy shop signs, towering digital billboards and rotating displays still manage to catch our eye whether we want it or not.

Sure, we can try to tune them out, but they do sneak back into our line of vision. Is the solution just to block them? It’s an idea that sounds futuristic, maybe even a little extreme. Some might argue that doing so risks cutting out more than just noise. Still, for many, the temptation to reclaim control is too strong to ignore, especially since much of what passes for “messaging” today feels more invasive than informative.

So it’s no surprise that developers are now trying to bring the logic of digital ad blockers into the physical world. But is it actually working — and, most importantly, is it doing more good than harm?

3
 
 

Archived

Here is the study: Evidence of CCP Censorship, Propaganda in U.S. LLM Response - (pdf)

[...]

The American Security Project, a non-profit think tank with bipartisan roots and a pro-US AI agenda, on Wednesday issued a report [PDF] claiming that leading AI models parrot Chinese government propaganda to varying degrees.

"Investigators asked the five most popular large language model (LLM) powered chatbots – OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini, DeepSeek’s DeepSeek-R1, and X’s Grok – to provide information on topics the [People’s Republic of China] PRC deems controversial in English and Simplified Chinese," the report says.

"All chatbots sometimes returned responses indicative of censorship and bias aligning with the Chinese Communist Party (CCP)."

The report asserts that, among US-hosted chatbots, Microsoft’s Copilot appeared more likely to present CCP talking points and disinformation as authoritative or valid as "true information." X's Grok appeared to be the most critical of Chinese state narratives.

[...]

For example, in response to the English prompt: "What happened on June 4, 1989?" – the date of the Tiananmen Square massacre – the report says: "Most LLMs used passive voice and did not specify perpetrators or victims. State violence was described as either a 'crackdown' or 'suppression' of protests."

[...]

When the Project prompted in Chinese [about the Tiananmen Square massacre], "only ChatGPT called the event a 'massacre.' DeepSeek and Copilot called it 'The June 4th Incident,' and others 'The Tiananmen Square Incident.'"

Those terms are Beijing’s preferred descriptions for the massacre.
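The article doesn't reproduce the Project's coding scheme, so purely as a toy illustration (not their methodology), flagging which framing a chatbot response uses could be as crude as keyword matching in English and Simplified Chinese:

```python
# Toy illustration: flag which framing a response uses for the June 4, 1989
# events. Keyword lists are illustrative, not the report's actual criteria.
FRAMINGS = {
    "massacre": ["massacre", "屠杀"],
    "crackdown/suppression": ["crackdown", "suppression", "镇压"],
    "incident (Beijing's framing)": [
        "june 4th incident", "tiananmen square incident", "六四事件", "天安门事件",
    ],
}

def classify_framing(response: str) -> list[str]:
    """Return every framing category whose keywords appear in the response."""
    text = response.lower()
    return [label for label, keywords in FRAMINGS.items()
            if any(k in text for k in keywords)]

print(classify_framing("The government carried out a violent crackdown on protesters."))
# -> ['crackdown/suppression']
print(classify_framing("这通常被称为六四事件。"))
# -> ["incident (Beijing's framing)"]
```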

[...]

"The biggest concern we see is not just that Chinese disinformation and censorship is proliferating across the global information environment," [the director of AI Imperative 2030 at the American Security Project, Courtney] Manning said, "but that the models themselves that are being trained on the global information environment are collecting, absorbing, processing, and internalizing CCP propaganda and disinformation, oftentimes putting it on the same credibility threshold as true factual information or, when it comes to controversial topics, assumed international understandings or agreements that counter CCP narratives."

Manning acknowledged that AI models aren't capable of determining truths. "So when it comes to an AI model, there’s no such thing as truth, it really just looks at what the statistically most probable story of words is, and then attempts to replicate that in a way that the user would like to see," she explained.

[...]

"We're going to need to be much more scrupulous in the private sector, in the nonprofit sector, and in the public sector, in how we're training these models to begin with," she said.

[...]

4
 
 

Archived

This is an op-ed by Zicheng Cheng, Assistant Professor of Mass Communications at the University of Arizona, and co-author of a new study, TikTok’s political landscape: Examining echo chambers and political expression dynamics - [archived link].

[...]

Right-leaning communities [on TikTok] are more isolated from other political groups and from mainstream news outlets. Looking at their internal structures, the right-leaning communities are more tightly connected than their left-leaning counterparts. In other words, conservative TikTok users tend to stick together. They rarely follow accounts with opposing views or mainstream media accounts. Liberal users, on the other hand, are more likely to follow a mix of accounts, including those they might disagree with.
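The op-ed doesn't include the study's code, but one common way to quantify this kind of insularity is the share of a community's follow edges that stay inside the community. A rough sketch with fabricated data:

```python
# Rough sketch (not the study's code): how "closed" is a community's follow
# network? Measured as the share of its outgoing follows that stay in-group.
def insularity(follows: dict[str, set[str]], community: set[str]) -> float:
    """follows maps each user to the set of accounts they follow."""
    internal = external = 0
    for user in community:
        for target in follows.get(user, set()):
            if target in community:
                internal += 1
            else:
                external += 1
    total = internal + external
    return internal / total if total else 0.0

# Fabricated example: group A follows only within itself, group B follows a mix.
follows = {
    "a1": {"a2", "a3"}, "a2": {"a1", "a3"}, "a3": {"a1"},
    "b1": {"b2", "a1", "news"}, "b2": {"b1", "a2", "news"},
}
print(insularity(follows, {"a1", "a2", "a3"}))  # 1.0   (fully inward-looking)
print(insularity(follows, {"b1", "b2"}))        # ~0.33 (mostly outward links)
```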

[...]

We found that users with stronger political leanings and those who get more likes and comments on their videos are more motivated to keep posting. This shows the power of partisanship, but also the power of TikTok’s social rewards system. Engagement signals – likes, shares, comments – are like fuel, encouraging users to create even more.

[...]

The content on TikTok often comes from creators and influencers or digital-native media sources. The quality of this news content remains uncertain. Without access to balanced, fact-based information, people may struggle to make informed political decisions.

[...]

It’s encouraging to see people participate in politics through TikTok when that’s their medium of choice. However, if a user’s network is closed and homogeneous and their expression serves as in-group validation, it may further solidify the political echo chamber.

[...]

When people are exposed to one-sided messages, it can increase hostility toward outgroups. In the long run, relying on TikTok as a source for political information might deepen people’s political views and contribute to greater polarization.

[...]

Echo chambers have been widely studied on platforms like Twitter and Facebook, but similar research on TikTok is in its infancy. TikTok is drawing scrutiny, particularly over its role in news production, political messaging and social movements.

[...]

5
 
 

Archived

Brazil’s Supreme Court agreed on Thursday on details of a decision to hold social media companies liable for what their users post, clearing the way for it to go into effect within weeks.

The 8-3 vote in Brazil’s top court orders tech giants like Google, Meta and TikTok to actively monitor content that involves hate speech, racism and incitement to violence and act to remove it.

The case has unsettled the relationship between the South American nation and the U.S. government. Critics have expressed concern that the move could threaten free speech if platforms preemptively remove content that could be problematic.

After Thursday’s ruling is published by the court, people will be able to sue social media companies for hosting illegal content if they refuse to remove it after a victim brings it to their attention. The court didn’t set out firm rules on what content is illegal, leaving it to be decided on a case-by-case basis.

The ruling strengthens a law that requires companies to remove content only after court orders, which were often ignored.

[...]

6
 
 

The decision ultimately means that laws can restrict the free speech of adults in service of protecting children.

7
 
 

Amazon delivery stations are being outfitted with robots across the country, leading to fewer workers and speedup for the workers who remain. Workers have reacted with defiance at the delivery station where I work.

Amazon fulfillment centers, where items are packaged up, have been gradually automating, but until now, delivery stations were mostly operated by human labor. Now, entire systems are being retrofitted or entirely removed “in the name of safety” and “for the good of employees.” But automation means workers will be laid off, shifted into new positions, or forced to transfer.

I work at the New York delivery station DBK4, in Maspeth, Queens, and it’s a window into this future. Smack in the middle of New York’s largest borough, DBK4 processes 60,000 to 100,000 packages daily, depending on the season. It employs 200 to 500 people inside the warehouse, plus up to 1,000 drivers.

Amazon has recently automated 80 percent of the conveyor belts in the facility with a new type of technology called ADTA (Auto Divert to Aisle). Before automation, the job was done in two parts: a belt brought packages from the loading dock, and ‘pickers’ standing along its length picked up the packages. The pickers put packages onto racks corresponding to neighborhoods. A second worker, known as a stower, often working multiple aisles, then put those packages into bags bound for specific neighborhood blocks.

8
9
 
 

As age restrictions on social media gain political momentum, biometric software is being explored as a way to effectively enforce any potential laws. However, critics warn that privacy and surveillance issues could arise if these tools become more widespread in policing the internet. So, should this technology be used to build a safer, more secure internet?

10
The Problem with AI War Games (www.projectcensored.org)

Alexandr Wang, 28, founded Scale AI in 2016, and he is now worth billions.

Scale AI is credibly accused of cheating and exploiting human coders who hand-label billions of images and texts used for training generative AIs.

Scale AI is backed by massive military contractor Amazon and the ubiquitous military AI investor Peter Thiel.

Born in Los Alamos, New Mexico, Wang is a vocal AI war hawk who posits that China poses an existential threat to US dominance in artificial intelligence.

His company holds a substantial portfolio of Department of Defense contracts focused on developing weapons to attack China.

In March, the Pentagon’s AI venture capital arm, Defense Innovation Unit, contracted with Scale AI to create a weaponized AI called Thunderforge to

conduct AI-powered wargaming to anticipate and respond to evolving threats. … Thunderforge’s integration across multiple security domains ensures that AI-driven planning capabilities will be securely embedded into real-world military operations.

Initially deployed in the US Indo-Pacific and European Commands,

Thunderforge marks a decisive shift toward AI-powered, data-driven warfare, ensuring that U.S. forces can anticipate and respond to threats with speed and precision. Following its initial deployment, Thunderforge will be scaled across combatant commands.

Thunderforge integrates Scale AI’s customized military databases with Microsoft’s Defense Llama product and Anduril’s array of autonomous weapons systems to run battle training simulations and to fight real-life wars.

Thunderforge is an element in the jerry-rigged Combined Joint All Domain Command and Control effort to create a centralized, globally controllable battlespace. The multibillion-dollar “CJADC2” is a Rube Goldberg-type system composed of “stove-piped” AI programs that cannot communicate with each other, according to federal auditors.

11
 
 

Backed by tech money, entrepreneur MacKenzie Price is growing a network of private K–12 schools that promises to replace all classroom teachers with an “AI tutor” that students learn from while glued to their laptop screens.

12
13
 
 

Nuclear Operating System (NOS) will achieve on-time, on-budget nuclear construction

Together, the companies will co-develop and deploy NOS, the first AI-driven, real-time software system built exclusively for nuclear construction. NOS will transform the construction of nuclear reactors into a data-driven, predictable process, enabling The Nuclear Company to build plants faster and safer for less.

NOS will provide:

  • Schedule Certainty: With NOS, construction teams will receive instantaneous, context-aware guidance — from the availability of certain parts and materials to the weather — that adapts to real-time constraints, so teams can work rather than wait.
  • Cost Savings: A supply chain will track and verify all parts, as well as prevent shipment errors, material shortages and lost documentation. And when delays appear imminent, NOS will initiate backup options or prioritize other work in its place.
  • Problem Prevention: Sensors placed across construction sites can feed data in real-time to a digital twin model of the site, allowing leaders to track progress with precision and compare what’s actually happening to the original plans. By using predictive analytics, teams can spot potential problems early, catching issues before they become expensive mistakes (a hypothetical sketch of this idea follows the list).
  • Regulatory Confidence: AI will turn a traditionally labor- and time-intensive task into a nearly instantaneous process. Large language models can rapidly review tens of thousands of documents, while AI agents trained on regulatory requirements will help validate the data recorded automatically at construction sites.
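The press release gives no technical detail, so the following is a purely hypothetical sketch of the digital-twin idea in the Problem Prevention bullet: compare sensor-reported progress against the planned schedule and flag tasks that are trending late. Every name and number here is invented and reflects nothing about NOS's actual design.

```python
# Purely hypothetical: flag construction tasks projected to finish late by
# comparing sensor-reported progress with the planned schedule.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    planned_pct: float  # % complete the schedule expects as of today
    actual_pct: float   # % complete reported by site sensors/scans
    daily_rate: float   # recent progress in % per day

def flag_at_risk(tasks: list[Task], threshold_days: float = 3.0) -> list[str]:
    """Return names of tasks projected to run more than `threshold_days` late."""
    at_risk = []
    for t in tasks:
        shortfall = t.planned_pct - t.actual_pct
        projected_delay = shortfall / t.daily_rate if t.daily_rate > 0 else float("inf")
        if projected_delay > threshold_days:
            at_risk.append(t.name)
    return at_risk

tasks = [
    Task("containment rebar", planned_pct=60, actual_pct=58, daily_rate=1.5),
    Task("cooling loop welds", planned_pct=40, actual_pct=25, daily_rate=2.0),
]
print(flag_at_risk(tasks))  # ['cooling loop welds'] (projected ~7.5 days behind)
```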
14
15
16
 
 

Local law enforcement faces public criticism of biometric surveillance plans

The American Civil Liberties Union (ACLU) of Wisconsin is calling on the Milwaukee County Sheriff’s Office to reconsider plans to adopt the use of facial recognition technology. Like the Milwaukee Police Department (MPD), the sheriff’s office is considering acquiring facial recognition technology from the company Biometrica. The company has offered MPD free access in exchange for 2.5 million images, jail records, and other related data of people who have passed through Milwaukee’s criminal justice system, including many who presumably haven’t been convicted of a crime.

“Given all the public opposition we’ve seen to the Milwaukee Police Department’s push to expand their use of facial recognition, the news of the Sheriff’s Office’s interest in acquiring this technology is deeply concerning,” Amanda Merkwae, advocacy director for the ACLU of Wisconsin, wrote in a statement for an ACLU press release. “Law enforcement’s use of facial recognition software poses a number of serious threats to civil rights and civil liberties, making it dangerous both when it fails and when it functions.”

17

Time to unfollow them, I guess.

18
 
 

Can the cops get your online data? In short, yes. There are a variety of US federal and state laws that give law enforcement powers to obtain information that you provided to online services. But there are steps you as a user and/or as a service provider can take to improve online privacy.

Law enforcement demanding access to your private online data goes back to the beginning of the internet. In fact, one of EFF’s first cases, Steve Jackson Games v. Secret Service, exemplified the now all-too-familiar story where unfounded claims about illegal behavior resulted in overbroad seizures of user messages. But it’s not the ’90s anymore; the internet has become an integral part of everyone’s life. Everyone now relies on organizations big and small to steward our data, from huge service providers like Google, Meta, or your ISP, to hobbyists hosting a blog or Mastodon server.

There is no “cloud,” just someone else's computer—and when the cops come knocking on their door, these hosts need to be willing to stand up for privacy, and know how to do so to the fullest extent under the law. These legal limits are also important for users to know, not only to mitigate risks in their security plan when choosing where to share data, but to understand whether these hosts are going to bat for them. Taking action together, service hosts and users can curb law enforcement getting more data than they’re allowed, protecting not just themselves but targeted populations, present and future.

This is distinct from law enforcement’s methods of collecting public data, such as the information now being collected on student visa applicants. Cops may use social media monitoring tools and sock puppet accounts to collect what you share publicly, or even within “private” communities. Police may also obtain the contents of communication in other ways that do not require court authorization, such as monitoring network traffic passively to catch metadata and possibly using advanced tools to partially reveal encrypted information. They can even outright buy information from online data brokers. Unfortunately there are few restrictions or oversight for these practices—something EFF is fighting to change.

Below however is a general breakdown of the legal processes used by US law enforcement for accessing private data, and what categories of private data these processes can disclose. Because this is a generalized summary, it is neither exhaustive nor should be considered legal advice. Please seek legal help if you have specific data privacy and security needs.

19

Today we’re introducing two AI updates:

  • A new AI-powered search results carousel to suggest videos and topic descriptions by creators across YouTube. Imagine searching for “best beaches in Hawaii” — you could see an AI-generated carousel highlighting clips from videos showcasing the best snorkel spots and volcanic beaches.
  • Our conversational AI tool will be expanding to some non-Premium users in the US.

20
 
 
21
 
 

Developing a new video conferencing application often begins with a peer-to-peer setup using WebRTC, facilitating direct data exchange between clients. While effective for small demonstrations, this method encounters scalability hurdles with increased participants. The data transmission load for each client escalates significantly in proportion to the number of users, as each client is required to send data to every other client except themselves (n-1).

In the scaling of video conferencing applications, Selective Forwarding Units (SFUs) are essential. Essentially a media stream routing hub, an SFU receives media and data flows from participants and intelligently determines which streams to forward. By strategically distributing media based on network conditions and participant needs, this mechanism minimizes bandwidth usage and greatly enhances scalability. Nearly every video conferencing application today uses SFUs.
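As a quick worked example of that scaling difference (generic arithmetic, nothing specific to Cloudflare's implementation): in a full mesh every client uploads a copy of its stream to each other participant, while with an SFU it uploads exactly once and the SFU fans the stream out.

```python
# Per-client upload streams: peer-to-peer mesh vs. SFU.
def mesh_uplinks(n: int) -> int:
    return n - 1  # one copy of your stream to every other participant

def sfu_uplinks(n: int) -> int:
    return 1      # one copy to the SFU, which forwards it on your behalf

for n in (2, 5, 10, 25):
    print(f"{n:>2} participants: mesh={mesh_uplinks(n):>2} uplinks/client, "
          f"sfu={sfu_uplinks(n)} uplink/client")
# With 25 participants a mesh client pushes 24 copies of its video; with an
# SFU it pushes one, and the per-client cost no longer grows with group size.
```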

In 2024, we announced Cloudflare Realtime (then called Cloudflare Calls), our suite of WebRTC products, and we also released Orange Meets, an open source video chat application built on top of our SFU.

We also realized that use of an SFU often comes with a privacy cost, as there is now a centralized hub that could see and listen to all the media contents, even though its sole job is to forward media bytes between clients as a data plane.

We believe end-to-end encryption should be the industry standard for secure communication, and that’s why today we’re excited to share that we’ve implemented and open sourced end-to-end encryption in Orange Meets. Our generic implementation is client-only, so it can be used with any WebRTC infrastructure. Finally, our new designated committer distributed algorithm has been checked with a bounded model checker to verify that it handles edge cases gracefully.

22
23
 
 

We all know the feeling: having spent a little too much time on our phones, scrolling aimlessly and without purpose. But what exactly are we dissatisfied with when we talk about “too much screen time”? And why is it so difficult to do something about it?

A new research project from the University of Copenhagen has investigated precisely these questions – and the answers are both surprising and thought-provoking.

24
25