Meanwhile, some Italian YouTuber was raided because some portable consoles already came with ROMs in their memory. They only go after individuals.
Is this how Disney becomes the owner of all of the AI companies too? Lol
I am holding my breath! Will they walk free, or get a $10 million fine and then keep doing what every other thieving, embezzling, looting, polluting, swindling, corrupting, tax-evading mega-corporation has been doing for a century?
This is how corruption works - the fine is the cost of business. Being given only a fine of $10 million is such a win that they'll raise $10 billion in new investment on its back.
Well, maybe they shouldn't have committed one of the largest violations of copyright and intellectual property ever.
Probably the largest single instance ever.
I feel like it can't even be close. What would even compete? I know I've gone a little overboard with my external hard drive, but I don't think even I'm to that level.
Good. Fuck those fuckers.
We just need to show that ChatGPT and the like can generate Nintendo-based content, then let them fight it out between themselves.
They will probably just merge into another mega-golem controlled by one of the seven people who own the planet.
Mario, voiced by Chris Pratt, will become the new Siri, then the new persona for all AI.
In the future, all global affairs will be divided across the lines of Team Mario and Team Luigi. Then the final battle, then the end.
*dabs, mournfully*
Only 80% of it; the other 7 billion of us own anything from nothing to a few hundred square metres each.
I myself don't allow my data to be used for AI, so if anyone did, they owe me a boatload of gold coins. That's just my price. Great tech, though.
Now they're in the "finding out" phase of the "fucking around and finding out".
Fucking good!! Let the AI industry BURN!
What um, what court system do you think is going to make that happen? Cause the current one is owned by an extremely pro-AI administration. If anything gets appealed to SCOTUS they will rule for AI.
The people who literally own this planet have investigated the people who literally own this planet and found that they literally own this planet and what the FUCK are you going to do about it, bacteria of the planet?
^
What in the absolute fuck are you talking about?! Your comment is asinine. "Bacteria of the planet"? The fuck?! Do you have the same "worm in the brain" that RFK claims to have? Because you sound just as stupid as him.
You claim people "own" this planet… um… what in the absolute fuck? Yes, people with money have always pushed an agenda, but "owning" it is beyond the dumbest statement.
Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.
And yet, despite 20 years of experience, the only side Ashley presents is the technologists' side.
I hope LLMs and generative AI crash and burn.
I'm thinking, honestly: what if that's the planned purpose of this bubble?
I'm explaining: those "AIs" involve assembling large datasets and making them available, poisoning the Web, and creating demand for a specific kind of hardware.
When it bursts, not everything bursts.
Suddenly there will be plenty of no longer required hardware usable for normal ML applications like face recognition, voice recognition, text analysis to identify its author, combat drones with target selection, all kinds of stuff. It will be dirt cheap, compared to its current price, as it was with Sun hardware after the dotcom crash.
There will still be those datasets, which can be analyzed for plenty of purposes. Legal or not, they have already been processed into a usable and convenient state.
There will be the Web, covered in a layer of AI slop as tall as the Great Wall of China.
There will likely be a bankrupt nation with a lot of things failing as a result.
And there will still be all the centralized services. Suppose on that day you go to search for something on Google, and there's only the Google summary present, no results list (or maybe a results list, whatever, but suddenly weighted differently), saying that you've been owned by domestic enemies, yadda-yadda, and the patriotic corporations are implementing a popular state of emergency or something like that. You go to Facebook, and when you write something there, your messages are premoderated by an AI so that you'd not, god forbid, be able to say something wrong. An LLM might not be able to support a decent enough conversation, but editing out things you say, or PGP keys you send, in real time without anything appearing strange - easily. Or changing some real person's style of speech to yours.
Suppose all non-degoogled Android installations start doing things like that; Amazon's logistics suddenly start working to support a putsch; Facebook and WhatsApp do what I described, or just fail; Apple gives a presentation of a new, magnificent, ingenious, miraculous, patriotic change to a better system of government, maybe even with Jony Ive as the speaker, and possibly does the same unnoticeable censorship; Microsoft pushes one malicious update 3 months earlier, putting a backdoor doing the same into all Windows installations, and commits its datacenters to the common effort; and let's just say it's possible that a similar thing is done by some Linux developer who believes in an idea, and by some of the major distributions - it doesn't need to do much, just provide a backdoor usable remotely.
I don't list Twitter because, honestly, it doesn't seem to work well enough or have good enough coverage.
So this seems a pretty plausible apocalypse scenario, one which leads to the sudden installation of a dictatorial regime with all the necessary surveillance, planning, censorship and enforcement already in place as functioning systems.
Of course, apocalypse scenarios have been a normal thing in movies for many years, but it's funny how the more plausible they become, the less often they are depicted in art.
It's so very, very, deeply, fucking bleak. I can't sleep at night, because I see this clear as day. I feel like a Jew in 1938 Berlin, only unlike that guy I can't get out, because this is global. There is literally nowhere to run.
Either society is going to crash and burn, or we will see global war, which will crash and burn society.
There is no escape, the writing is on the fucking wall.
An important note here: in the summary judgment order, the judge already ruled that using Plaintiffs' works "to train specific LLMs [was] justified as a fair use" because "[t]he technology at issue was among the most transformative many of us will see in our lifetimes."
The plaintiffs are not suing Anthropic for infringing on their copyright; the court has already ruled that argument so obviously could not succeed that it was dismissed. Their only remaining claim is that Anthropic downloaded the books from piracy sites using BitTorrent.
This isn't about LLMs anymore; it's a standard "You downloaded something on BitTorrent and made a company mad"-type case of the kind that has been going on since Napster.
Also, the headline is incredibly misleading. It's ascribing feelings to an entire industry based on a common legal filing that is not by itself noteworthy. Unless you really care about legal technicalities, you can stop here.
The actual news, the new factual thing that happened, is that the Consumer Technology Association and the Computer and Communications Industry Association filed an amicus brief in Anthropic's appeal of an issue the court ruled against it on.
This is a pretty normal legal filing about legal technicalities. It isn't really newsworthy outside of, maybe, some people in the legal profession who are bored.
The issue was class certification.
Three people sued Anthropic. Instead of suing Anthropic on behalf of only themselves, they moved to be certified as a class. That is to say, they wanted to sue on behalf of a larger group of people, in this case a "Pirated Books Class" of authors whose books Anthropic downloaded from book piracy websites.
The judge ruled they can represent the class; Anthropic appealed the ruling. During this appeal, an industry group filed an amicus brief with arguments supporting Anthropic's position. This is not uncommon: The Onion famously filed an amicus brief with the Supreme Court when it was about to rule on issues of parody. Like everything The Onion writes, it's a good piece of satire: link
The site formatting broke it. Maybe it'll work as a link
Yup, seems to work
Thanks! That was a good read.
threatens to "financially ruin" the entire AI industry
No. Just the LLM industry and the AI slop image and video generation industries. All of the legitimate uses of AI (drug discovery, finding solar panel improvements, self-driving vehicles, etc.) are completely immune from this lawsuit, because they're not dependent on stealing other people's work.
As Anthropic argued, it now "faces hundreds of billions of dollars in potential damages liability at trial in four months" based on a class certification rushed at "warp speed" that involves "up to seven million potential claimants, whose works span a century of publishing history," each possibly triggering a $150,000 fine.
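For scale, here's the back-of-the-envelope math behind those figures (a minimal sketch using the numbers quoted above plus the statutory range in 17 U.S.C. § 504(c), which runs from a $750 floor per work to the $150,000 willful-infringement cap the article cites):

```python
# Rough statutory-damages math, using the figures quoted above.
# 17 U.S.C. § 504(c): $750-$30,000 per infringed work, up to $150,000 if willful.
claimants = 7_000_000            # "up to seven million potential claimants"
floor_per_work = 750             # statutory minimum per work
willful_cap_per_work = 150_000   # cap for willful infringement

print(f"floor:   ${claimants * floor_per_work:>17,}")       # $    5,250,000,000
print(f"ceiling: ${claimants * willful_cap_per_work:>17,}")  # $1,050,000,000,000
```

Even at the statutory floor the total is over $5 billion; at the willful cap it passes $1 trillion, so "hundreds of billions" is, if anything, conservative.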
So you knew what stealing the copyrighted works could result in, and your defense is that you stole too much? That's not how that works.
Actually that usually is how it works. Unfortunately.
"Too big to fail" was probably made up by the big ones.
Let's go baby! The law is the law, and it applies to everybody
If the "genie doesn't go back in the bottle", make him pay for what he's stealing.
The law absolutely does not apply to everybody, and you are well aware of that.
Unfortunately, this will probably lead to nothing: in our world, only the poor seem to be punished for stealing. Corporations always get away with everything, so we sit on the couch and shout "YES!!!" at the fact that they're trying to console us with this.
Good. Burn it down. Bankrupt them.
If it's so "critical to national security" then nationalize it.
Oh no! Building a product with stolen data was a rotten idea after all. Well, at least the AI companies can use their fabulously genius PhD level LLMs to weasel their way out of all these lawsuits. Right?
People cheering for this have no idea of the consequence of their copyright-maximalist position.
If using images, text, etc. to train a model is copyright infringement, then there will be NO open models, because open source model creators could not possibly obtain licensing for every piece of written or visual media in the Common Crawl dataset, which is what most of these things are trained on.
As it stands now, corporations don't have a monopoly on AI specifically because copyright doesn't apply to AI training. Everyone has access to Common Crawl and the other large public datasets made from crawling the public Internet, so anyone can train a model on their own without worrying about obtaining billions of different licenses from every single individual who has ever written a word or drawn a picture.
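To make "everyone has access" concrete, here's a minimal sketch of streaming text records out of one Common Crawl WET archive with Python's `requests` and `warcio` libraries (the segment path is a made-up placeholder; real paths come from the `wet.paths.gz` listing published for each crawl):

```python
# Minimal sketch: stream one Common Crawl WET file (plain-text page extracts)
# straight off the public bucket, no license negotiation involved.
import requests
from warcio.archiveiterator import ArchiveIterator

# Placeholder path; substitute a real one from the crawl's wet.paths.gz listing.
URL = ("https://data.commoncrawl.org/crawl-data/"
       "CC-MAIN-2024-10/segments/EXAMPLE/wet/EXAMPLE.warc.wet.gz")

with requests.get(URL, stream=True) as resp:
    resp.raise_for_status()
    for record in ArchiveIterator(resp.raw):   # warcio handles the gzip framing
        if record.rec_type == "conversion":    # WET text records
            page_url = record.rec_headers.get_header("WARC-Target-URI")
            text = record.content_stream().read().decode("utf-8", "replace")
            print(page_url, text[:200])        # URL plus first 200 chars of text
            break
```

That's the entire barrier to entry today, which is exactly the open access this comment is describing.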
If there is a ruling that training violates copyright, then the only entities that could possibly afford to train LLMs or diffusion models are companies that own a large amount of copyrighted material. Sure, one company will lose a lot of money and/or be destroyed, but the legal precedent would be set so that it is impossible for anyone who doesn't have billions of dollars to train AI.
People are shortsightedly seeing this as a victory for artists or some other nonsense. It's not. This is a fight where large copyright holders (Disney and other large publishing companies) want to completely own the ability to train AI because they own most of the large stores of copyrighted material.
If the copyright holders win this, then open source training material like Common Crawl would be completely unusable for training models in the US/the West, because any person who has posted anything to the Internet in the last 25 years could simply sue for copyright infringement.
Anybody can use copyrighted works under fair use for research, more so if your LLM model is open source (I would say this fair use should only actually apply if your model is open source...). You are wrong.
In this case we don't need to break the copyright protections that shield us from corporations, and that also incidentally protect open source and libre software.