this post was submitted on 10 Jul 2023
350 points (91.9% liked)

Technology


Which of the following sounds more reasonable?

  • I shouldn't have to pay for the content that I use to tune my LLM model and algorithm.

  • We shouldn't have to pay for the content we use to train and teach an AI.

By calling it AI, the corporations are able to advocate for a position that's blatantly pro corporate and anti writer/artist, and trick people into supporting it under the guise of a technological development.

top 50 comments
[–] pensivepangolin@lemmy.world 96 points 2 years ago (5 children)

I think it’s the same reason the CEOs of these corporations are clamoring about their own products being doomsday devices: it gives them massive power over crafting regulatory policy, thus letting them make sure it’s favorable to their business interests.

Even more frustrating when you realize, and feel free to correct me if I’m wrong, these new “AI” programs and LLMs aren’t really novel in terms of theoretical approach: the real revolution is the amount of computing power and data available to throw at them.

[–] assassin_aragorn@lemmy.world 58 points 2 years ago (6 children)

The funniest thing I've seen on this is the ChatGPT CEO, Altman, talking about how he's a bit afraid of what they've created and how it needs limitations -- and then when the EU begins to look at regulations, he immediately rejects the concept, to the point of threatening to leave the European market. It's incredibly transparent what they're doing.

Unfortunately I don't know enough about the technology to say if the algorithms and concepts themselves are novel, but without a doubt they couldn't exist without modern computing power capabilities.

[–] FancyGUI@lemmy.fancywhale.ca 20 points 2 years ago* (last edited 2 years ago) (5 children)

I can tell you for a fact that there's nothing new going on. Only the MASSIVE investment from Microsoft to allow them to train on an insane amount of data. I am no "expert" per se, but I've been studying and working with AI for over a decade - so feel free to judge my reply as you please

[–] Peruvian_Skies@kbin.social 10 points 2 years ago (1 children)

The concepts themselves are some 30 years old, but storage capacity and processing speed have only recently reached a point where generative AI outperforms competing solutions.

But regarding the regulation thing, I don't know what was said or proposed, and this is just me playing devil's advocate: could it be that the CEO simply doesn't agree with the specifics of the proposed regulations while still believing that some other, different kind of regulation should exist?

[–] rainh@kbin.social 15 points 2 years ago (1 children)

Certainly could be, but probably an optimistic take. Most likely they're just trying to do what corporations have been doing for ages, which is to weaponize government policy to prevent competition. They don't want restrictions that will materially impact their product, they want restrictions that will materially impact startups to make it more difficult for them to intrude on the established space.

[–] jumperalex@lemmy.world 7 points 2 years ago

I think if you fed your response into ChatGPT and asked it to summarize in two words it would return,

"Regulatory Capture"

[–] ywein@lemmy.ml 8 points 2 years ago

LLMs are pretty novel. They were made possible by the invention of the Transformer model, which operates significantly differently from, say, an RNN.
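To make that difference concrete, here's a toy numpy sketch (illustrative only, nothing like a production model): an RNN has to consume tokens one at a time through a recurrent hidden state, while Transformer-style self-attention relates every token to every other token in one parallel step.

```python
import numpy as np

rng = np.random.default_rng(0)
seq, dim = 4, 8                       # 4 tokens, 8-dimensional embeddings
x = rng.standard_normal((seq, dim))

# RNN: strictly sequential -- step t cannot start until step t-1 is done.
W_h = rng.standard_normal((dim, dim)) * 0.1
W_x = rng.standard_normal((dim, dim)) * 0.1
h = np.zeros(dim)
for t in range(seq):
    h = np.tanh(h @ W_h + x[t] @ W_x)

# Self-attention: one matrix of pairwise scores, computed all at once.
scores = x @ x.T / np.sqrt(dim)                # (seq, seq) token-to-token
weights = np.exp(scores)
weights /= weights.sum(axis=1, keepdims=True)  # softmax over each row
attended = weights @ x                         # every token attends to all others

print(attended.shape)  # prints (4, 8)
```

That parallelism is exactly what lets Transformers soak up the huge datasets and compute budgets discussed above.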

[–] assassinatedbyCIA@lemmy.world 6 points 2 years ago (1 children)

It also plays into the hype cycle they’re trying to create. Saying you’ve made an AI is more likely to capture the attention of the masses than saying you have an LLM. Ditto that point for the existential doomerism that the CEOs have. Saying your tech is so powerful that it might lead to humanity’s extinction does wonders in building hype.

[–] pensivepangolin@lemmy.world 4 points 2 years ago

Agreed. And all you really need to do is browse any of the headlines from even respectable news outlets to see how well it’s working. It’s just article after article uncritically parroting whatever claims these CEOs make at face value at least 50% of the time. It’s mind-numbing.

[–] Phantom_Engineer@lemmy.ml 3 points 2 years ago

The fear mongering is pretty ridiculous.

"AI could DESTROY HUMANITY. It's like the ATOMIC BOMB! Look at its RAW POWER!"

AI generates an image of cats playing canasta.

"By God...."

[–] Iceblade02@lemmy.world 34 points 2 years ago (2 children)

IMO content created by either AI or LLMs should have a special license and be considered AI public domain (unless they can prove that they own all content the AI was trained on). Commercial content made based on content marked with this license would be subject to a flat % tax applied to the product price and earmarked for a fund that distributes to human creators (coders, writers, musicians, etc.).

[–] Trainguyrom@reddthat.com 12 points 2 years ago (2 children)

I think the cleaner (and most likely) outcome is that AI-generated work is considered public domain. Since public domain content can already be edited, combined, and arranged to create copyrighted content, this would largely clear the path for creators to use AI more prominently in their workflows.

[–] kklusz@lemmy.world 2 points 2 years ago (2 children)

What about LLM generated content that was then edited by a human? Surely authors shouldn't lose copyright over an entire book just because they enlisted the help of LLMs for the first draft.

[–] Chocrates@lemmy.world 30 points 2 years ago (1 children)

Both sound the same to me, IMO. Private companies scraping ostensibly public data to sell it. No matter how you word it, they are trying to monetize stuff that is out in the open.

[–] Dran_Arcana@lemmy.world 6 points 2 years ago* (last edited 2 years ago) (1 children)

I don't see why a single human should be able to profit off learning from others but a group of humans doing it for a company cannot. This is just how humanity advances at whatever scale.

[–] Chocrates@lemmy.world 9 points 2 years ago (1 children)

I had a comment about the morality of it at first, but I pulled it out. This is not an easy question to answer. Corporations gatekeeping knowledge seems weird and dystopian, but the knowledge is out there and they are just making connections between it. It also touches on copyright and fair use.

[–] Dran_Arcana@lemmy.world 3 points 2 years ago

I agree it's a much more complicated issue than most people give it credit for.

[–] Fylkir@lemmy.sdf.org 25 points 2 years ago (1 children)

I see it like this:

Our legal system has the concept of mechanical licensing. If your song exists, someone can demand the right to cover it and the law will favor them. The result of an LLM has less to do with your art than a cover of your song does.

There are plenty of cases of a cover eclipsing the original version of a song in popularity, and yet I have never heard a single person argue that we should get rid of the right to cover a song.

[–] nosycat@forum.fail 15 points 2 years ago

Sure, you have the legal right to cover someone else's song without asking permission first, but you still have to pay them royalties afterwards, at fair market rates.

[–] itsnotlupus@lemmy.world 23 points 2 years ago (2 children)

I'll note that there are plenty of models out there that aren't LLMs and that are also being trained on large datasets gathered from public sources.

Image generation models, music generation models, etc.
Heck, it doesn't even need to be about generation. Music recognition and image recognition models can also be trained on the same sort of datasets, and arguably come with similar IP right questions.

It's definitely a broader topic than just LLMs, and attempting to enumerate exhaustively the flavors of AIs/models/whatever that should be part of this discussion is fairly futile given the fast evolving nature of the field.

[–] themarty27@lemmy.sdf.org 4 points 2 years ago* (last edited 2 years ago) (2 children)

Still, all those models are, even conceptually, far removed from AI. They would most properly be called Machine Learning Models (MLMs).

[–] itsnotlupus@lemmy.world 5 points 2 years ago

The term AI was coined many decades ago to encompass a broad set of difficult problems, many of which have become less difficult over time.

There's a natural temptation to remove solved problems from the set of AI problems, so playing chess is no longer AI, diagnosing diseases through a set of expert system rules is no longer AI, processing natural language is no longer AI, and maybe training and using large models is no longer AI nowadays.

Maybe we do this because we view intelligence as a fundamentally magical property, and anything that has been fully described has necessarily lost all its magic in the process.
But that means that "AI" can never be used to label anything that actually exists, only to gesture broadly at the horizon of what might come.

[–] assassin_aragorn@lemmy.world 2 points 2 years ago

Fair enough.

[–] aezart@lemmy.world 21 points 2 years ago (1 children)

If an LLM was trained on a single page of GPL code or a single piece of CC-BY art, the entire set of model weights and any outputs from the model must be licensed the same way. Otherwise this whole thing is just blatant license laundering.

[–] paperbenni@lemmy.world 11 points 2 years ago

This depends on how transformative the act of encoding the data in an LLM is. If you have overfitting out the ass and the model can recite its training material verbatim then it's an illegal copy of the training material. If the model can only output content that would be considered transformative if a human with knowledge of the training data created it, then so is the model.
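As a rough illustration of how one might probe for that kind of verbatim recitation, here's a hypothetical sketch based on long n-gram overlap (the function names and approach are mine for illustration, not any standard legal or technical test):

```python
def ngrams(text, n):
    """Set of all n-word sequences in the text."""
    words = text.split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def verbatim_overlap(training_text, output_text, n=6):
    """Fraction of the output's n-grams that appear verbatim in the training text.

    Near 1.0 suggests recitation/overfitting; near 0.0 suggests the output
    is (at least lexically) transformative.
    """
    out = ngrams(output_text, n)
    if not out:
        return 0.0
    return len(out & ngrams(training_text, n)) / len(out)

# A model that parrots its training data scores 1.0:
sample = "the quick brown fox jumps over the lazy dog every single day"
print(verbatim_overlap(sample, sample))  # prints 1.0
```

Real memorization audits are far more involved, but the intuition is the same: the longer the verbatim spans an output shares with its training data, the weaker the "transformative" argument gets.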

[–] rarely@sh.itjust.works 15 points 2 years ago

In fairness, AI is a buzzword that came out well before LLMs. It's used to mean "tHe cOmpUtER cAn tHink!". We play against "AI" in games all the time, but they aren't AI as we know it today.

ML (machine learning) is a more accurate descriptor, but it doesn't have the same pizzazz as AI does.

The larger issue is that innovation is sometimes done for innovation's sake. Profits gets mixed up there and a board has to show profits to shareholders and then you get VCs trying to "productize" and monetize everything.

What's more is there are only a handful of players in the AI space, but because they are giving API access to other companies, those companies are building more and more sketchy uses of that tech.

It wouldn't be a huge deal if LLMs trained on copyrighted material and then gave the service away for free. As it stands, some LLMs are churning out work that could be protected under copyright law if made by humans (AI work can't be copyrighted under US law), and turning a profit.

I don't think "it was AI" will hold up in court though. May need to do some more innovation.

Also, there are some LLMs being trained on public domain info to avoid copyright problems. But works only go into the public domain 70 years after the copyright holder's death (Disney being the biggest extender of that rule), so your AI will be a tad outdated in its "knowledge".

[–] QHC@lemmy.world 14 points 2 years ago

I think you are likely right, but it's more general than just about training costs. The term "AI" carries a ton of baggage, both good and bad.

To some extent, I think we also keep pushing back the boundary of what we consider "intelligence" as we learn more and better understand what we're creating. I wonder if every future tech generation will continue this cycle until/unless humanity actually does create a general artificial intelligence--every iteration getting slightly closer but still falling short of "true" AI, then being looked at as a disappointment and not worthy of the term anymore. Rinse and repeat.

[–] Zeth0s@lemmy.world 13 points 2 years ago (1 children)

That's absolutely not correct. AI is a field of computer science/scientific computing built on the idea that some capabilities of biological intelligences could be simulated or even reproduced "in silicon", i.e. by using computers.

Nowadays it is an extremely broad term that covers a lot of computational methodologies. LLMs in particular are an evolution of methods born to simulate, and act like, human neural networks. They work very differently now, but they still provide great insights into how an "artificial" intelligence can be built. They are only one small corner of what will be a real general artificial intelligence, and a small step in that direction.

AI as a name is absolutely unrelated to how programs based on these methodologies are built.

Human intelligences are in charge of all the copyright questions. AI and copyright are orthogonal; the problem is people who cannot tell the two apart and keep talking about "AI".

There is AI, and there is copyright; it is time for all of us to properly frame the discussion as "copyright discussion related to 's product"

[–] assassin_aragorn@lemmy.world 3 points 2 years ago (1 children)

What I'm getting at is more that comparisons to humans for purposes of copyright law (e.g. likening it to students learning in school or reading library books) don't hold water just because it's called an AI. I don't see that as an actual defence for these companies, and it seems to be somewhat prevalent.

[–] Zeth0s@lemmy.world 2 points 2 years ago* (last edited 2 years ago) (2 children)

You can absolutely compare AI with students. The problem is that, legally, in many western countries students still have to pay copyright holders of the books they use to learn.

It is purely a copyright discussion. How far does copyright apply? Shall the law distinguish between human learning and machine learning? Can we retroactively change the copyright of material available online?

For instance, Copilot is more at risk than an LLM that learned from 4chan, because the licenses are clearer there. The problem is that we have no idea which data the big LLMs were trained on, so we can't know whether some copyright law already applies.

In the end it is just a legal dispute over companies making money from AI trained on data that is publicly available (but not necessarily copyright-free).

[–] bioemerl@kbin.social 11 points 2 years ago

Both of those statements are reasonable. You shouldn't have to pay to utilize anything you scrape from the internet, so long as you don't violate copyright by redistributing it

[–] BURN@lemmy.world 11 points 2 years ago (1 children)

AI has been a blanket term for Machine Learning, LLMs, Decision Trees and every other form of “intelligence”.

Unfortunately I think that genie is out of the bottle and it’s never going back in.

[–] stewsters@lemmy.world 1 points 2 years ago

And it has been the technical term used in academia since the 1950s. If anyone is surprised by this usage, then they have not studied it, only watched movies.

[–] baduhai@sopuli.xyz 9 points 2 years ago (2 children)

I use to tune my LLM model

Large Language Model model

[–] some_guy@lemmy.sdf.org 7 points 2 years ago (2 children)

Automated Teller Machine machine.

[–] dragontamer@lemmy.world 6 points 2 years ago

Chai Tea? Chai means tea bro. Do you want coffee coffee with your cream cream?

[–] Dran_Arcana@lemmy.world 3 points 2 years ago (1 children)

You want to come with me to punch your pin number into the atm machine? I'm totally dtf to fuck if you are.

[–] Geek_King@lemmy.world 3 points 2 years ago* (last edited 2 years ago) (1 children)

Automated Teller Machine Machine, Personal Identification Number Number, Network Interface Card Card

This has been a problem for as long as acronyms have existed (and yes it bothers me too).

[–] Lmaydev@programming.dev 6 points 2 years ago* (last edited 2 years ago)

They are 100% AI. It's an umbrella term. Simple pathing algorithms in games are also AI.
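For example, the "AI" opponent in a classic game is often just a search algorithm. Here's a sketch of a breadth-first pathfinder on a grid, the kind of thing that counts as game AI with no learning involved:

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest path on a grid of 0 (open) / 1 (wall), 4-connected cells."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}            # visited set + backpointers in one dict
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:          # walk the backpointers to rebuild the path
            path, node = [], goal
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None                     # goal unreachable
```

A game would call something like this every time a guard needs to chase the player; swap BFS for A* with a distance heuristic and you have the textbook "game AI" pathfinder.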

[–] MxM111@kbin.social 3 points 2 years ago

Honestly, I see zero difference. I think you are suggesting that giving information for free to an "AI" somehow sounds more reasonable than giving it to an LLM (which is absolutely AI). I see no reason at all to believe so. Maybe you can elaborate?

[–] thepineapplejumped@lemm.ee 3 points 2 years ago

I think this was always the case, even before these models took off. LLM is more correct, but it was always just called AI.

[–] Aatube@lemmy.dbzer0.com 2 points 2 years ago

Aren’t these sentences exactly the same?

[–] lemmyvore@feddit.nl 2 points 2 years ago

We shouldn’t have to pay for the content we use to train and teach an AI.

If you replace "AI" with "person" it's not true so why would it be for AI?

[–] DrQuint@lemmy.world 1 points 2 years ago

If we're unmasking tech: LLMs right now are also just computer vision models with a lot more abstraction layers thrown at them. Nothing but a fit-assessment machine with a ludicrous number of extra steps.

I am convinced this is all pedantry, and these models are going to become the de facto basis for true AI at some point. It was already weird enough that this type of tech got discovered from the goal of checking if an image has a cat or not.

[–] gaun@kbin.social 1 points 2 years ago (1 children)

What is meant by the term "AI" has definitely shifted over time. What I would have considered to be AI back then is nowadays referred to as "AGI". So they simply changed the language. LLMs are not really capable of "intelligence"; they are just automated statistics. On the other hand, what really is intelligence? The output does appear intelligent. Maybe in the end it does not matter how it is generated.
