this post was submitted on 08 Dec 2024
459 points (94.4% liked)

The GPT Era Is Already Ending (www.theatlantic.com)
submitted 2 months ago* (last edited 2 months ago) by cyrano@lemmy.dbzer0.com to c/technology@lemmy.world
 

If this is the way to superintelligence, it remains a bizarre one. “This is back to a million monkeys typing for a million years generating the works of Shakespeare,” Emily Bender told me. But OpenAI’s technology effectively crunches those years down to seconds. A company blog boasts that an o1 model scored better than most humans on a recent coding test that allowed participants to submit 50 possible solutions to each problem—but only when o1 was allowed 10,000 submissions instead. No human could come up with that many possibilities in a reasonable length of time, which is exactly the point. To OpenAI, unlimited time and resources are an advantage that its hardware-grounded models have over biology. Not even two weeks after the launch of the o1 preview, the start-up presented plans to build data centers that would each require the power generated by approximately five large nuclear reactors, enough for almost 3 million homes.

https://archive.is/xUJMG

[–] NocturnalMorning@lemmy.world 68 points 2 months ago (20 children)

How is it useful to churn out millions of wrong solutions just to arrive at the right one? That only works in a research project where you're searching for patterns. If you're writing code, it needs to be right the first time, every time it runs, especially in a production environment.

[–] Khanzarate@lemmy.world 6 points 2 months ago (11 children)

Well, actually, there are ways to automate quality assurance.

If a programmer reasonably knew that one of these 10,000 files was the "correct" code, they could write quality assurance tests and find that code pretty dang easily, all things considered.

Those tests would eliminate most of the 9,999 wrong ones, and then the QA person could look through the remaining ones by hand. Like a CAPTCHA for programming code.

The power usage still makes this a ridiculous solution.
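
Roughly what that filtering step could look like, as a minimal sketch assuming a Python project. The `candidates/` directory, the `solution_*.py` file names, and the `tests/test_solution.py` suite are all hypothetical here; the point is just that the test suite does the mechanical elimination and a human only reviews whatever survives.

```python
import subprocess
import sys
from pathlib import Path

# Hypothetical layout: 10,000 generated files in candidates/solution_0000.py ...
# and a hand-written pytest suite in tests/test_solution.py that imports solution.py.
CANDIDATE_DIR = Path("candidates")
TEST_FILE = Path("tests/test_solution.py")

def passes_suite(candidate: Path) -> bool:
    """Swap the candidate into place and run the test suite against it."""
    Path("solution.py").write_text(candidate.read_text())
    result = subprocess.run(
        [sys.executable, "-m", "pytest", str(TEST_FILE), "-q"],
        capture_output=True,
    )
    return result.returncode == 0

# Keep only candidates that pass every test; a person reviews the survivors.
survivors = [c for c in sorted(CANDIDATE_DIR.glob("solution_*.py")) if passes_suite(c)]
print(f"{len(survivors)} candidates left for human review")
```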

[–] designatedhacker@lemm.ee 33 points 2 months ago

If you first have to write comprehensive unit/integration tests, then have a model spray code at them until it passes, that isn't useful. If you spend that much time writing airtight tests, you've probably already written twice as much code as just writing the solution plus reasonable tests would take.

Also, you end up with an unmaintainable codebase: a hairball of different code snippets slapped together, with dubious copyright.

Until they hit real AGI, this is just fancy autocomplete. With the hype, they may dissuade a whole generation of people from picking software engineering as a career today. If they don't actually make it to AGI, it will take a long time to recover, and the humans who actually know how to fix AI slop will make bank.
