this post was submitted on 22 May 2025
297 points (94.6% liked)

Programming

[–] atzanteol@sh.itjust.works 44 points 1 week ago (5 children)

Have you used AI to code? You don't say "hey, write this file" and then commit it as "AI Bot 123 aibot@company.com".

You start writing a method and get auto-completes that are sometimes helpful. Or you ask the bot to write out an algorithm. Or to copy something and modify it 30 times.

You're not exactly keeping track of everything the bots did.

[–] eager_eagle@lemmy.world 54 points 1 week ago (14 children)

yeah, that's... one of the points in the article

[–] zqwzzle@lemmy.ca 8 points 1 week ago

We could see how the copilot PRs went:

[–] Corngood@lemmy.ml 7 points 1 week ago (5 children)

Or to copy something and modify it 30 times.

This seems like a very bad idea. I think we just need more lisp and less AI.
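For what it's worth, the "copy it and modify it 30 times" job usually means the variation belongs in data, not in 30 pasted blocks — which is the lisp-ish point. A rough Python sketch of the idea (names and values made up for illustration):

```python
# The "copy it and modify it 30 times" job, done once:
# put the parts that vary in a table and drive one function from it.

UNIT_SCALES = {
    "seconds": 1,
    "minutes": 60,
    "hours": 3600,
    # ...27 more entries instead of 27 more pasted functions
}

def to_seconds(value: float, unit: str) -> float:
    """One parameterized function replaces N near-identical copies."""
    if unit not in UNIT_SCALES:
        raise ValueError(f"unknown unit: {unit!r}")
    return value * UNIT_SCALES[unit]

print(to_seconds(2, "minutes"))  # 120
```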

[–] oakey66@lemmy.world 39 points 1 week ago (2 children)

It’s not good because it has no sense of what is correct and what isn’t. It constantly makes up functions that don’t exist, or attributes functions to packages that don’t exist. It’s often sloppy in its responses because the source code it parrots is an amalgamation of good coding and terrible coding. If you use it for your production projects, you likely won’t understand the code well enough to fix it when it breaks, it’ll likely have security flaws, and it will likely have errors in it.
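To make the made-up-functions point concrete, here's a minimal sketch; `json.load_file` is exactly the kind of plausible-sounding call a model tends to invent (it does not exist in Python's standard library):

```python
import io
import json

# The hallucination pattern: the model emits a call that *looks* like
# it should exist. json.load_file() is invented; it isn't in the stdlib.
assert not hasattr(json, "load_file")

# The real API: json.load() takes an already-open file-like object.
fake_file = io.StringIO('{"debug": true, "retries": 3}')
config = json.load(fake_file)

print(config["retries"])  # 3
```

The invented call would only blow up at runtime, which is why it slips past a casual review.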

[–] tisktisk@piefed.social 6 points 1 week ago

So you're saying I've got a shot?

[–] Blue_Morpho@lemmy.world 23 points 1 week ago (3 children)

If humans are so good at coding, how come there are 8,100,000,000 people and only 1,500 are able to contribute to the Linux kernel?

I hypothesize that AI has average human coding skills.

[–] GiorgioPerlasca@lemmy.ml 20 points 1 week ago (3 children)

Average drunk human coding skills

[–] LeFantome@programming.dev 20 points 1 week ago (3 children)

Can Open Source defend against copyright claims for AI contributions?

If I submit code to ReactOS that was trained on leaked Microsoft Windows code, what are the legal implications?

[–] proton_lynx@lemmy.world 18 points 1 week ago* (last edited 1 week ago) (1 children)

what are the legal implications?

It would be so fucking nice if we could use AI to bypass copyright claims.

[–] piccolo@sh.itjust.works 8 points 1 week ago

"No officer, I did not write this code. I trained an AI on copyrighted material and it wrote the code. So I'm innocent."

[–] notannpc@lemmy.world 17 points 1 week ago (1 children)

AI is at its most useful in the early stages of a project. Imagine coming to the fucking ssh project with AI slop thinking it has anything of value to add 😂

[–] HaraldvonBlauzahn@feddit.org 30 points 1 week ago* (last edited 1 week ago) (9 children)

The early stages of a project are exactly where you should think long and hard about what exactly you want to achieve, what qualities you want the software to have, what the detailed requirements are, how you will test them, and what the UI should look like. And from that, you derive the architecture.

AI is fucking useless at all of that.

In all complex planned activities, laying the right groundwork and foundations is essential for success. Software engineering is no different. You wouldn't ask a bricklayer's apprentice to draw up the plans for a new house.

And if your difficulty is a lack of detailed knowledge of a programming language, the best approach might be (depending on the case!) to write a first prototype in a language you know well, so that your head is free to think about the concerns listed in the first paragraph.

[–] TempermentalAnomaly@lemmy.world 17 points 1 week ago (3 children)

I am not a programmer and I think it's silly to think that AI will replace developers.

But I was working through a math problem in Moscow Puzzles with my kiddo.

We had solved it, but I wasn't sure he got it at a deep level. So I figured I'd do something in Excel or maybe just make cut-outs. But I figured I'd try to find a web app that would do it better. Nothing that came up was a good match. But then I thought, let's see how bad AI programming can be. I'd fought with it over some Excel functions before; it's been mainly useful in pointing me in the right direction, but only occasionally gets me over the finish line.

After about 6 to 8 hours of work, a little debugging, having it teach and quiz me occasionally, and some real frustration pointing out that a feature it had previously changed had re-emerged, I eventually had something that worked.

The Shooting Range Simulator is a web-based application designed to help users solve a logic puzzle involving scoring points by placing blocks on vertical number lines.

A buddy developer friend of mine said: "I took a quick scroll through the code. Looks pretty clean, but I didn't dive in enough to really understand it. Definitely all that css BS would take me ages to do without AI."

I don't take credit for this and don't pretend that this was my work, but I know my kiddo is excited to try the tool. I hope he learns from it and we bond over a math problem.

I know that everyone is worried about this tool, but moments like those are not nothing. Personally, I'm a Luddite and think the new tools should be deployed by the people whose livelihood they will affect, not by the business owners.

[–] bignose@programming.dev 9 points 1 week ago

Personally, I’m a Luddite and think the new tools should be deployed by the people whose livelihood they will affect, not by the business owners.

Thank you for correctly describing what a Luddite wants and does not want.

[–] glitchdx@lemmy.world 15 points 1 week ago

If AI was good at coding, my game would be done by now.

[–] conditional_soup@lemm.ee 14 points 1 week ago

FTA: The user considered it was the unpaid volunteer coders’ “job” to take his AI submissions seriously. He even filed a code of conduct complaint with the project against the developers. This was not upheld. So he proclaimed the project corrupt. [GitHub; Seylaw, archive]

This is an actual comment that this user left on another project: [GitLab]

As a non-programmer, I have zero understanding of the code and the analysis and fully rely on AI and even reviewed that AI analysis with a different AI to get the best possible solution (which was not good enough in this case).

[–] francois@sh.itjust.works 13 points 1 week ago

Microsoft has set up Copilot to make contributions to the dotnet runtime: https://github.com/dotnet/runtime/pull/115762. I'm sure maintainers spend more time reviewing and interacting with Copilot than it would have taken to write the changes themselves.

[–] andybytes@programming.dev 12 points 1 week ago

My theory is that not a lot of people actually like this AI crap. They just lean into it for fear of being left behind. Now you all think it's just gonna fail and go bankrupt. But a lot of ideas in America are subsidized. They don't work well, but they still go forward. It'll be you, the taxpayer, funding these stupid ideas that don't work and that are hostile to our very well-being.

[–] teije9@lemmy.blahaj.zone 11 points 1 week ago (1 children)

Who makes a contribution under the name aibot514? No one. People use AI for open source contributions, but more in a 'fix this bug' way, not as fully automated contributions under the name ai123.

[–] lemmyng@lemmy.ca 41 points 1 week ago (2 children)

Counter-argument: If AI code was good, the owners would create official accounts to create contributions to open source, because they would be openly demonstrating how well it does. Instead all we have is Microsoft employees being forced to use and fight with Copilot on GitHub, publicly demonstrating how terrible AI is at writing code unsupervised.

[–] Prime@lemmy.sdf.org 10 points 1 week ago (3 children)

Microsoft is doing this today. I can't link it because I'm on mobile. It is in dotnet. It is not going well :)

[–] thingsiplay@beehaw.org 9 points 1 week ago (3 children)

Mostly closed source, because open source projects rarely accept them, as they're often just slop. Just assuming here; I have no data.

[–] hemko@lemmy.dbzer0.com 11 points 1 week ago (2 children)

To be fair, if a competent dev used an AI "autocomplete" tool to write their code, I'm not sure it'd be possible to detect those parts as AI code.

I generally dislike those corporate AI tools, but I gave Copilot a try when writing some Terraform scripts, and it had about as many good suggestions as bad ones. However, if I hadn't known the language and the resources I was deploying that well, it probably would have led me into a deep hole, trying to fix the mess after blindly accepting every suggestion.

[–] HaraldvonBlauzahn@feddit.org 7 points 1 week ago* (last edited 1 week ago) (1 children)

People seem to think that the development speed of any larger, more complex piece of software depends on the speed at which the wizards can type in code.

Spoiler: this is not the case. Even if a project is a mere 50,000 lines long, one is the solo developer, and one has pretty good or even expert domain knowledge, one spends the major part of the time thinking, perhaps looking up documentation, or talking with people, and the most-used key on the keyboard doesn't need a Dvorak layout, because it is the delete key. In fact, you don't need to know touch-typing to be a good programmer; what you need is to think clearly and logically, and to be able to weigh many different options against a variety of complex goals.

Which LLMs can't.

[–] thingsiplay@beehaw.org 7 points 1 week ago (1 children)

They do more than just autocomplete, even in autocomplete mode. These AI tools suggest entire code blocks with logic and fill in multiple lines, compared to a standard autocomplete. And no AI is needed to use one as a standard autocomplete tool; using it like that wouldn't be bad anyway, so I have nothing against it.

The problems arise when the AI takes over the thinking and brainwork of the actual programmer. Plus, as a user you get used to it and become basically "addicted". Independent thinking and programming without AI will become harder and harder if you use it for everything.

[–] magic_lobster_party@fedia.io 10 points 1 week ago

The creator of curl just went on a rant about users submitting AI-slop vulnerability reports. It has gotten so bad they will reject any report they deem AI slop.

So there’s some data.

[–] joyjoy@lemm.ee 10 points 1 week ago

And when they contribute to existing projects, their code quality is so bad, they get banned from creating more PRs.

[–] 30p87@feddit.org 8 points 1 week ago

Ask Daniel Stenberg.

[–] andybytes@programming.dev 7 points 1 week ago (1 children)

AI is just lack of privacy, an authoritarian dragnet, remote control over other people's computers, web scraping, the complete destruction of America's art scene, the stupidification of America, and copyright infringement, with a sprinkling of baby death.

[–] HobbitFoot@thelemmy.club 5 points 1 week ago (11 children)

As a dumb question from someone who doesn't code, what if closed source organizations have different needs than open source projects?

Open source projects seem to hinge a lot more on incremental improvements and changes made solely for the benefit of users. In contrast, closed source organizations seem to use code more to quickly develop a new product or change that justifies money. Maybe closed source organizations are more willing to accept slop code that is bad but can barely work versus open source which won't?

[–] dgerard@awful.systems 14 points 1 week ago (1 children)

Baldur Bjarnason (who hates AI slop) has posited precisely this:

My current theory is that the main difference between open source and closed source when it comes to the adoption of “AI” tools is that open source projects generally have to ship working code, whereas closed source only needs to ship code that runs.

[–] bignose@programming.dev 7 points 1 week ago* (last edited 1 week ago)

Maybe closed source organizations are more willing to accept slop code that is bad but can barely work versus open source which won’t?

Because most software is internal to the organisation (therefore closed by definition) and never gets compared or used outside that organisation: Yes, I think that when that software barely works, it is taken as good enough and there's no incentive to put more effort to improve it.

My past year (and more) of programming business-internal applications have been characterised by upper management imperatives to “use Generative AI, and we expect that to make you nerd faster” without any effort spent to figure out whether there is any net improvement in the result.

Certainly there's no effort spent to determine whether it's a net drain on our time and on the quality of the result. Which everyone on our teams can see is the case. But we are pressured to continue using it anyway.

[–] MajorasMaskForever@lemmy.world 6 points 1 week ago* (last edited 1 week ago) (1 children)

I'd argue the two aren't as different as you make them out to be. Both types of projects want a functional codebase, both have limited developer resources (communities need volunteers, businesses have budget limits), and both can benefit greatly from the development process being sped up. Many development practices that are industry standard today started in the open source world (style guides and version-control strategy, to name two heavy hitters), and there's been some bleed-through from the other direction as well (tool juggernauts like Atlassian now have open source alternatives made directly in response).

No project is immune to bad code. There's even a lot of bad code out there that was believed to be good at the time; it mostly worked, in retrospect we learned how bad it was, but no one wanted to fix it.

The end goals and purposes are for sure different between community passion projects and corporate, financially driven projects. But the way you get there is more or less the same, and that's the crux of the article's argument: historically, open source and closed source have done the same things, so why is usage of this one tool so wildly different?

[–] HaraldvonBlauzahn@feddit.org 5 points 1 week ago (1 children)

When did you last decide to buy a car that barely drives?

And another thing: there are some tech companies that operate very short-term, like typical social media startups, of which about 95% go bust within two years. But a lot of computing is very long-term, with code bases that are developed over many years.

The world only needs so many shopping list apps - and there exist enough of them that writing one is not profitable.
