this post was submitted on 05 Apr 2025
241 points (95.5% liked)

Technology

[–] _cryptagion@lemmy.dbzer0.com 5 points 9 hours ago (1 children)

Using AI to be your voice when you have trouble articulating something you want to say has to be one of the best uses of the technology I have seen to date. It makes me wonder what other uses this tech could have, especially for people who are neurodivergent or disabled.

[–] Pyr_Pressure@lemmy.ca 1 points 1 hour ago

Yeah, honestly I don't see a problem with this. If it's his own words, why does it matter if it's AI speaking or himself? Even if it's not his own words, he could just as easily say the same shit on camera, so why does the person in the video need to be him?

[–] Tungsten5@lemm.ee 13 points 19 hours ago (1 children)

This shit, and later on in the article when it talks about an Arizona court using AI, makes me want to hate AI forever. Fuck this, man

[–] _cryptagion@lemmy.dbzer0.com -3 points 11 hours ago (1 children)

A person has social anxiety, and that makes you hate AI? That sounds pretty ableist.

[–] Tungsten5@lemm.ee 5 points 11 hours ago (1 children)

No. Try to think critically next time. AI has been mostly garbage which is why I dislike it. It should not be used in court. Did you even read the article? If so, re-read it

[–] _cryptagion@lemmy.dbzer0.com 1 points 9 hours ago

Yes, I did read it. I'm guessing you did not. If you had, you would know why he used the AI video. This was a very constructive use of AI that not only didn't hurt anyone, but helped someone with a problem.

[–] TheFogan@programming.dev 111 points 1 day ago (7 children)

I mean honestly, without the theoretical misdirection, I'd find this one of the better examples of a reasonable use of AI within a courtroom. I.e., it sounds like he asked to represent himself. He presented a video in which, to my knowledge, all the arguments were written by the person himself. Second, when the judge asked who it was, he said the avatar is AI, presenting his arguments.

So in short, the only things being bypassed are biases related to his appearance and speech.

IMO this concept could be the real future of trials if done right. Imagine if we used extreme facial-tracking AI that hid the defendant's actual appearance but allowed defendants to use avatars that still map out any facial expressions and body language they make during the trial... while concealing the defendant's actual race and appearance. We could literally be looking at the one solution to racial bias... the reality that, with the same evidence, race plays a huge part in conviction rate and harshness of sentences.
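The facial-tracking idea above can be sketched in miniature: transmit only *relative* landmark motion as expression weights for a standardized avatar, so expressions survive while absolute face geometry (the identity cues) never leaves the capture step. Everything here is a hypothetical illustration, not a real courtroom system: the `Landmark` type, the blendshape names, and the scaling factor are all made-up assumptions.

```python
from dataclasses import dataclass

@dataclass
class Landmark:
    """A single tracked face point in normalized image coordinates."""
    x: float
    y: float

def expression_weights(neutral: dict, current: dict) -> dict:
    """Turn landmark displacements into avatar blendshape weights (0..1).

    Only relative motion is transmitted; the absolute geometry that
    encodes identity (face shape, proportions) stays on this side.
    """
    weights = {}
    for name, base in neutral.items():
        cur = current[name]
        # Vertical displacement as a crude proxy for expression intensity;
        # the 10x gain is an arbitrary illustrative scale.
        delta = abs(cur.y - base.y)
        weights[name] = min(1.0, delta * 10.0)  # clamp to the valid range
    return weights

# A neutral calibration frame vs. a frame where one mouth corner lifted.
neutral = {"mouth_corner_l": Landmark(0.40, 0.70),
           "brow_inner_r": Landmark(0.55, 0.35)}
smiling = {"mouth_corner_l": Landmark(0.40, 0.66),
           "brow_inner_r": Landmark(0.55, 0.35)}

print(expression_weights(neutral, smiling))
# The moved mouth corner gets a nonzero weight; the unmoved brow stays at 0.
```

Real trackers (e.g. off-the-shelf face-landmark libraries) emit dozens of blendshape coefficients per frame, but the privacy property is the same: the avatar renderer only ever sees the weight dictionary, never the camera frame.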

[–] TeamAssimilation@infosec.pub 10 points 23 hours ago

Guess they will start teaching vtubing in high school then.

[–] madame_gaymes@programming.dev 21 points 1 day ago* (last edited 1 day ago)

It's a really interesting thought, and under ideal circumstances would work IMO. Obviously things are never ideal and there would be all sorts of roadblocks and gotchas as something like this was developed. Things we could think of now, and other things we probably couldn't. Not to mention the whole problem of, "who develops it and how much trust can you give them?"

As I was reading the idea, it made me think of the suits from A Scanner Darkly that the undercover narcs wore. Basically heavily obfuscated the voice and displayed always-changing patchwork human features to anyone observing from the outside, including trying to hide body shape. Something like that could get similar results. Obviously a video filter would be much easier to develop than a sci-fi suit, but still.

A Scanner Darkly movie representation of the suit

[–] DragonTypeWyvern@midwest.social 10 points 1 day ago (2 children)

I think the major problem here would be that all the avatars would be pretty white women if you wanted to really game the system.

Or black if they're accused of a hate crime, or whatever.

That just seems... Weird.

[–] Pyr_Pressure@lemmy.ca 1 points 1 hour ago

Why is that a problem? It gets rid of bias and would actually help minorities defeat bias in the court and get a more fair judgement.

[–] reksas@sopuli.xyz 6 points 18 hours ago* (last edited 18 hours ago)

just have a couple of standardised avatars. It would be madness if everyone could choose whatever.

Not that AI is the most effective representation or that it should replace public defenders, but this doesn't seem far off from scolding a defendant for using Google to research his arguments.

[–] Atherel@lemmy.dbzer0.com 15 points 1 day ago (1 children)

Why even keep facial expressions? People who are good at acting can abuse it by mimicking what's expected of them, and for people with e.g. autism who have problems with body language it can backfire badly. Let facts and evidence be the basis for a sentence.

[–] TheFogan@programming.dev 5 points 1 day ago

True, though at that point an avatar itself is unnecessary. Maybe that should be the standard: just change procedure to never bring the defendant into the courtroom.

Admittedly, I suppose the biggest problem with the hypothetical goal of hiding the defendant in the courtroom is that some of the evidence is obviously going to require knowing what the defendant looks like (eyewitness testimony, video surveillance clips, etc.).

I do agree with the general gist though: if we could run courts without ever showing the appearance or even the names of the people involved, it would be the ideal system to eliminate biases.

[–] wildncrazyguy138@fedia.io 4 points 1 day ago (2 children)

Agreed. If AI can pass the bar AND the defendant's right to a public attorney is unavailable due to resource and time constraints, then this is a whole lot better than the plea deals some defendants are being coerced to sign without a public defender.

And let’s not kid ourselves. Most of the existing public defenders are probably using AI to support their case nowadays anyway.

[–] TheFogan@programming.dev 11 points 1 day ago

Again, though, that's missing the point: to my knowledge, at least from the article, there's nothing to imply the arguments were AI. It sounds like the person is claiming the AI was only used for the face and voice.

So on the whole, it just sounds like he wrote the script himself. The AI doesn't need to pass the bar in this example, because the AI is just a glorified costume. You don't have to pass the bar to represent yourself, and at least with the information presented in this argument, the AI did not create any of the arguments; it only read a script written by the person.

[–] bugg@lemm.ee 5 points 1 day ago

Or they could pay public defenders a fair wage and hire more. The reason they don’t is because they don’t want people to have a fair trial. You’re constitutionally ineffective the second you get hired as a PD. We have the resources but many on the far right want to dismantle the requirement for representation and overturn Gideon.

AI isn’t the solution to this—proper governance is.

[–] Zwuzelmaus@feddit.org -1 points 1 day ago (1 children)

the only thing that's attempted to be bypassed, are biases related to his appearance and speech. IMO this concept could be the real future of trials if done right.

How do you know if it is done right or wrong?

It is fake, and it is a manipulative kind of fake.

You assume some honorable purpose, but that isn't the only possible purpose.

Even "bypassing biases" would be a kind of manipulation, and you can never know what other manipulation is going on at the same time. It could exploit other biases. It could try other tricks that we are not evil enough to imagine, and it would be "better" at it than any real human.

[–] TheFogan@programming.dev 4 points 23 hours ago (1 children)

The point is the idea that, in general, a system could be applied where, say, universally the same avatar is applied to everyone while on trial. The fact is that "looking trustworthy" is inherently an unfair advantage that has no real bearing on actual innocence or guilt, and we know these biases have resulted in innocent people getting convicted and guilty people walking, even when better evidence pointed the other way.

Theoretically, a future system in which everyone must use an avatar to prevent these biases would almost certainly lead to more accurate trials. Of course, the one hurdle in my mind that would make it difficult is how to deal with evidence that requires appearance to assess (i.e., most importantly, eyewitness descriptions and video footage). When it comes to DNA, fingerprints, forensics, and hell, the lawyers' arguments themselves, there's no question in my mind that perception with no factual use has serious consequences that harm any attempt to make an appropriately fair system.

[–] Zwuzelmaus@feddit.org 0 points 22 hours ago (1 children)

say universally the same avatar is applied to everyone while on trial.

The one and only "good" AI. Trustworthy for everybody?

I do not believe in that.

First you would need to decide on the one and only company to provide that AI. Then someone must prove that it is good and only good. Then it must be unhackable (and remain so while technology evolves).

All of this is hardly feasible.

[–] TheFogan@programming.dev 1 points 12 hours ago

Again, I think our problem is the concept of what we are calling "AI". I.e., I'm only talking about AI-generated art/avatars. If done in a consistent way, I don't think it even quite qualifies as AI; really it's just glorified puppetry. There's no "trustworthiness" at stake, because it doesn't deal in facts. Its job is literally just to take a consistent 3D model and make it move like the defendant moves. It's old tech that's been used in movies etc. for years, and since it deals only in appearance, any "hacks" etc. would be plainly visible to any observers.

[–] besselj@lemmy.ca 36 points 1 day ago (1 children)

Can't wait to see what silliness ensues when "sovereign citizens" start using AI avatars to represent themselves too

[–] Infinite@lemmy.zip 6 points 22 hours ago

The video will need to be at a 45° angle and misuse a lot of Latin.