this post was submitted on 05 Apr 2025
241 points (95.5% liked)

Technology

[–] TheFogan@programming.dev 111 points 1 day ago (15 children)

I mean, honestly, without the theatrical misdirection, I'd find this one of the better examples of a reasonable use of AI within a courtroom. That is: it sounds like he asked to represent himself, and he presented a video in which, to my knowledge, all the arguments were written by the person himself. Second, when the judge asked who it was, he said the avatar was AI, presenting his arguments.

So in short, the only thing he attempted to bypass was bias related to his appearance and speech.

IMO this concept could be the real future of trials if done right. Imagine if we used, say, extreme facial-tracking AI that hid the defendant's actual appearance but allowed defendants to use avatars that still map every facial expression and bit of body language they make during the trial, while concealing the defendant's actual race and appearance. We could literally be looking at the one solution to racial bias: the reality is that, with the same evidence, race plays a huge part in conviction rates and the harshness of sentences.

[–] Zwuzelmaus@feddit.org -1 points 1 day ago (1 children)

the only thing he attempted to bypass was bias related to his appearance and speech.

IMO this concept could be the real future of trials if done right.

How do you know if it is done right or wrong?

It is fake, and it is a manipulative kind of fake.

You assume some honorable purpose, but that isn't the only possible purpose.

Even "bypassing biases" would be a kind of manipulation, and you can never know what other manipulation is going on at the same time. It could exploit other biases. It could try other tricks that we are not evil enough to imagine, and it would be "better" at it than any real human.

[–] TheFogan@programming.dev 4 points 23 hours ago (1 children)

The point is the idea that, in general, a system could be applied where, say, the same avatar is applied universally to everyone on trial. The fact is that "looking trustworthy" is an inherently unfair advantage that has no real bearing on actual innocence or guilt, and we know these biases have meant that, despite better evidence, innocent people have been convicted and guilty people have walked.

Theoretically, a future system in which everyone must use an avatar to prevent these biases would almost certainly lead to more accurate court trials. Of course, the one hurdle in my mind that would make it difficult is how to handle evidence that requires appearance to assess (most importantly, eyewitness descriptions and video footage). When it comes to DNA, fingerprints, forensics, and hell, the lawyers' arguments themselves, there's no question in my mind that perception serves no factual purpose and has serious consequences that harm any attempt to build an appropriately fair system.

[–] Zwuzelmaus@feddit.org 0 points 21 hours ago (1 children)

say, the same avatar is applied universally to everyone on trial.

The one and only "good" AI. Trustworthy for everybody?

I do not believe in that.

First you would need to decide on the one and only company to provide that AI. Then someone must prove that it is good and only good. Then it must be unhackable (and remain so while technology evolves).

All of this is hardly feasible.

[–] TheFogan@programming.dev 1 points 12 hours ago

Again, I think our problem is the concept of what we're calling "AI". I'm only talking about AI-generated art/avatars. If done in a consistent way, I don't think it even qualifies as AI. Really, it's just glorified puppetry. There's no "trustworthiness" question, because it doesn't deal in facts. Its job is literally just to take a consistent 3D model and make it move the way the defendant moves. It's old tech that's been used in movies etc. for years, and since it deals only in appearance, any "hacks" would be plainly visible to any observer.
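The "glorified puppetry" described above can be sketched in a few lines: track the subject's facial landmarks, measure their offset from the subject's own neutral face, and apply that same offset to a fixed avatar's neutral landmarks. This is an illustrative toy only: the function and variable names are invented for this sketch, and real systems use dedicated landmark trackers and blendshape rigs rather than raw 2D points.

```python
# Toy sketch of landmark-based avatar "puppetry" (all names hypothetical).
# The avatar mirrors the subject's movement while showing none of the
# subject's actual appearance: only expression *offsets* are transferred.

def retarget(tracked, tracked_neutral, avatar_neutral, scale=1.0):
    """Apply the subject's expression offsets to the avatar's neutral landmarks.

    Each argument is a list of (x, y) landmark positions in the same order.
    The avatar landmark moves by (current - neutral) for the subject.
    """
    return [
        (ax + scale * (tx - nx), ay + scale * (ty - ny))
        for (tx, ty), (nx, ny), (ax, ay) in zip(tracked, tracked_neutral, avatar_neutral)
    ]

# One mouth-corner landmark: the subject raises it 0.02 (a slight smile),
# so the avatar's corresponding landmark shifts by the same amount.
subject_neutral = [(0.40, 0.70)]
subject_now = [(0.40, 0.68)]
avatar_neutral = [(0.45, 0.72)]

print(retarget(subject_now, subject_neutral, avatar_neutral))
```

Since only per-frame offsets cross from the camera to the renderer, nothing about the subject's face geometry ever reaches the displayed avatar, which is the property the comment is pointing at.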
