this post was submitted on 06 Jul 2024

TechTakes

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

top 7 comments
[–] antifuchs@awful.systems 1 point 1 year ago

Ah yes, the CIA is no stranger to the artifice of intelligence.

[–] skillissuer@discuss.tchncs.de 1 points 1 year ago

damn they got their supply of good idea powder back

hide your defense budget before they start staring at goats again

[–] dgerard@awful.systems 0 points 1 year ago (1 children)

This article is heavy on the hype, and my eyes are bleeding trying to extract what's actually happening here in reality.

[–] conciselyverbose@sh.itjust.works 0 points 1 year ago (1 children)

They use LLMs for what they can actually do: bullet-point the core concepts of a huge volume of information, and parse a large corpus for specific queries that previously would have needed a tech running a bunch of variations on a bunch of keywords. Provided you have humans overseeing the summaries, have the queries surface the actual full relevant documents, and fall back to a human for failed searches, it can potentially add a useful layer of value.
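
Roughly the shape of pipeline I mean, as a minimal sketch: the function names, the escalation flag, and the review flow are all hypothetical stand-ins, not anything the article actually describes.

```python
# Minimal sketch of the human-in-the-loop flow described above. Everything here
# is a hypothetical stand-in: search_index(), summarize(), and the escalation
# logic aren't from the article, just an illustration of the pipeline's shape.

from dataclasses import dataclass


@dataclass
class QueryResult:
    query: str
    documents: list[str]        # full source documents, not just summaries
    summary: str | None = None  # machine-generated bullet points
    needs_human: bool = False   # escalate when retrieval or summarization fails


def search_index(query: str) -> list[str]:
    """Stand-in for whatever keyword/semantic search sits over the corpus."""
    raise NotImplementedError


def summarize(documents: list[str]) -> str:
    """Stand-in for an LLM call that bullet-points the core concepts."""
    raise NotImplementedError


def run_query(query: str) -> QueryResult:
    docs = search_index(query)
    if not docs:
        # Failed search: fall back to a human analyst instead of guessing.
        return QueryResult(query=query, documents=[], needs_human=True)

    summary = summarize(docs)
    # Surface the full documents alongside the summary so a human reviewer
    # can check the bullet points against the actual sources.
    return QueryResult(query=query, documents=docs, summary=summary)
```

The point of the sketch is only that the LLM never gets the last word: every summary arrives with its sources attached, and anything the search can't handle gets routed to a person.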

They're probably also using it for propaganda shit, because that's a lot of what intelligence work is. And various fake documents and web presences as part of cover identities could (again, with human oversight) probably allow you to produce a lot more volume to build them out.

[–] skillissuer@discuss.tchncs.de 0 points 1 year ago (1 children)

"Provided you have humans overseeing the summaries"

right, at which point you're better off just doing it the right way from the beginning, not to mention such a tiny detail as not shoving classified information into Sam Altman's black box

[–] conciselyverbose@sh.itjust.works 0 points 1 year ago* (last edited 1 year ago) (1 children)

I'm not really arguing the merits, just explaining how I'm reading the article.

The systems are air-gapped and never exfiltrate information, so that shouldn't really be a concern.

Humans are also a potential liability to a classified operation. If you can get the same results with two human analysts overseeing/supplementing the work of AI as you would with two human analysts overseeing/supplementing five junior people, it's worth evaluating. You absolutely should never blindly trust an LLM for anything; they're not intelligent. But they can be used as a tool by capable people to increase their effectiveness.

[–] dgerard@awful.systems 1 point 1 year ago

The other thing about text like this is that many of the claims about what they're doing will be completely false, because someone will have misunderstood and then tried to reconstruct a sensible version.