[–] CeeBee_Eh@lemmy.world 1 points 2 days ago (1 children)

Never mind the why (I'm not entirely convinced it's being done); I want to know what exactly would be seen in network traffic.

Ok, you said "voice collection", which I'll assume means recording audio and then uploading it to some server. That's an astonishingly bonkers and inefficient way of doing it. Instead, you run a very small model (using something like TFLite) that's trained against a few hundred keywords (brand names, products, or product categories) and run it in the background of your service. Phones already do essentially this with assistant activation listening.

Then, after a few hours of listening, compress the plain-text detection data (10 MB of plain text can be compressed to about 1 MB) and just upload the end result. And we wouldn't be talking about megabytes, we'd be talking single-digit kilobytes. That's an amount that wouldn't even be a blip on Wireshark, especially since phones are so exceedingly chatty nowadays. Have you actually tried to Wireshark phone traffic? It's just constant noise.
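To put rough numbers on that, here's a minimal sketch (standard library only; the keyword list, detection volume, and record format are all made up for illustration) of what batching and compressing a few hours of detections could look like:

```python
import json
import random
import time
import zlib

# Hypothetical log of keyword "hits" from an on-device spotter over ~4 hours.
# Each record is just (unix timestamp, keyword index); no audio is stored.
KEYWORDS = ["nike", "mechanic", "car", "vacation", "boat"]  # illustrative only
now = int(time.time())
hits = [(now - random.randint(0, 4 * 3600), random.randrange(len(KEYWORDS)))
        for _ in range(500)]  # ~500 detections is a made-up volume

payload = json.dumps(hits).encode("utf-8")
compressed = zlib.compress(payload, level=9)

print(f"raw: {len(payload) / 1024:.1f} KB, compressed: {len(compressed) / 1024:.1f} KB")
# Even with generous assumptions the upload is a handful of kilobytes,
# far smaller than the ad creatives the same app downloads for display.
```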

It's entirely possible to do. But that doesn't mean that it is being done.

[–] LoveSausage@discuss.tchncs.de 1 points 2 days ago* (last edited 2 days ago) (1 children)

It would cost trillions and halve the battery life. Just because you don't understand something doesn't make you right. Your entire argument is shattered in the link I provided you earlier. It's not a few KB that would be needed, and if done locally it would be a huge battery eater. Not to mention that the cost to get any use out of it would exceed the entire value of the ad market.

There are plenty of people who could find shit in the noise on Wireshark if there were anything like what you are suggesting.

Also, there is a teapot in orbit around Jupiter. Prove me wrong.

[–] CeeBee_Eh@lemmy.world 1 points 2 days ago (1 children)

Just because you don't understand

Lol. My dude, I'm a developer who specializes in AI.

It would cost trillions

I have no clue how you came to that number. I could whip up (and partially have whipped up) a prototype in a few days.

halve the battery life

Hardly. Does Google Assistant halve battery life? No, so why would this? Besides, you would just need to listen to the mic and record audio only when the sound is above a certain volume threshold. Then, once every few hours, batch-process the audio and send the resulting text data (in the KBs) up to a server.
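That gating step is trivial. Here's a rough sketch, assuming 16 kHz float PCM, with the frame size and threshold picked arbitrarily:

```python
import numpy as np

FRAME_SIZE = 1024      # samples per frame at, say, 16 kHz mono
RMS_THRESHOLD = 0.02   # arbitrary; would need tuning per device and mic

def frames_worth_keeping(pcm: np.ndarray) -> list[np.ndarray]:
    """Keep only frames whose RMS energy clears the threshold, so the
    batch job later only touches audio that plausibly contains speech."""
    kept = []
    for start in range(0, len(pcm) - FRAME_SIZE + 1, FRAME_SIZE):
        frame = pcm[start:start + FRAME_SIZE]
        if float(np.sqrt(np.mean(frame ** 2))) > RMS_THRESHOLD:
            kept.append(frame)
    return kept

# Synthetic example: one second of silence followed by one second of noise.
audio = np.concatenate([np.zeros(16000, dtype=np.float32),
                        0.1 * np.random.randn(16000).astype(np.float32)])
print(f"kept {len(frames_worth_keeping(audio))} of {len(audio) // FRAME_SIZE} frames")
```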

The average ad data that's downloaded for in-app display is orders of magnitude larger than what would be uploaded.

there are plenty of people who could find shit in the noise on Wireshark

How are they going to see data that's encrypted and bundled with other innocuous data?

[–] LoveSausage@discuss.tchncs.de 1 points 2 days ago (1 children)

Literally all your questions are answered in the link I pointed out twice now. Try it. "Hey Google" doesn't take much; 1,000 wake words take a lot more. Your math doesn't add up anywhere close to reality.

[–] CeeBee_Eh@lemmy.world 1 points 2 days ago (1 children)

I don't have any questions. This is something I know a lot about at a very technical level.

The difference between one wake word and one thousand is marginal at most. At the hardware level the mic is still listening non-stop, and the audio is still being processed. It *has* to do that, otherwise it wouldn't be able to listen for even one word. And from there it doesn't matter if it's one word or 10k: it's still processing the audio data through a model.

And that's the key part: it doesn't matter whether the model has one output or thousands, the data still passes through every layer of the network. The processing requirements are essentially the same (assuming the same backbone), because only the tiny final layer changes with the number of outputs.
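Back-of-the-envelope arithmetic makes the point; the backbone cost and embedding width below are assumptions picked for illustration, not measurements of any real model:

```python
# The backbone (feature extractor) does the same work no matter how many
# classes sit on top of it; only the final classifier layer scales with the
# class count. These numbers are illustrative assumptions, not measurements
# of any real keyword-spotting model.

BACKBONE_MACS = 5_000_000   # assumed multiply-accumulates for a small conv backbone
EMBEDDING_DIM = 128         # assumed width of the feature vector fed to the classifier

for n_classes in (1, 1000):
    head_macs = EMBEDDING_DIM * n_classes
    total = BACKBONE_MACS + head_macs
    print(f"{n_classes:>4} classes: {total:,} MACs per inference "
          f"(classifier head is {100 * head_macs / total:.2f}% of the work)")
```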

This is the part you simply do not understand.

[–] LoveSausage@discuss.tchncs.de 1 points 2 days ago* (last edited 2 days ago) (1 children)

Seems you don't, and you started your line with a question and continued to do so despite being provided with answers repeatedly. Is there some kink in roleplaying as an AI dev? You don't really seem to have done your homework for that.

Despite what some believe, keyword detection like “Hey Google” is only used to wake up a device from a low power state to perform more powerful listening, it’s not helpful for data tracking. Increasing the number of keywords to thousands or more (which you would need to cover the range of possible ad topics) requires more processing power and therefore defeats the purpose. Your battery would drain very noticeably if your phone was always listening for thousands of possible words.

[–] CeeBee_Eh@lemmy.world 1 points 2 days ago* (last edited 2 days ago) (1 children)

keyword detection like “Hey Google” is only used to wake up a device from a low power state to perform more powerful listening

That's more applicable to something like a Google Mini. A phone is powerful enough, especially with the NPU most phones have now, to perform that detection efficiently without stepping up the CPU state.

Is there some kink of roleplaying AI dev?

Is there some kink on your side in pretending you're smart? You have no idea who I am or what I know.

Increasing the number of keywords to thousands or more (which you would need to cover the range of possible ad topics) requires more processing power

Again, you're showing your lack of knowledge here. A model doesn't use more power whether it's trained on one class or a hundred. The number of cycles is the same in both instances.

It's usually smart speakers that have a low-powered chip that processes the wake word and fires up a more powerful chip. That doesn't exist in phones.

Edit: just to hammer home the point, your example of "Hey Google" simply waking up the device for more complex processing proves my point. The scenario we're talking about is the same as the wake word. We're not looking to do any kind of complex processing; we're just counting the number of times a word is triggered. That's it. No reasoning out the meaning, no performing actions, no understanding a question and then performing a search to provide a response. It's literally a "wake-word" counter.
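A toy sketch of what such a counter amounts to (the keywords and timestamps are hard-coded stand-ins for on-device detections):

```python
from collections import Counter
from datetime import datetime, timedelta

# Hypothetical "wake-word counter": no transcription, no language understanding,
# just tallies of which keywords fired and when. Real detections would come
# from the on-device spotter; these are hard-coded for illustration.
detections = [
    ("nike",  datetime(2025, 4, 27, 9, 15)),
    ("shoes", datetime(2025, 4, 27, 9, 16)),
    ("nike",  datetime(2025, 4, 27, 9, 40)),
    ("car",   datetime(2025, 4, 27, 14, 2)),
]

def tally(detections, window: timedelta) -> Counter:
    """Count keyword hits that fall inside the most recent time window."""
    cutoff = max(ts for _, ts in detections) - window
    return Counter(word for word, ts in detections if ts >= cutoff)

print(tally(detections, timedelta(hours=6)))
# Counter({'nike': 2, 'shoes': 1, 'car': 1}); this tally is the only thing
# that would ever need to leave the device in the scenario I'm describing.
```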

[–] LoveSausage@discuss.tchncs.de 1 points 2 days ago (1 children)

No, you are wrong. Seems you're making things up as you go. More wake words to listen for means more battery drain. Fact.

But sure, let's play. Now that you've used your "wake-word counter", what use would that have? You have ZERO context then. Completely useless.

[–] CeeBee_Eh@lemmy.world 1 points 2 days ago (1 children)

No, you are wrong

Lol. "Nuh-uh" doesn't work with me.

https://stackoverflow.com/questions/64008486/effects-of-number-of-classes-on-inference-time-in-object-detection-api

Seems you're making things up as you go

I speak from knowledge and experience. What do you bring to the table?

More wake words to listen for means more battery drain. Fact.

1 trained class = 1 model

100 trained classes = 1 model

Tell me how running 1 model would drain more battery than running 1 model? I'll wait...

You have ZERO context then. Completely useless.

The person said "NIKE" a few times? Show them ads for shoes. The person said "mechanic", "car", and "fixed" around the same time? Show them ads for local car repair shops.

You don't need the full context of what was said to get useful signal from just the words. The spacing in time and the relationship between words can give you a whole lot of context. Plenty to target ads with.
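Here's a toy version of that kind of co-occurrence rule; the keyword groups, ad categories, and window length are all invented for illustration:

```python
from datetime import datetime, timedelta

# Toy co-occurrence rule: if a keyword group fires at least twice inside one
# window, map it to an ad category. Keyword groups, categories, and the
# window length are all invented for illustration.
AD_RULES = {
    "shoes": {"nike", "sneakers", "running"},
    "auto repair": {"mechanic", "car", "fixed"},
}
WINDOW = timedelta(minutes=30)

def matched_categories(detections):
    """detections: list of (keyword, timestamp) pairs."""
    categories = set()
    for category, group in AD_RULES.items():
        hits = sorted(ts for word, ts in detections if word in group)
        for start in hits:
            if len([t for t in hits if start <= t <= start + WINDOW]) >= 2:
                categories.add(category)
                break
    return categories

detections = [
    ("mechanic", datetime(2025, 4, 27, 10, 0)),
    ("car",      datetime(2025, 4, 27, 10, 5)),
    ("fixed",    datetime(2025, 4, 27, 10, 6)),
    ("nike",     datetime(2025, 4, 27, 18, 0)),   # a lone mention: no match
]
print(matched_categories(detections))   # {'auto repair'}
```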

Now, either come back with something real, or go away and concede you're out of your depth.

[–] LoveSausage@discuss.tchncs.de 1 points 13 hours ago (1 children)

I already did, multiple times; you just refuse to read it, like the little bitch troll you are. Context matters a lot, and you are wrong. How often do you say Nike? More interesting would be "I will buy a pair of new shoes", but shoes can be mentioned in tons of contexts, so you'd better have a way of separating them.

[–] CeeBee_Eh@lemmy.world 1 points 11 hours ago (1 children)

I already did, multiple times

No, you didn't, because you keep saying wrong things.

you just refuse to read it

I don't need to read it, because I read it when it came out... back in 2008. I read their stuff regularly, and I read everything else about this topic (AI tech) too. An article from 2008 is irrelevant at this point. Technology has advanced by leaps and bounds in 17 years. AI wasn't even a thing back then. Things like Picovoice didn't even exist until recently.

It also says a lot that your source of truth is a nearly 20-year-old article from Android Authority.

How often do you say Nike?

Personally? Never.

More interesting would be "I will buy a pair of new shoes", but shoes can be mentioned in tons of contexts, so you'd better have a way of separating them.

I don't know about "interesting", but I do agree that it would give much better context for targeting ads. But that's not what the discussion was about. I said way back that I'm not positioning this idea of phones listening as an absolute certainty. My whole point was that, at a technological level, it's well within technical means to accomplish the whole "our phones listen to what we say" thing, all without draining the battery enough to be outright noticeable.

Another thing to note is that most (if not all) of the anecdotal stories about people talking about a topic and then seeing ads for that thing involve pretty generic conversations. Even my own tests, which are anecdotal, confirm that. I never talk about boating. I never search anything about boats. I also never saw any ads about boats. Etc. So I did a little test of my own recently and openly talked about "getting the boat ready", "can't wait to go boating next week", "need to get the boat in the water and ready for the season", and so on. I did this for about an hour solid. Then I waited an hour and visited some generic websites that show ads, and lo and behold, there were lots of ads for buying a new propeller, ads for nearby marinas, ads for marina supply shops, ads for boating accessories, and so on.

Like I said, it's entirely anecdotal and in no way conclusive, but it does lead me to believe that there might be truth to the rumours. And it's the kind of thing I've heard from many other technical people who deliberately tried to trigger ads on topics they never deal with otherwise.

And like I said before, either come back with something real, or go away and concede you're out of your depth.

[–] LoveSausage@discuss.tchncs.de 1 points 7 hours ago (1 children)

You are the one basing your argument on an article from 2008, not me. You are completely deranged. Now come back with some hard evidence or go shout at the hollow moon in your tinfoil hat.

Your anecdotes are as good as a horoscope. Get real.

[–] CeeBee_Eh@lemmy.world 1 points 6 hours ago

You are the one basing your argument on an article from 2008, not me.

..... what? You literally linked the article from Android Authority, not me.

You are completely deranged.

Says the person claiming a model's computational power usage scales with the number of classes trained.

Now come back with some hard evidence

Hard evidence for what? I've never once claimed phones are listening to people's conversations. This whole thread has been about the technical viability of such a system, not evidence of its literal existence.

You, on the other hand, have spewed nonsense this whole time.

So like I've said more than once, come back with something real or stay in your lane.