The problem is, if you used normal compression formats, you would have to decompress them and then recompress them into the GPU-supported formats every time you wanted to load an asset. That would either significantly increase load times, or make streaming in new assets in real time much harder.
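As a rough Python sketch of the two load paths (the file names are made up, and `transcode_to_bc7` is a hypothetical stand-in for a real block-compression encoder; the point is only where the extra work lands):

```python
import time
from PIL import Image  # assumes Pillow is installed

def transcode_to_bc7(rgba_bytes: bytes) -> bytes:
    # Hypothetical placeholder for a real GPU block-compression encoder
    # (BC7, ASTC, ...); real encoders are a significant CPU cost.
    return rgba_bytes

def load_via_png(path: str) -> bytes:
    # "Normal" compression: decode the PNG, then re-encode for the GPU
    # on every single load.
    with Image.open(path) as img:
        rgba = img.convert("RGBA").tobytes()
    return transcode_to_bc7(rgba)

def load_pre_encoded(path: str) -> bytes:
    # Asset already stored in the GPU-ready format: just read the bytes.
    with open(path, "rb") as f:
        return f.read()

for loader, path in [(load_via_png, "grass.png"), (load_pre_encoded, "grass.bc7")]:
    t0 = time.perf_counter()
    loader(path)
    print(loader.__name__, time.perf_counter() - t0, "seconds")
```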
There are still other compression schemes that can be used to save space, and not compressing anything at all is a bad idea: it's not the biggest waste of space, but it is a waste.
Is there any way to do an additional decompression step without increasing load times and latency?
There are a number of compression algorithms that prioritize decompression speed (LZ4 and Zstandard, for example), usually at the expense of longer compression times.
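A minimal sketch of that, assuming the `lz4` and `zstandard` Python packages are installed; the asset data here is just a synthetic, compressible placeholder:

```python
import time
import lz4.frame          # pip install lz4
import zstandard as zstd  # pip install zstandard

# Synthetic stand-in for asset data.
data = (b"vertex_position_normal_uv_" * 4096) * 64

for name, compress, decompress in [
    ("lz4",  lz4.frame.compress, lz4.frame.decompress),
    ("zstd", zstd.ZstdCompressor(level=19).compress,
             zstd.ZstdDecompressor().decompress),
]:
    blob = compress(data)
    t0 = time.perf_counter()
    out = decompress(blob)
    dt = time.perf_counter() - t0
    assert out == data
    print(f"{name}: {len(data)} -> {len(blob)} bytes, decompressed in {dt*1000:.2f} ms")
```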
It can actually be quicker to store them compressed, because memory and bus bandwidth are often the bottleneck. So instead of the CPU or GPU wasting cycles waiting for data to be moved, some of that transfer time is traded for decompression work on the processors, especially if there are idle cores that could be put on that task.
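A minimal sketch of that idea, using a thread pool and zlib as stand-ins for a real loader and codec: chunks are decompressed in the background while the consumer works through the ones that are already done.

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

# Pretend these are compressed asset chunks read off disk.
chunks = [zlib.compress(bytes([i]) * 1_000_000) for i in range(16)]

def upload_to_gpu(data: bytes) -> None:
    # Placeholder for the real consumer (e.g. copying into a GPU buffer).
    pass

# Spare cores decompress upcoming chunks while the main thread consumes
# finished ones; zlib releases the GIL during decompression, so the
# threads can genuinely overlap here.
with ThreadPoolExecutor(max_workers=4) as pool:
    for decompressed in pool.map(zlib.decompress, chunks):
        upload_to_gpu(decompressed)
```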
As for going from one compression format to another, you could store the assets in the final format (converting on install if it differs between hardware setups, and repeating the conversion if different hardware is detected later).
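A sketch of that install-time conversion, where `detect_gpu_format` and `transcode` are hypothetical placeholders for real hardware detection and a real texture encoder; the marker file just records which format the assets were converted for, so a changed GPU triggers a re-run:

```python
import os

def detect_gpu_format() -> str:
    # Hypothetical: query the driver / graphics API for the preferred
    # block-compression format on this machine.
    return "BC7"

def transcode(src_path: str, dst_path: str, fmt: str) -> None:
    # Hypothetical: run a real texture encoder for `fmt` here.
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        dst.write(src.read())  # placeholder copy, no real encoding

def convert_on_install(asset_dir: str) -> None:
    fmt = detect_gpu_format()
    marker = os.path.join(asset_dir, ".converted_for")
    # Skip the work if the assets were already converted for this hardware.
    if os.path.exists(marker):
        with open(marker) as f:
            if f.read() == fmt:
                return
    for name in os.listdir(asset_dir):
        if name.endswith(".texsrc"):
            transcode(os.path.join(asset_dir, name),
                      os.path.join(asset_dir, name[:-7] + ".tex"), fmt)
    with open(marker, "w") as f:
        f.write(fmt)
```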
Though if there's any processing done on the uncompressed data (like generating mipmaps), that conversion might not even cost extra, because the data needs to be decompressed and the result compressed again anyway.
Though on that note, you'd get faster load times by storing all of that data preprocessed, and faster install times by just shipping it all in the install download, so there is still a conflict between optimal load speeds and minimal storage space.
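For the mipmap example, the preprocessing that could either run at load/install time or be baked into the shipped files is roughly this (a sketch using Pillow; the file name is made up):

```python
from PIL import Image  # assumes Pillow is installed

def generate_mipmaps(path: str) -> list:
    # Repeatedly halve the image down to 1x1; shipping these pre-built
    # trades download/install size for faster loads.
    img = Image.open(path).convert("RGBA")
    levels = [img]
    while levels[-1].width > 1 or levels[-1].height > 1:
        prev = levels[-1]
        levels.append(prev.resize((max(1, prev.width // 2),
                                   max(1, prev.height // 2)),
                                  Image.LANCZOS))
    return levels

mips = generate_mipmaps("grass.png")  # hypothetical example file
print([lvl.size for lvl in mips])
```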