GamingChairModel

joined 2 years ago

(the preview fetch is not e2ee afaik)

Technically, it is, but end-to-end encryption only covers the data between the ends, not what either end chooses to do with it. If one end of the conversation chooses to log the conversation in an insecure way, the conversation itself might technically be encrypted, but its contents can still be learned by someone else. The same goes if one end simply forwards a message to a new party that wasn't part of the original conversation.

The link previews are happening outside of the conversation, and that action can be seen by parties like the owner of the website, your ISP, and maybe WhatsApp itself (if it's configured that way; I'm not sure whether it is).
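To make that concrete, here's a rough sketch of how a client-side link preview can leak outside the encrypted channel (hypothetical code, not WhatsApp's actual implementation; `session.decrypt` and `render_preview` stand in for whatever the real app does):

```python
import re
import urllib.request

URL_PATTERN = re.compile(r"https?://\S+")

def handle_incoming(ciphertext: bytes, session) -> None:
    # The E2EE layer only protects the message in transit between the ends.
    plaintext = session.decrypt(ciphertext)  # hypothetical session object

    # If the client auto-generates a link preview, it makes an ordinary
    # HTTP(S) request *outside* that encrypted channel: the site owner
    # sees the request, and the ISP sees the connection metadata.
    match = URL_PATTERN.search(plaintext)
    if match:
        with urllib.request.urlopen(match.group(0)) as resp:  # observable fetch
            preview_html = resp.read(65536)  # enough for <title> / og: tags
        render_preview(preview_html)  # hypothetical UI call
```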

So end-to-end encryption isn't a panacea. You have to understand how it fits into the broader context of security and threat models.

[–] GamingChairModel@lemmy.world 12 points 2 weeks ago (1 children)

Can humans actually do it, though? Are humans actually capable of driving a car reasonably well using only visual data, or are we actually using an entire suite of sensors in our heads and bodies to understand our speed and orientation, road conditions, and our surroundings? Driving a car by video link is considerably harder than just driving a car normally, from within a car.

And even so, computers have a long way to go before they catch up with our visual processing. Our visual cortex does a lot of error correction of visual data, using proprioceptive sensors in our heads that silently and seamlessly delete the visual smudges and smears of motion as our heads move. The error correction adjusts quickly to recalibrate things when looking at stuff under water or anything with a different refractive index, or when looking at reflections in a mirror.

And we maintain that flow of visual data by correcting for motion and stabilizing the movement of our eyes to compensate for external motion. Maybe not as well as chickens, but we're pretty good at it. We recognize faulty sensor data and correct for it: moving our heads around obstructions, silently ignoring something that's blocking just one eye, blinking or rubbing our eyes when tears or water make it hard to focus. We also know when not to trust our eyes (in the dark, in fog, when temporarily blinded by lights), and fall back to other methods of understanding the world around us.

Throw in our sense of balance in our inner ears, our ability to direction-find on sounds, our ability to process vibrations in our seat and tactile feedback on a steering wheel, and the proprioception of feeling forces on our body or specific limbs, and we have an entire system that uses much more than visual data to make decisions and model the world around us.

There's no reason why an artificial system needs to use exactly the same types of sensors as humans or other mammals do. And we have preexisting models and memories of what is or was around us, like when we walk around our own homes in the dark. But my point is that we rely on much more than our eyes, processed through an image-processing system far more sophisticated than the current state of AI vision. Why hold back on using as much sensor data as possible to build a system with good, reliable sensor data of what is on the road?
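As a toy illustration of that kind of fusion (my own sketch, not anything from an actual driving stack), here's a complementary filter that blends a camera-derived speed estimate with wheel odometry, leaning away from vision exactly when it's unreliable:

```python
def fuse_speed(vision_speed: float,
               vision_confidence: float,  # 0.0 (fog, glare) to 1.0 (clear view)
               wheel_speed: float) -> float:
    """Toy complementary filter: weight each estimate by how much we trust it.

    Mirrors the human behavior above: when the eyes are unreliable (dark,
    fog, blinded by lights), lean on the other senses instead.
    """
    w = max(0.0, min(1.0, vision_confidence))
    return w * vision_speed + (1.0 - w) * wheel_speed

# Clear day: trust the camera. Heavy fog: fall back to wheel odometry.
print(fuse_speed(30.2, 0.9, 29.5))  # ~30.1, vision-dominated
print(fuse_speed(12.0, 0.1, 29.5))  # ~27.8, odometry-dominated
```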

[–] GamingChairModel@lemmy.world 3 points 2 weeks ago

But the big one here is the characteristic word. By adding Fenyx Rising, it could be argued that, in addition to the material differences between the products, there is enough separation to ensure there is no risk of confusion from audiences. There are also multiple Immortals trademarks, which could make that word in and of itself less defensible depending on the potential conflict.

That's basically it right there. The word "immortal" has multiple dictionary definitions tracing back long before any trademark, including a prominent ancient military unit (the Persian Immortals), so any trademark on that word isn't strong enough to prevent its use as a normal word, or even as part of another trademark when used descriptively.

The strongest trademark protection goes to words that are totally made up for the purpose of the product or company, like Hulu or Kodak.

Next up are probably mashups or modifications of existing words that are still distinct coinages, like GeForce or Craisins.

Next up are words that have meaning but are completely unrelated to the product itself, like Apple (computers), Snickers (the candy bar), or Tide (the laundry detergent).

Next up are suggestive marks where the trademark relies on the meaning to convey something about the product itself, but still retains some distinctiveness: InSinkErator is a brand of in-sink disposal, Coffee Mate is a non-dairy creamer designed for mixing into coffee, Joy-Con is a controller designed to evoke joy, etc.

Some descriptive words don't get trademark protection until they enter the public consciousness as a distinct indicator of a product's origin or manufacturer. Name-based businesses often fall into this category, like a restaurant named after the owner, and don't get protection until they're well known enough (McDonald's is the classic example).

It can get complicated, but the basic principle underlying all of it is that if you choose a less unique word as the name of your trademark, you'll get less protection against others using it.

[–] GamingChairModel@lemmy.world 57 points 3 weeks ago (1 children)

"Whistleblows" as if he's some kind of NVIDIA insider.

[–] GamingChairModel@lemmy.world 3 points 3 weeks ago (1 children)

I know way too many people who only used Twitter for sports. But after /r/nfl and /r/nba and all the sports subreddits blocked Twitter, I think a lot of the organic sports activity will die off.

[–] GamingChairModel@lemmy.world 10 points 3 weeks ago

Loops really isn't ready for primetime. It's too new and unpolished, and will need a bit more time.

I wonder if PeerTube can scale. YouTube has a whole sophisticated system for ingesting and transcoding videos into dozens of formats, with tradeoffs made between computational complexity and file size/bandwidth, which requires some projection of which videos will be downloaded most often in the future (and by which types of clients, with support for which codecs, etc.). Doing this takes a lot of networking/computing/memory/storage resources, and I wonder if the software is up to it.
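For a sense of scale, here's roughly what that fan-out looks like per upload (my own sketch using ffmpeg; PeerTube's actual pipeline and rendition ladder will differ):

```python
import subprocess

# One upload fans out into many renditions; each one is a CPU-heavy encode,
# and every finished file has to be stored and served.
RENDITIONS = [
    # (height, video bitrate, x264 preset) -- slower presets trade CPU time
    # for smaller files, which only pays off on videos that get watched a lot.
    (1080, "5000k", "slow"),
    (720,  "2800k", "medium"),
    (480,  "1400k", "medium"),
    (360,  "800k",  "fast"),
]

def transcode(src: str) -> None:
    for height, bitrate, preset in RENDITIONS:
        subprocess.run([
            "ffmpeg", "-i", src,
            "-vf", f"scale=-2:{height}",  # keep aspect ratio, force even width
            "-c:v", "libx264", "-preset", preset, "-b:v", bitrate,
            "-c:a", "aac", "-b:a", "128k",
            f"{src}.{height}p.mp4",
        ], check=True)

transcode("upload.webm")  # hypothetical input file
```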

[–] GamingChairModel@lemmy.world 24 points 3 weeks ago (3 children)

Networking standards started picking winners during the PC revolution of the '80s and '90s. Ethernet, with its first standards announced in 1983, ended up beating out pretty much every other LAN standard at the physical layer (physical plugs, voltages, and other ways of indicating signals) and the data link layer (the structure of a MAC address or an Ethernet frame). And this series of standards has been improved many times over, with meta-standards about how to deal with so many generations of standards through autonegotiation and backwards compatibility.

We generally expect Ethernet to just work, at the highest speeds the hardware is capable of supporting.
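"Just work" here includes the OS knowing what got negotiated. On Linux, for instance, the kernel exposes it directly; a minimal sketch (assuming a Linux sysfs and an interface named eth0):

```python
from pathlib import Path

def link_speed_mbps(iface: str = "eth0") -> int:
    # The kernel reports the autonegotiated speed in Mb/s here;
    # expect -1 (or an error) if the link is down.
    return int(Path(f"/sys/class/net/{iface}/speed").read_text())

print(link_speed_mbps())  # e.g. 1000 on a Gigabit link
```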

[–] GamingChairModel@lemmy.world 3 points 1 month ago

The multiple lenses in a traditional professional photography setup are stacked in front of each other, so they stick out a lot. The multiple cameras on the back of a phone are a workaround for getting good image quality and versatile zoom without making the lens stick out too far.

[–] GamingChairModel@lemmy.world 6 points 1 month ago (1 children)

Each physical lens has a single focal length. "Wide" lenses have a wide field of view, and "telephoto" lenses can make very far away things look big on the screen. Have you ever tried to take a photo of the moon with your regular cell phone camera at default zoom? The moon itself is tiny, because the angular diameter of the moon from the surface of the earth is only about half a degree (out of a 360-degree circle). So you need a very long focal length to get the moon to fill up a photograph. Often, in sports, the sidelines have photographers with huge lenses trying to capture intricate detail (beads of sweat, the texture of a ball) from 50-100 meters away.
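The rough math: an object with angular diameter θ projects an image about f·tan(θ) tall on the sensor, so filling even a full-frame sensor (24 mm tall) with the ~0.5° moon takes a multi-thousand-millimeter lens:

```python
import math

theta = math.radians(0.5)   # the moon's angular diameter
sensor_height_mm = 24.0     # full-frame sensor height

# Image height on the sensor is roughly focal_length * tan(theta);
# solve for the focal length that fills the frame top to bottom.
focal_length_mm = sensor_height_mm / math.tan(theta)
print(f"{focal_length_mm:.0f} mm")  # ~2750 mm
```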

You can stack multiple lenses in front of each other and vary the distance between them to "zoom" across different focal lengths. That versatility is great, and zoom lenses are very common on cameras. But because this requires stacking multiple lenses, the lens assembly as a whole ends up sticking out pretty far. Bad form factor for a phone.

So cell phones use a bunch of fixed-focal-length cameras to make the lenses protrude less from the body of the phone, and use software to choose between them: wide, medium, telephoto, or maybe even a super telephoto.

And once they had that in place, there were a few tricks where the software evaluates two or more cameras simultaneously to capture more information with less blur, filling in more image detail than any one camera could with its sensor hardware. So there are a bunch of computational photography tricks that make cell phone cameras look better than their small, limited hardware would suggest.
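One of the simplest of those tricks is burst averaging: capture several short exposures and average them, which cuts random sensor noise by roughly √N. A toy numpy sketch (real pipelines align the frames first, which I skip here):

```python
import numpy as np

def burst_average(frames: list[np.ndarray]) -> np.ndarray:
    """Average N aligned frames; random sensor noise shrinks ~sqrt(N)."""
    stack = np.stack([f.astype(np.float32) for f in frames])
    return stack.mean(axis=0).astype(np.uint8)

# Simulate a noisy 8-shot burst of an (already aligned) flat gray scene.
rng = np.random.default_rng(0)
scene = np.full((64, 64), 128.0, dtype=np.float32)
burst = [np.clip(scene + rng.normal(0, 20, scene.shape), 0, 255).astype(np.uint8)
         for _ in range(8)]
print(burst[0].std(), burst_average(burst).std())  # noise drops ~sqrt(8)x
```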

[–] GamingChairModel@lemmy.world 2 points 1 month ago (1 children)

It's still the same issue, RAID or Ceph. If a physical drive can only write 100 MB/s, a 36TB drive will take 360,000 seconds (6,000 minutes, or 100 hours) to fill. During that 100-hour window, you'll be down a drive and vulnerable to a second failure. Both RAID and Ceph can be configured for more redundancy at the cost of storage capacity, but even Ceph fails (dropping to read-only mode, or losing data) if too many physical drives fail.
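The back-of-the-envelope math, for anyone who wants to plug in their own drives:

```python
drive_tb = 36
write_mb_per_s = 100  # sustained sequential write of one drive

seconds = drive_tb * 1_000_000 / write_mb_per_s  # 36,000,000 MB at 100 MB/s
print(f"{seconds:,.0f} s = {seconds / 3600:.0f} hours")  # 360,000 s = 100 hours
```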

[–] GamingChairModel@lemmy.world 5 points 1 month ago

all the quadratic communication and caching growth it requires.

I have trouble visualizing and understanding how the Internet works at scale, but I can generally grasp how page-by-page or resource-by-resource requests work. What I struggle to understand is how one could efficiently parse the firehose of activity coming from every user on every instance that your own users follow, at least in user-focused services like Mastodon (or Twitter or Bluesky).

With Lemmy, many people follow the biggest communities with the most activity, so caching naturally scales. But with Twitter-style follows of individual accounts, there's a long tail: lots of accounts followed by only a few people each. The most efficient approach is to just ignore the small accounts, but that obviously degrades the experience for a large number of accounts. On the other hand, keeping up with the many small accounts means spending most of your resources on content very few people want to see.

A centralized service has to struggle with this as well, but might have better control over caching and other on-demand retrieval of content in lower demand, without inadvertently DDoSing someone else's server.
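Here's a toy model of why that long tail hurts (my own illustration, not Mastodon's or Lemmy's actual delivery code): the inbound firehose scales with the number of distinct remote accounts anyone locally follows, while timeline fan-out scales with total follow edges, and the long tail keeps both numbers large for little shared benefit:

```python
def federation_load(follows: dict[str, set[str]],
                    posts_per_day: dict[str, int]) -> tuple[int, int]:
    # Inbound firehose: your server receives each remote post once per
    # distinct followed account; a long tail of small accounts means many
    # distinct senders serving very few local readers each.
    followed = set().union(*follows.values())
    inbound = sum(posts_per_day.get(acct, 0) for acct in followed)
    # Timeline fan-out: pushing posts into local feeds scales with edges.
    edges = sum(len(f) for f in follows.values())
    return inbound, edges

follows = {"alice": {"a@x", "b@y"}, "bob": {"b@y", "c@z"}}
posts = {"a@x": 3, "b@y": 10, "c@z": 1}
print(federation_load(follows, posts))  # (14, 4)
```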

 

Curious what everyone else is doing with all the files that are generated by photography as a hobby/interest/profession. What's your working setup, how do you share with others, and how are you backing things up?
