sugar_in_your_tea

joined 2 years ago
[–] sugar_in_your_tea@sh.itjust.works 35 points 15 hours ago (4 children)

When AWS went down, users lost access to the app that manages its water-cooled coils, leaving them stuck with whatever setting was last active.

That's ridiculous. The app should talk to the device directly over wifi when available; the cloud should only be used to connect from outside the local network.

Why is everything so crappy?

The only true part is they are alone.

Don't need arms to make babies...

I, too, like Dutch people.

Ok, so it's effectively the same as P2P, just with some guarantees about how many copies you have.

In a P2P setup, your data would be distributed according to some mathematical formula such that it's statistically very unlikely your data would be lost if N clients disconnect from the network. The larger the network, the more likely your data is to stick around. Think of bittorrent, but where you're randomly selected to seed some number of files in addition to the files you explicitly opt into.

The risk w/ something like Nostr is that a lot of people pick the same relays, and those relays go down. With the P2P setup I described, data is placed by a mathematical formula, not human decision, so you're more likely to still have access to that data even if a whole country shuts off its internet or something.
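That "statistically very unlikely" claim is easy to make concrete. Here's a quick sketch (the network size and replica count are made-up numbers, and it assumes replicas land on uniformly random nodes, e.g. via a DHT hash):

```python
from math import comb

def p_data_lost(total_nodes: int, replicas: int, offline: int) -> float:
    """Probability that every replica of an item sits on an offline node,
    assuming replicas are placed on a uniformly random subset of nodes."""
    if offline < replicas:
        return 0.0  # at least one replica must survive
    return comb(offline, replicas) / comb(total_nodes, replicas)

# Even if half of a 10,000-node network disconnects at once, 20 random
# replicas make losing an item astronomically unlikely (on the order of 1e-6):
print(p_data_lost(10_000, 20, 5_000))
```

Compare that to relay-based designs, where the loss probability depends on which specific relays people happened to pick.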

Either solution is better than Lemmy/Mastodon or centralized services in terms of surviving something like AWS going down.

Simplex is ready today, assuming you just want 1:1 messaging.

[–] sugar_in_your_tea@sh.itjust.works 0 points 1 day ago (2 children)

How sure are you? Assign a percentage chance to it and the cost of exposing old messages, and compare that to the cost of this dev effort.

We know governments are using it, and there's likely a lot of sensitive data transmitted through Signal, so the cost of it happening in the next 20 years would be substantial. Even if the chance of that timeline happening is small, there's still value in investing in forward secrecy.
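The comparison I'm suggesting is just expected value. A sketch, with every number a made-up placeholder (plug in your own estimates, these are not claims about Signal):

```python
# Hypothetical numbers purely for illustration -- not estimates.
p_break = 0.10                 # chance encrypted archives become decryptable in 20 yrs
cost_of_exposure = 10_000_000  # harm ($) if archived sensitive traffic is exposed
dev_effort = 500_000           # cost ($) of implementing forward secrecy now

expected_loss = p_break * cost_of_exposure
print(expected_loss > dev_effort)  # True: mitigation pays for itself in expectation
```

Even a small probability times a large exposure cost can justify the dev effort.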

[–] sugar_in_your_tea@sh.itjust.works 1 points 1 day ago (2 children)

Monero isn't like the other three, it's P2P with no single points of failure.

I haven't looked too closely at Nostr, but I'm assuming it's typically federated, with relays acting like Lemmy/Mastodon instances in terms of data storage (it's a protocol, so I suppose posts could be stored locally, and switching relays is easy). If your instance goes down, you're just as screwed as you would be with a centralized service, because each Lemmy or Mastodon instance is effectively a centralized service that happens to share data with the others. If your instance stays up but a major one goes down, your experience will still be significantly degraded.

The only way to really solve this problem is with P2P services, like Monero, or with enough diversity in your infrastructure that a single major failure doesn't kill the service. P2P is easy for something like a currency, but much more difficult for social media, where you expect some amount of moderation; redundancy, meanwhile, is expensive and complex.

[–] sugar_in_your_tea@sh.itjust.works 2 points 1 day ago* (last edited 1 day ago)

Yup, I remember reading the manual on the toilet while my sibling was playing so when it was my turn, I'd have a leg up. We would take turns, cheering each other on as we got past a difficult part, and sharing secrets that we found.

With the internet, I can just look up a walkthrough pretty soon after the game launches, so I have no reason to look at the manual (if there is one) or talk to anyone else.

I think that's why competitive MP has taken off. People want that social experience, and that's filling in for what used to exist. I remember PvP being a thing, but I also remember helping each other out on a SP game being a thing, so both were social activities (if it wasn't a sibling, it was a friend or coworker).

[–] sugar_in_your_tea@sh.itjust.works 1 points 2 days ago* (last edited 2 days ago)

Unit tests aren't intended to find bugs; they're intended to prove correctness. There should be a separate QA process for finding bugs, which involves integration testing. When QA inevitably finds a bug, the unit tests get updated with that case (and any similar cases).

> only cover cases that you know will work

And that's what code reviews are for. If your tests don't sufficiently cover the logic, the change should be rejected until they do. It's a lot easier to verify the tests cover the logic if the tests are submitted w/ the logic changes.
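To illustrate the loop I mean: QA finds a bug, the fix ships together with a unit test that pins the corrected behavior forever. The `parse_price` function here is purely hypothetical:

```python
# Hypothetical example: say QA found that parse_price("1,234.50") returned 1.0
# because the thousands separator wasn't handled. The fix and the regression
# test get reviewed and merged together.

def parse_price(text: str) -> float:
    """Parse a human-formatted price like '1,234.50' into a float."""
    return float(text.replace(",", ""))

def test_parse_price_handles_thousands_separator():
    # Regression test for the bug QA found, plus similar cases.
    assert parse_price("1,234.50") == 1234.50
    assert parse_price("1,000,000") == 1_000_000.0
    assert parse_price("42") == 42.0

test_parse_price_handles_thousands_separator()
```

Now the reviewer can see at a glance that the logic change and its coverage arrived together.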

I did it in a few weeks. I basically swapped discs while playing games, before going to work, before bed, etc. It was tedious, but I got them all.

Now when I buy one, I'll rip it first before watching.

[–] sugar_in_your_tea@sh.itjust.works 1 points 2 days ago* (last edited 2 days ago)

Yup, my SO isn't very technically inclined, so describing anything I do in simple terms makes it sound super lame.

For example, I'm working on a P2P reddit clone, and here's how it goes:

Me: It's like reddit, but there's no website and it's all on your computer.

SO: So a note pad?

Me: No, it's on other people's computers too, and anything I write shows up instantly on their computers.

SO: Notepad with Dropbox?

Me: No, I don't have all of the data, and neither does anyone else. We only get what we subscribe to. Like reddit!

SO: So Dropbox, but with multiple documents.

Me: Sure...

SO: (pats me on the head) I'm glad you're excited about it, now do the dishes.

21
openSUSE Leap 16 Enters Beta (news.opensuse.org)
submitted 5 months ago* (last edited 1 month ago) by sugar_in_your_tea@sh.itjust.works to c/opensuse@lemmy.world
 

I didn't notice this until the other post about them potentially deprecating YaST (or at least putting it in maintenance mode). I figured we could use a thread to discuss other changes coming in Leap 16.

 

Current setup:

  • one giant docker compose file
  • Caddy TLS trunking
  • only exposed port is Caddy

I've been trying out podman, and I got a new service running (seafile) via `podman generate kube` so I can run it w/ `podman kube play`. My understanding is that the "podman way" is to use quadlets, meaning .container, .network, etc. files managed by systemd, so I tried out `podlet podman kube play` to generate a systemd-compatible file, but it just spat out a .kube file.

Since I'm just starting out, it wouldn't be a ton of work to convert to separate unit files, or I can continue with the .kube file way. I'm just not sure which to do.

At the end of this process, here's what I'd like in the end:

  • Caddy's port is the only one exposed - could block the rest w/ a firewall, but it would be nice if the other services talked over a hidden network
  • each service works as its own unit, so I can reuse ports and whatnot - I may move services across devices eventually, and I'd rather not have to remember custom ports and instead use host names
  • automatically update images - shouldn't change the tag, just grab the latest from that tag

Is there a good reason to prefer .kube over .container et al, or vice versa? Which is the "preferred" way to do this? Both are documented on the same "quadlet" doc page, which just describes the acceptable formats. I don't think I'll want kubernetes anytime soon, so the only reason I went that way is that it looked similar to compose.yml and I saw a guide for it, but I'm willing to put in some work to port from it if needed (and the docs for the kube yaml file kinda suck). I just want a way to ship around a few files so moving a service to a new device is easy. I'll only really have like 3-4 devices (NAS, VPS, and maybe an RPi or two), and I currently only have one (NAS).

Also, is there a customary place to stick stuff like config files? I'm currently using my user's home directory, but that's not great long-term. I'll rarely need to touch these, so I guess I could stick them on my NAS mount (currently /srv/nas/) next to the data (/srv/nas//). But if there's a standard place to stick this, I'd prefer to do that.

Anyway, just looking for an opinionated workflow to follow here. I could keep going with the kube yaml route, or I could switch to the .container route; I don't mind either way since I'm still early in the process. I'm currently thinking of porting to the .container method to try it out, but I don't know if that's the "right" way or if .kube with a yaml config is the "right" way.
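For reference, here's a minimal .container quadlet sketch for the Caddy piece, since that's the shape I'm weighing against the .kube file. Every name, path, and the shared network are assumptions for illustration, not my actual setup:

```ini
# ~/.config/containers/systemd/caddy.container (rootless) -- hypothetical example
[Unit]
Description=Caddy reverse proxy

[Container]
Image=docker.io/library/caddy:latest
PublishPort=443:443
Volume=%h/caddy/Caddyfile:/etc/caddy/Caddyfile:Z
# Back-end services join this shared network (defined in a sibling
# internal.network quadlet) and are reachable by container name,
# without publishing their own ports:
Network=internal.network
# Opt this container into `podman auto-update` pulls for its tag:
AutoUpdate=registry

[Service]
Restart=always

[Install]
WantedBy=default.target
```

The appeal of this route is that each service is one small, self-contained file you can copy to a new device, which matches what I said I want above.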

 

Apparently US bandwidth was reduced to 1TB for their base plan, though they have 20TB for the same plan in Europe. I don't use much bandwidth right now, but I could need more in the future depending on how I do backups and whatnot.

So I'm shopping around in case I need to make a switch. Here's what I use it for:

  • VPN to get around CGNAT - so all traffic for my internal services goes through it
  • HAProxy - forwards traffic to my various services
  • small test servers - very low requirements, basically just STUN servers
  • low traffic blog

Hard requirements:

  • custom ISO, or at least openSUSE support
  • inexpensive - shooting for ~$5/month, I don't need much
  • decent bandwidth (bare minimum 50mbps, ideally 1gbps+), with high-ish caps - I won't use much data most of the time (handful of GB), but occasionally might use 2-5TB

Nice to have:

  • unmetered/generous bandwidth - would like to run a Tor relay
  • inexpensive storage - need to put my offsite backups somewhere
  • API - I'm a nerd and like automating things :)
  • location near me - I'm in the US, so anywhere in NA works

Not needed:

  • fast processors
  • lots of RAM
  • loose policies around torrenting and processing (no crypto or piracy here)
  • support features, recipes, etc - I can figure stuff out on my own

I'll probably stick with Hetzner for now because:

  • pricing is still fair (transfer is in line with competitors)
  • can probably move my server to Germany w/o major issues for more bandwidth
  • they hit all of the other requirements, nice to haves, and many unneeded features

Anyway, thoughts? The bandwidth change pisses me off, so let me know if there's a better alternative.

 

I found the graph at 10:55 to be especially interesting because it shows how someone with around the median income ($65k) can make it to the lower upper class by retirement through some discipline (10% saved per year).

As a quick TL;DW, here are the median incomes, net worth, and percent of population for each class:

  • lower - $34k income, $3.4k net worth (many are negative) - 25%
  • middle
    • lower - $44k income, $71k net worth - 20%
    • middle - $81k income, $159k net worth - 20%
    • upper - $117k income, $307k net worth - 20%
  • upper
    • lower - $189k income, $747k net worth - 10%
    • upper - $378k income, $2.5M net worth - 5%
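His claim about the median earner is easy to sanity-check with compounding: 10% of $65k saved over a 40-year career clears the lower-upper net-worth median. The 7% average annual return here is my assumption, not a number from the video:

```python
def future_value(annual_savings: float, rate: float, years: int) -> float:
    """Future value of a fixed annual contribution at a constant return,
    contributed at the start of each year."""
    total = 0.0
    for _ in range(years):
        total = (total + annual_savings) * (1 + rate)
    return total

# 10% of a $65k income, invested at 7%/yr for 40 years:
nest_egg = future_value(65_000 * 0.10, 0.07, 40)
print(f"${nest_egg:,.0f}")  # comfortably above the $747k lower-upper median
```

Even at a more pessimistic return, the discipline of the 10% savings rate does most of the work.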

Some questions to spark discussion:

  • Do you agree with his breakdown of the economic classes? Why or why not?
  • What strategies do you think someone in each category should take to improve their situation?
  • If you don't mind sharing, what class do you think you're in, and does the breakdown match your experience?
 

Here's what I currently have:

  • Ryzen 1700 w/ 16GB RAM
  • GTX 750 ti
  • 1x SATA SSD - 120GB, currently use <50GB
  • 2x 8TB SATA HDD
  • runs openSUSE Leap, considering switch to microOS

And the main services I run (total disk usage for OS + services, excluding data):

  • NextCloud - possibly switch to ownCloud infinite scale
  • Jellyfin - transcoding is nice to have, but not required
  • samba
  • various small services (Unifi Controller, vaultwarden, etc)

And services I plan to run:

  • CI/CD for Rust projects - infrequent builds
  • HomeAssistant
  • maybe speech to text? I'm looking to build an Alexa replacement
  • Minecraft server - small scale, only like 2-3 players, very few mods

HW wishlist:

  • 16GB RAM - 8GB may be a little low longer term
  • 4x SATA - may add 2 more HDDs
  • m.2 - replace my SATA SSD; ideally 2x for RAID, but I can do backups; performance isn't the concern here (1x sata + PCIe would work)
  • dual NIC - not required, but would simplify router config for private network; could use USB to Eth dongle, this is just for security cameras and whatnot
  • very small - mini-ITX at the largest; I want to shove this under my bed
  • very quiet
  • very low power - my Ryzen 1700 is overkill, this is mostly for the "quiet" req, but also paying less is nice

I've heard good things about N100 devices, but I haven't seen anything w/ 4x SATA or an accessible PCIe for a SATA adapter.

The closest I've seen is a ZimaBlade, but I'm worried about:

  • performance, especially as a CI server
  • power supply - why couldn't they just do regular USB-C?
  • access to extra USB ports - they're hidden inside the case

I don't need x86 for anything, ARM would be fine, but I'm having trouble finding anything with >8GB RAM and SATA/PCIe options are a bit... limited.

Anyway, thoughts?

86
submitted 1 year ago* (last edited 1 year ago) by sugar_in_your_tea@sh.itjust.works to c/thefarside@sh.itjust.works
 

Horse styles of the ’50s

33
submitted 1 year ago* (last edited 1 year ago) by sugar_in_your_tea@sh.itjust.works to c/thefarside@sh.itjust.works
 

For crying out loud, Jonah! Three days late, covered with slime, and smelling like fish! … And what story have I got to swallow this time?

-1
submitted 1 year ago* (last edited 1 year ago) by sugar_in_your_tea@sh.itjust.works to c/thefarside@sh.itjust.works
 

You know what I’m sayin’? … Me, for example. I couldn’t work in some stuffy little office. … The outdoors just calls to me.

5
submitted 1 year ago* (last edited 1 year ago) by sugar_in_your_tea@sh.itjust.works to c/thefarside@sh.itjust.works
 

Look! Look, gentlemen! Purple mountains! Spacious skies! Fruited plains! … Is someone writing this down?

2
submitted 1 year ago* (last edited 1 year ago) by sugar_in_your_tea@sh.itjust.works to c/thefarside@sh.itjust.works
 

Sure, I’m a creature—and I can accept that … but lately it seems I’ve been turning into a miserable creature.
