TCB13

joined 2 years ago
[–] TCB13@lemmy.world 0 points 2 days ago

In the same order you asked:

Safe, easy to use, polished and reliable; someone else trying to emulate the first one's success; a poorly executed open-source alternative you can self-host.

All of them speak git and are essentially web UIs built to manage it: creating repositories and setting up permissions.

[–] TCB13@lemmy.world 0 points 2 weeks ago

Debian repositories include the dav module by default. Not sure what's going on with Docker.

[–] TCB13@lemmy.world 1 points 2 weeks ago (2 children)

Nginx is easy to set up as a WebDAV server.
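
A minimal sketch of such a config, assuming Debian's stock nginx (the core dav module is built in; full client compatibility also needs the `libnginx-mod-http-dav-ext` package for PROPFIND/OPTIONS). Hostname and paths are examples:

```
# /etc/nginx/sites-available/webdav — minimal WebDAV share (names/paths are examples)
server {
    listen 80;                      # put TLS in front of this in production
    server_name dav.example.com;

    root /srv/webdav;

    location / {
        dav_methods PUT DELETE MKCOL COPY MOVE;   # core ngx_http_dav_module
        dav_ext_methods PROPFIND OPTIONS;         # libnginx-mod-http-dav-ext
        create_full_put_path on;
        dav_access user:rw group:rw all:r;

        auth_basic "WebDAV";
        auth_basic_user_file /etc/nginx/htpasswd;

        client_max_body_size 0;                   # don't cap upload size
    }
}
```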

[–] TCB13@lemmy.world 0 points 3 weeks ago* (last edited 3 weeks ago)

I have the exact same problem. Can't post anything new, but comments work.

[–] TCB13@lemmy.world 0 points 3 weeks ago

Until you find out they're using an old version of macOS… or Windows 8. 😂

[–] TCB13@lemmy.world 5 points 1 month ago (3 children)

All his files are secure and properly synced... unlike Nextcloud.

[–] TCB13@lemmy.world 2 points 1 month ago (5 children)

SpamAssassin is useless these days; you'd better be using rspamd.

https://workaround.org/ispmail-bookworm/catching-spam-with-rspamd/
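
In case it helps anyone following that guide: hooking rspamd into Postfix is just a milter. A minimal sketch, assuming rspamd's default proxy worker socket:

```
# /etc/postfix/main.cf — hand incoming and locally submitted mail to rspamd
# (11332 is rspamd's default proxy worker / milter port)
smtpd_milters = inet:127.0.0.1:11332
non_smtpd_milters = inet:127.0.0.1:11332
milter_default_action = accept   # fail open so mail isn't lost if rspamd is down
```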

[–] TCB13@lemmy.world 2 points 1 month ago (1 children)

Some people can't because they need updated proofing tools and that version no longer has updates.

[–] TCB13@lemmy.world 0 points 1 month ago* (last edited 1 month ago)

> They do lock you in on handheld devices, but that seems to be a consequence of the fact that they are storing all emails encrypted on the server. After reading this link ("[…]Since IMAP can't decrypt your emails[…]"), I agree that they are just implementing PGP with extra steps and creating an unneeded layer (the bridge).

Yes, that's precisely the problem there. You can use PGP with any generic IMAP provider and that will work just fine with handheld devices. There are multiple mail clients capable of doing it, and all your mail is still encrypted on the server. Proton just made an alternative implementation that forces you into proprietary systems because it's more convenient for them.

Those kinds of setups, where the server encrypts your mail and still delivers it over IMAP, are fairly easy to implement; here's an example. They simply decided to go all proprietary.

> The reason I would not compare it to XMPP is because they are still using SMTP. It is when they stop using SMTP or force others to use something else that I would compare them to XMPP.

On a generic mail system, SMTP is used in two places: 1) from your mail client to your provider and 2) between your provider and other providers. Proton is NOT using SMTP for the first step, making it non-standard and much more closed.
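
For reference, step 1 is plain SMTP submission, which any standards-compliant provider accepts from any client. A quick way to exercise it with swaks (host, port and addresses are placeholders):

```
# Standard SMTP submission (step 1) against a generic provider.
# swaks prompts for the password when --auth is used without --auth-password.
swaks --server mail.example.com:587 --tls \
      --auth LOGIN --auth-user you@example.com \
      --from you@example.com --to friend@example.org \
      --header "Subject: test" --body "plain old SMTP submission"
```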

[–] TCB13@lemmy.world 1 points 1 month ago (2 children)

> I want to learn about PGP and how to encrypt email. Someone sells that service, great. And it is not like I cannot send normal emails to anyone else.

I don't disagree with you, I believe it as well. PGP as it stands is cumbersome.

The thing is that they could've still implemented an easy-to-use, "just log in and send email" type of web client and abstracted the user away from the PGP complexities while still delivering everything over IMAP/SMTP.

> They are using the same standard, not some made-up version of SMTP (when sending to other servers; I assume any email from client A to client B, both being Proton customers, never leaves their server, so no need for a new protocol).

You assume correctly, but when your mail client sends an email, instead of using SMTP to submit it to their server, it uses a proprietary API in a proprietary format, and the same goes for receiving email.

This is well documented, and to prove it further: if you want to configure Proton in a generic mail client like Thunderbird, you're required to install a "bridge", a piece of software that essentially simulates a local IMAP and SMTP server (which Thunderbird communicates with) and then converts those requests into requests their proprietary API understands. There are various issues with this approach. The most obvious one is that it's an extra step; there's also the issue that on iOS, for example, you're forced to use their mail app because you can't run the bridge there.
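
For the record, what the bridge exposes is plain IMAP/SMTP bound to localhost (the ports below are the documented defaults and configurable in the bridge UI); everything beyond that hop is their proprietary API:

```
# Poke the bridge's local endpoints; both speak STARTTLS with a locally
# generated certificate (default ports, per the bridge's documentation).
openssl s_client -connect 127.0.0.1:1143 -starttls imap   # local IMAP
openssl s_client -connect 127.0.0.1:1025 -starttls smtp   # local SMTP
```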

The bridge is an afterthought to support generic email clients and generic protocols; it only works how and where they say it should work, and may be taken away at any point.

> while being fully open source using open standards

Delivering your data over proprietary APIs doesn't count as "open standards" - sorry.

[–] TCB13@lemmy.world 0 points 1 month ago* (last edited 1 month ago)

> Would it be inaccurate to say that your fear is that Proton pulls an "Embrace, Extend, Extinguish" move?

No, it wouldn't. But they never "embraced" in the first place, as there was never direct IMAP to their servers; instead it's a proprietary API serving data in a proprietary format.

> I also see how that would make Proton like WhatsApp, which has its own protocol and locks its users in.

The problem isn't that taking down the bridge would make Proton like WhatsApp. It's the other way around: when they decided to build their internals with proprietary protocols and solutions instead of e.g. IMAP+SMTP, they became the WhatsApp. Those things shouldn't be add-ons or an afterthought; they should be built into the core.

This clearly shows that making open solutions ranks very low on their company and engineering priority list. If it were at the top, they would've built it around IMAP instead.

> I could download an archive of everything I have on Proton without a hitch.

Yes you can, but the data will come in proprietary formats that are hard to import anywhere else - at least for some of the data. They've improved this situation, but it's still less than ideal. In the beginning they would export contacts and calendars in some JSON format; I see they've moved to vCard and iCal now.

 

cross-posted from: https://lemmy.world/post/23071801

Considering a lot of people here are self-hosting both private stuff, like a NAS, and public stuff, like websites and whatnot, how do you approach segmentation in the context of virtual machines versus dedicated machines?

This is generally how I see the community approaching this:

Scenario 1: Air-Gapped, Fully Isolated Machine for Public Stuff

Two servers: one for the internal stuff (NAS) and another for the public stuff (websites, email, etc.), totally isolated from your LAN. Preferably with a public IP that is not the same as your LAN's, where traffic to that machine doesn't go through your main router. E.g. a switch between the ISP ONT and your router that also has a cable connected to the isolated machine. This way the machine is completely isolated from your network and not dependent on it.

Scenario 2: Single server with VM exposed

A single server hosting two VMs, one to host a NAS along with a few internal services running in containers, and another to host publicly exposed websites. Each website could have its own container inside the VM for added isolation, with a reverse proxy container managing traffic.

For networking, I typically see two main options:

  • Option A: Completely isolate the "public-facing" VM from the internal network by using a dedicated NIC in passthrough mode for the VM;
  • Option B: Use a switch to deliver two VLANs to the host—one for the internal network and one for public internet access. In this scenario, the host would have two VLAN-tagged interfaces (e.g., eth0.X) and bridge one of them with the "public" VM’s network interface. Here’s a diagram for reference: https://ibb.co/PTkQVBF

In the second option, a firewall would run inside the "public" VM to drop all inbound traffic except HTTP. The host would simply act as a bridge and would not participate in the network in any way.
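
A rough sketch of what Option B looks like on the host with iproute2 (interface names and VLAN IDs are examples):

```
# eth0 carries two VLANs from the switch: 10 = internal LAN, 20 = public.
# Tagged sub-interface for the public VLAN:
ip link add link eth0 name eth0.20 type vlan id 20

# Bridge for the "public" VM's virtual NIC. The host assigns no IP address
# to it, so it only forwards frames and doesn't participate in that network.
ip link add br-public type bridge
ip link set eth0.20 master br-public
ip link set eth0.20 up
ip link set br-public up
```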

Scenario 3: Exposed VM on a Windows/Linux Desktop Host

Windows/Linux desktop machine that runs KVM/VirtualBox/VMware to host a VM that is directly exposed to the internet with its own public IP assigned by the ISP. In this setup, a dedicated NIC would be passed through to the VM for isolation.
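
The passthrough part (here and in Option A above) would look roughly like this in a libvirt domain definition; the PCI address is an example and has to match the actual NIC:

```
<!-- Hand the dedicated NIC (example address 0000:03:00.0) straight to the
     exposed VM, bypassing the host's network stack entirely. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```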

The host OS would be used as a personal desktop and contain sensitive information.

Scenario 4: Dual-Boot Between Desktop and Server

A dual-boot setup where the user switches between an OS for daily usage and another for hosting stuff when needed (with a public IP assigned by the ISP). The machine would have a single Ethernet interface, and the user would manually switch the network cable between: a) the router (NAT/internal network) when running the "personal" OS and b) a direct connection to the switch (and ISP) when running the "public/hosting" OS.

For increased security, each OS would be installed on a separate NVMe drive, and the "personal" one would use TPM-backed full disk encryption to protect sensitive data in case the "public/hosting" system were compromised.

The theory here is that, if properly done, the TPM doesn't release the keys to decrypt the "personal" OS disk when the user is booted into the "public/hosting" OS.
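
A minimal sketch of that sealing with systemd-cryptenroll (device path and PCR choice are examples; PCR 4 measures the boot loader, so booting the "public/hosting" OS yields different values and the TPM refuses to release the key):

```
# Enroll a TPM2-sealed key for the "personal" LUKS partition (path is an example).
# Binding to PCRs 0+4+7 ties the key to the firmware, boot loader and
# Secure Boot state measured during boot.
systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=0+4+7 /dev/nvme0n1p2
```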

People also seem to combine both scenarios with Cloudflare tunnels or reverse proxies on a cheap VPS.


What's your approach / paranoia level :D

Do you think using separate physical machines is really the only sensible way to go? How likely do you think VM escape attacks and VLAN hopping or other networking-based attacks are?

Let's discuss how secure these setups are, what pitfalls one should watch out for on each one, and what considerations need to be addressed.

 


cross-posted from: https://lemmy.world/post/21563379

Hello,

I'm looking for a high resolution image of the PAL cover from the Dreamcast (I believe).

There was this website, covergalaxy, that used to have it in 2382x2382, but all the content seems to be gone. Here's the cache: https://ibb.co/nRMhjgw. The Internet Archive doesn't have it.

Much appreciated!

 


The most severe restrictions to the general public are imposed within a 20-mile (32 km) radius of the Green Bank Observatory.[5] The Observatory actively polices the area for devices emitting excessive electromagnetic radiation, such as microwave ovens, Wi-Fi access points and faulty electrical equipment, and requests that citizens discontinue their usage. It does not have enforcement power[6] (although the FCC can impose a fine of $50 on violators[7]), but will work with residents to find solutions.

 

As a middle ground, we could implement a solution for the bottom tier: small to medium sites and blogs. These sites don’t necessarily need a full-fledged MySQL database.

SQLite seems to be the perfect fit:

  • It is the most widely used database worldwide
  • It is cross-platform and can run on any device
  • It is included by default on all PHP installations (unless explicitly disabled)
  • WordPress’s minimum requirements would be a simple PHP server, without the need for a separate database server.
  • SQLite support enables lower hosting costs, decreases energy consumption, and lowers performance costs on lower-end servers.
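
On the third point, a quick way to verify that on a given host (Debian and most distros ship it enabled):

```
# Check that PHP was built with SQLite support
php -m | grep -i sqlite    # expect: pdo_sqlite, sqlite3
```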

What would the benefits of SQLite be?

Officially supporting SQLite in WordPress could have many benefits. Some notable ones would include:

  • Increased performance on lower-end servers and environments.
  • Potential for WordPress growth in markets where we did not have access due to the system’s requirements.
  • Potential for growth in the hosting market using installation “scenarios”.
  • Reduced energy consumption – increased sustainability for the WordPress project.
  • Further WordPress’s mission to “democratize publishing” for everyone.
  • Easier to contribute to WordPress – download the files and run the built-in PHP server without any other setup required.
  • Easier to use automated tests suite.
  • Sites can be “portable” and self-contained.
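
The "easier to contribute" point above is almost literally two commands; with SQLite there's no database server to stand up first (path is an example):

```
# Serve a WordPress checkout with PHP's built-in web server — no MySQL needed
php -S 127.0.0.1:8080 -t /path/to/wordpress
```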

Source and other links:

-98
submitted 7 months ago* (last edited 7 months ago) by TCB13@lemmy.world to c/linux@lemmy.ml
 

New GNOME dialog on the right:

Apple's dialog:

They say GNOME isn't a copy of macOS, but with time it has been getting really close. I don't think this is a bad thing; however, they should just admit it and then put some real effort into cloning macOS instead of the crap they're making right now.

Here's the thing: in Apple's design you'll find that they carefully included an extra margin between the "Don't Save" and "Cancel" buttons. This avoids accidental clicks on the wrong button, so that people don't lose their work when they just want to click "Cancel".

So much for the GNOME vision and their expert usability team :P

 

Hi,

Is there anyone using Amcrest IP4M-1041B with Home Assistant? I've a few questions about software and integration.

  1. From what I hear, this camera can be set up 100% offline, connected via cable to any computer, using its built-in WebUI. Is this true?

  2. It offers pan, tilt and zoom. Does that work well with HA? Can it be operated without any Amcrest software / internet connection?

  3. The features above allow you to set preset locations; can that be done in HA / the WebUI / without the Amcrest app as well?

  4. Does it really operate all features offline, and is it reliable? E.g. does motion detection work as expected / not miss events?

  5. What's your overall experience with the camera? How does it compare to, let's say, a TP-Link Tapo?
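
For context, this is how I'd expect to wire it into Home Assistant's stock Amcrest integration, which talks to the camera directly on the LAN (host and credentials below are placeholders):

```
# configuration.yaml — built-in Amcrest integration (LAN-only, no cloud)
amcrest:
  - host: 192.168.1.50
    username: admin
    password: !secret amcrest_password
```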

Thank you.

 

cross-posted from: https://lemmy.world/post/14398634

Unfortunately I was proven to be right about Riley Testut. He's yet another greedy person, barely better than Apple. After bitching to Apple to remove GBA4iOS from the App Store, he's now leveraging Delta to force people into his AltStore.

Delta has finally made its way to the App Store. Additionally, the Delta developer has also published their alternative marketplace, AltStore, in the EU today.

If you're in the EU you'll only be able to get Delta on the AltStore and that requires:

This is complete bullshit; he could've just launched Delta on the App Store in Europe as well, but he decided not to.

Thanks Riley Testut for being a dick to the people that actually forced Apple into allowing alternative app stores in the first place.


Github issue related to this dick move: https://github.com/rileytestut/Delta/issues/292

 


Here's my take:

The domain aftermarket has a big problem... it exists. This market shouldn't ever have been allowed to exist in the first place. ICANN should've blocked this bullshit a long time ago and forced registrars to just let domains expire and free up the namespace. Also, add a few provisions about unused domain names and about selling them.
