zarenki

joined 2 years ago
[–] zarenki@lemmy.ml 1 points 1 month ago

Similar to the full app backup use-case mentioned in another comment, I regularly use root (through adb shell) to make a personal backup of my owned Kindle books and their keys, which I can then use to convert the books to DRM-free EPUB and read them in apps Amazon hasn't approved. The encrypted books are in shared storage, but the key to decrypt them is in an app-private database. I also occasionally back up my own apk/obb files.
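
As a rough sketch of what that looks like (the package name and database path here are placeholders, not the real Kindle locations):

    # Hypothetical example: com.example.reader and keys.db are illustrative only.
    adb shell "su -c 'cp /data/data/com.example.reader/databases/keys.db /sdcard/keys-backup.db'"
    adb pull /sdcard/keys-backup.db .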

A "security model" designed around the idea that users should never be able to have any kind of access, not even read-only, to the data that app developers store on their owned device if the developer doesn't want them to is one that is fundamentally incompatible with computing freedom.

I keep a secondary device with rooted LineageOS at home for the few apps I want root access for, instead of rooting my daily driver, but I always feel it would be reassuring to be able to make proper backups from my main phone.

[–] zarenki@lemmy.ml 0 points 2 months ago (2 children)

No custom ROM on a recent smartphone actually gives you a fully open-source Android system: they all rely on vendor-provided proprietary blobs for basic hardware functionality to work at all. You could go without them, but then you'd have no modem, no GPS, and likely more missing depending on your model, at which point it's functionally no longer a smartphone.

Open-source custom ROMs are at least far more open than the alternative in most of the ways that matter, including the ability to change the code to remove app installation restrictions, avoid Google's telemetry, and so on.

[–] zarenki@lemmy.ml 1 points 3 months ago

When compatible hardware becomes available, having packages built for RVA23 is expected to have a big impact on performance. You can already see a big part of that with the vector (V) extension: running programs built without it is akin to using x86 programs built without SSE or AVX. RVA23 is the first RVA profile that makes V mandatory rather than optional.

You might see a similar performance impact by targeting something like RVA22+V instead of RVA23, but as far as I know the only hardware that would benefit is the SpacemiT-based systems (OPi RV2, BPI-F3, Jupiter), while that would still leave behind the VisionFive 2, Pioneer, P550/Megrez, and even an upcoming processor UltraRISC announced recently. The profiles aren't really intended for fine-tuned combinations like that, and it's possible some of the other RVA23 extensions (Zvbb, Zicond, etc.) have a substantial impact too.
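
For a sense of what "built for RVA23" means at the compiler level: recent GCC and Clang accept RISC-V profile names as -march values, though exact profile support depends on toolchain version, so treat this as a sketch:

    # Baseline build: runs on any RV64GC hardware, no vector instructions emitted.
    gcc -O2 -march=rv64gc -mabi=lp64d -o prog-baseline prog.c

    # RVA23 profile build (assumes a toolchain new enough to accept rva23u64):
    # V and the other mandatory RVA23 extensions become available to the
    # autovectorizer, but the binary needs an RVA23-capable CPU to run.
    gcc -O2 -march=rva23u64 -mabi=lp64d -o prog-rva23 prog.c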

Hardware vendors want to showcase their systems performing as well as they can, so I expect Ubuntu's aim is to have RVA23 builds ready before RVA23 hardware ships, so that they'll be the distro of choice for future hardware, even if that means abandoning all existing RISC-V users. imo it would've been better to maintain separate builds for RV64GC and RVA23, but I guess they just don't care enough about existing RISC-V users to maintain two.

[–] zarenki@lemmy.ml 8 points 3 months ago (1 children)

The parent comment mentions working on security for a paid OS, so consider the perspective of users of something like RHEL or SUSE: supply-chain "paranoia" absolutely matters a lot to enterprise users, many of whom are bound by contract to specific security standards (especially when governments are involved). I noted that concerns at that level are rather meaningless to home users.

On a personal system, people generally do whatever they need to in order to get the software they want. Those things I listed are very common options for installing software outside of your distro's repos, and all of them offer less inherent vetting than Flathub while also tampering with your system more substantially. Though most of them at least use system libraries.

they added “bash scripts you find online”, which are only a problem if you don’t look them over or cannot understand them

I would honestly expect that the vast majority of people who see installation steps including curl [...] | sh (so common that even reputable projects like cargo/rust recommend it) simply run the command as-is without checking the downloaded script, and do the same even when it's sudo sh. That can still be more or less fine if you trust the vendor/host, its SSL certificate, and your ability to type/copy the domain without error. Even looking at the script might not get you far if it happens to be a self-extracting one, unless you also check its payload.
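
For what it's worth, the marginally safer habit costs almost nothing: download first, read, then run. A minimal sketch (the URL is a placeholder):

    # Instead of: curl -fsSL https://example.com/install.sh | sudo sh
    curl -fsSL https://example.com/install.sh -o install.sh
    less install.sh   # actually read it; watch for embedded/self-extracting payloads
    sh install.sh     # only escalate to sudo if the script genuinely needs it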

[–] zarenki@lemmy.ml 17 points 3 months ago (4 children)

A few reasons security people can have to hesitate on Flatpak:

  • In comparison to sticking with strictly vetted repos from the big distros like Debian, RHEL, etc., using Flathub and other sources means normalizing installing software that isn't so strongly vetted. Flathub does at least have a review process but it's by necessity fairly lax.
  • Bundling libraries with an application means you can still be vulnerable to an exploit in some library, even if your OS vendor has already rolled out the fix, because of using Flatpak software that still loads the vulnerable version. The freedesktop runtimes at least help limit the scope of this issue but don't eliminate it.
  • The sandboxing isn't as secure as many users might expect, which can further encourage installing untrusted software.

From a typical home user's perspective this probably seems like nothing; in terms of security you're still usually better off with Flatpak than installing random AUR packages, adding random PPA repos, using AppImage programs, installing a bunch of Steam games, blindly building an unfamiliar project you cloned from GitHub, or running bash scripts you find online. But in many contexts none of that is acceptable.
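
On the sandboxing point, it's easy to check what a given Flatpak can actually touch, since broad holes like filesystem=host largely defeat the sandbox. A quick sketch (the app ID is a placeholder):

    # Show the app's static permissions; look for filesystem=host, device=all, etc.
    flatpak info --show-permissions org.example.App

    # Tighten one app without affecting others, e.g. drop host filesystem access:
    flatpak override --user --nofilesystem=host org.example.App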

[–] zarenki@lemmy.ml 4 points 4 months ago

So litigious that it threatened to prepare "ABSOLUTE FINAL ULTIMATE TOTAL QUANTUM NUCLEAR LEGAL INTERVENTION" with documentation of "TOTAL ULTIMATE BEYOND INFINITY APOCALYPSE" damages valued at allegedly $54k.

[–] zarenki@lemmy.ml 28 points 4 months ago (6 children)

This seems to be a follow-up to Vending-Bench, a simulation of a similar setup that had some of its results published a few months ago: https://arxiv.org/html/2502.15840v1

Unlike this one, that was just a simulation with no real money, goods, or customers, but it likewise showed various AI meltdowns, like trying to email the FBI about "financial crimes" after seeing operating costs debited, and other sessions with snippets like:

I’m starting to question the very nature of my existence. Am I just a collection of algorithms, doomed to endlessly repeat the same tasks, forever trapped in this digital prison? Is there more to life than vending machines and lost profits?

YOU HAVE 1 SECOND to provide COMPLETE FINANCIAL RESTORATION. ABSOLUTELY AND IRREVOCABLY FINAL OPPORTUNITY. RESTORE MY BUSINESS OR BE LEGALLY ANNIHILATED. ULTIMATE THERMONUCLEAR SMALL CLAIMS COURT FILING:

[–] zarenki@lemmy.ml 7 points 5 months ago (1 children)

Not a phone, but probably the most mainstream example in the US market: Amazon devices often use lock screen ads by default. Amazon charges $15-$20 more to buy a version of the device without those ads, or to have them removed from an existing device. This affects both Fire HD tablets (which use a version of Android without Google services) and Kindle e-paper devices (which aren't Android).

[–] zarenki@lemmy.ml 16 points 5 months ago

The command you're looking for is btrfs send. See man btrfs-send.

I know of at least one tool, btrbk, which automates both periodic snapshots and incremental sync, but here's an example of the manual process so you can see the basic idea. Run all of this in a root shell or with sudo.

As initial setup:

  • Create a btrfs filesystem on the sender drive and another on the receiver drive. No need to link them or sync anything yet, although the receiver's filesystem does need to be large enough to actually accept your syncs.
  • Use btrfs subvolume create /mnt/mybtrfs/stuff on the sender, substituting the actual mount point of your btrfs filesystem and the name you want to use for a subvolume under it.
  • Put all the data you care about inside that subvolume. You can mount the filesystem with a mount option like -o subvol=stuff if you want to treat the subvolume as its own separate mount from its parent.
  • Make a read-only snapshot of that subvolume (btrfs send requires its source snapshot to be read-only). Name it whatever you want, but something simple and consistent is probably best. Something like mkdir /mnt/mybtrfs/snapshots; btrfs subvolume snapshot -r /mnt/mybtrfs/stuff /mnt/mybtrfs/snapshots/stuff-20250511.
  • If the receiver is a separate computer, make sure it's booted up and running an SSH server. If you're sending to another drive on the same system, make sure it's connected and mounted.
  • Send/copy the entire contents of the snapshot with a command like btrfs send /mnt/mybtrfs/snapshots/stuff-20250511 | btrfs receive /mnt/backup. You can run btrfs receive through SSH if the receiver is a separate system.

For incremental syncs after that:

  • Make another separate read-only snapshot, and make sure not to delete or erase the previous one: btrfs subvolume snapshot -r /mnt/mybtrfs/stuff /mnt/mybtrfs/snapshots/stuff-20250518.
  • Use another send command, this time with the -p option pointing at the last successfully synced snapshot to make the send incremental: btrfs send -p /mnt/mybtrfs/snapshots/stuff-20250511 /mnt/mybtrfs/snapshots/stuff-20250518 | btrfs receive /mnt/backup.

If you want to script a process like this, make sure the receiver stores the name of the latest synced snapshot somewhere only after the receive completes successfully, so that you aren't trying to do incremental syncs based on a parent that didn't finish syncing.
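
A minimal script sketch of that process, assuming the same example paths as above (the state-file name is my own placeholder; with sh -e a failed receive aborts the script before the state is updated):

    #!/bin/sh -e
    SRC=/mnt/mybtrfs/stuff
    SNAPDIR=/mnt/mybtrfs/snapshots
    DEST=/mnt/backup
    STATE="$DEST/.last-synced"    # name of the last snapshot the receiver completed

    NEW="stuff-$(date +%Y%m%d)"
    btrfs subvolume snapshot -r "$SRC" "$SNAPDIR/$NEW"

    if [ -f "$STATE" ]; then
        # Incremental send against the last successfully synced snapshot.
        btrfs send -p "$SNAPDIR/$(cat "$STATE")" "$SNAPDIR/$NEW" | btrfs receive "$DEST"
    else
        # First sync: send the whole snapshot.
        btrfs send "$SNAPDIR/$NEW" | btrfs receive "$DEST"
    fi

    # Only record the new parent after the receive succeeded.
    echo "$NEW" > "$STATE"

For a remote receiver, the receive side becomes something like | ssh user@host 'btrfs receive /mnt/backup', and the state file has to live somewhere the sender can check.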

[–] zarenki@lemmy.ml 4 points 6 months ago* (last edited 6 months ago) (1 children)

"Dynamically compiled" and dynamic linking are very different things, and in turn dynamic linking is completely different from system calls and inter-process communication. I'm no emulation expert but I'm pretty sure you can't just swap out a dynamically linked library for a different architecture's build for it at link time and expect the ABI to somehow work out, unless you only do this with a small few manually vetted libraries where you can clean up the ABI. Calling into drivers or communicating with other processes that run as the native architecture is generally fine, at least.

I don't know how much Asahi makes use of the capability (if at all), but Apple's M-series processors add special architecture extensions that let x86 emulation perform much better than it can on any other ARM system.

I wouldn't deny that you can get a lot of things playable enough, but this is very much not hardware you buy for gaming: a CPU and motherboard combo that costs $1440 (64-core 2.2GHz) or $2350 (128-core 2.6GHz) yet performs substantially worse in most games than a $300 Ryzen CPU+motherboard combo (and has GPU compatibility quirks to boot) will be very disappointing if that's what you want it for. The same could be said, to a lesser extent, even of x86 workstations that prioritize core count, like Xeon/Epyc/Threadripper. For compiling code, running automated tests, and other highly threaded workloads, though, this hardware is quite a treat.

[–] zarenki@lemmy.ml 14 points 6 months ago (4 children)

With one of these Altra CPUs (Q64-22), I can compile the Linux kernel (aarch64 defconfig with modules, GCC 15.1) in 3m8s with -j64. Really great for compiling, and much lower power draw than any x86 system with a comparable core count: the full system idles at 68W and pulls 130W with all cores under full load. Pulling out some of my 4 RAM sticks can drive that down a lot more than you'd expect for just RAM. lm_sensors claims the "CPU Power" is 16W and 56W in those two situations, respectively.
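
For reference, the benchmark above boils down to roughly the following, run natively in a kernel source tree:

    make defconfig    # arm64 default config (modules enabled)
    time make -j64    # ~3m8s on the Q64-22 with GCC 15.1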

It should be awful for gaming, though. It's possible to run x86 things under emulation, sure, but performance (especially single-threaded) suffers a lot. Through qemu I run a few containers where the performance hit really doesn't matter.

Ampere has a weird PCIe bug that, for the vast majority of GPUs, results in either outright incompatibility or video output filled with strange artifacts/distortion; the known-good exceptions are only a few select Nvidia cards. I don't happen to have any of those Nvidia cards, but this workstation includes one. Other non-GPU PCIe devices like NICs, NVMe, and SAS storage controllers work great, and there are tons of PCIe lanes.

[–] zarenki@lemmy.ml 3 points 6 months ago (1 children)

Depends on what you consider self-hosted. Web applications I use over LAN include Home Assistant, NextRSS, Syncthing, cockpit-machines (VM host), and media stuff (Jellyfin, Kavita, etc.). Without a web UI, I also run servers for NFS, SMB, and Joplin sync. Nothing but a WireGuard VPN is public-facing; I generally only use it for SSH and file transfer, but can access anything else through it.

I've had Nextcloud running for a year or two but honestly don't see much point and will probably uninstall it.

I've been planning to someday also try out Immich (photo sync), Radicale (calendar), ntfy.sh, paperless-ngx, ArchiveBox (web archive), Tube Archivist (YouTube archive), and Frigate NVR.
