antimidas

joined 2 years ago
[–] antimidas@sopuli.xyz 1 points 2 weeks ago

You select the active hob and set the desired timer. Usually the timer is limited to a single hob at a time; in more premium alternatives you might be able to set one for each hob individually.

I've only ever seen this in separate cooktops though, not in stoves.

[–] antimidas@sopuli.xyz 17 points 1 month ago (1 children)

The cursed Linux alternative to this is usually putting things directly in the home folder – I used to do this until I learned better. The desktop is simple to keep clean when your "desktop environment" doesn't have one by default.

Some people who used Mac OS before OS X dump everything into the root filesystem out of habit. It works about as poorly as a file-management strategy as one might expect, albeit better than putting everything on the desktop. Not sure how common that is, but I've known multiple people who do it.

[–] antimidas@sopuli.xyz 6 points 2 months ago* (last edited 2 months ago)

And this is because audiophiles don't understand why the audio master is 96 kHz, or more often 192 kHz. You actually can easily hear the difference between 48, 96 and 192 kHz signals, but not in the way people usually think, and not after the audio has been recorded – because the main difference is latency when recording and editing. Digital sound processing works in terms of samples, and a certain number of them have to be buffered to transform the signal between the time and frequency domains. The higher the sample rate, the less time a fixed-size buffer spans, and if there's one thing humans are good at hearing (relatively speaking), it's latency.

Digital instruments start being usable at 96 kHz and above, as the latency with a 256-sample buffer gets short enough that there's no distracting delay from key press to sound. 192 kHz gives you more room to add effects and such, making a longer pipeline possible. A higher sample rate also makes changing frequencies, like bringing the pitch down, simpler, as there's more data to work with.
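To put rough numbers on that (a sketch only – actual buffer sizes and total round-trip latency depend on the interface and driver):

```python
def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    """Time one audio buffer spans, in milliseconds."""
    return buffer_samples / sample_rate_hz * 1000

# The same 256-sample buffer spans less time as the sample rate goes up:
for rate in (48_000, 96_000, 192_000):
    print(f"{rate:>7} Hz: {buffer_latency_ms(256, rate):.2f} ms per buffer")
```

At 48 kHz a 256-sample buffer spans about 5.3 ms; at 96 kHz it drops to about 2.7 ms, and at 192 kHz to about 1.3 ms – which is why higher rates feel more responsive while playing, even though they sound identical on playback.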

But after the editing is done, there's absolutely no reason not to cut the published recording down to 48 or 44.1 kHz. By the Nyquist theorem those rates already capture everything up to 24 and 22.05 kHz respectively, above the limit of human hearing, so our ears can't tell the difference – and whatever equipment you're using will probably refuse to play anything above 25 kHz anyway, as e.g. the speaker coils aren't designed to pass higher-frequency signals. It's not like visual information, where equipment still can't match the dynamic range of the eye and we're only just reaching pixel densities where we can no longer see a difference in DPI.
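The arithmetic behind that claim, as a minimal sketch (the 20 kHz hearing limit is the usual rough figure for adults):

```python
def nyquist_hz(sample_rate_hz: int) -> float:
    # Highest frequency a given sample rate can represent without aliasing
    return sample_rate_hz / 2

HEARING_LIMIT_HZ = 20_000  # approximate upper bound of human hearing

for rate in (44_100, 48_000):
    covered = nyquist_hz(rate) >= HEARING_LIMIT_HZ
    print(f"{rate} Hz sampling captures up to {nyquist_hz(rate):.0f} Hz "
          f"(covers audible band: {covered})")
```

Both standard distribution rates clear the audible band with headroom to spare, which is the whole argument for downsampling the final master.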

[–] antimidas@sopuli.xyz 12 points 2 months ago (1 children)

There's an overabundance of competent-ish frontend developers. You most likely need to pay the devs less, compared to someone writing it with e.g. C++, and finding people with relevant experience takes less time. You also get things like a ready-made sandbox and the ability to re-use UI components from other web services, which simplifies application development. So my guess is that this is done to save money.

Also, the more things run in an embedded browser, the more reasons M$ has to bake Edge into the OS without raising eyebrows about why it's the default (look, it's a system tool as well, not just a browser).

[–] antimidas@sopuli.xyz 1 points 2 months ago

Per-text and per-minute plans were the norm here for a long time; I had one until the mid-2010s, IIRC. A single text cost something like 0.069 €. Parents kept their kids from overspending with prepaid plans, which were the norm for elementary students. In Europe people typically don't pay to receive calls, so your parents could still call you even if you ran out of phone credit.

We got unlimited data plans before widespread unlimited texting, which meant people mostly stopped texting by the early 2010s. I remember my phone plan getting unlimited 3G in 2010 for 0.99 €/month (approx. 1.40 $ back then), albeit slow AF (256 kbps). Most switched to e.g. Kik, or later WhatsApp, after that.

[–] antimidas@sopuli.xyz 1 points 2 months ago (2 children)

Probably varies a lot based on where you grew up. I got my first phone when I was 9, in 2006, and was among the last in my class to get one. Phone plans were really cheap by then in Finland, though, partially due to the largest phone manufacturer at the time, Nokia, being Finnish, and our telecom operators being in tight competition. (We have three separate carriers with countrywide networks, as was the case back in the early 2000s as well.)

I'd say the turning point here was 2003 when Nokia launched the model 1100, which was dirt cheap. I vaguely remember the price eventually falling as low as 19 € in a sale, at which point the phone cost about the same as your typical phone plan per month.

[–] antimidas@sopuli.xyz 2 points 2 months ago

Yep, that's a bit of a sketchy thing, and probably indeed has to do with marketing and getting more funding. Overhyping their quantum stuff might also have something to do with them trying to hide the poor image of their latest AI "achievements".

But I'm mainly worried all these companies crying wolf will cause people in relevant fields to push back on implementing quantum-proof encryption – multiple companies are making considerable progress with quantum computing and it's not a threat to be ignored.

[–] antimidas@sopuli.xyz 6 points 2 months ago (2 children)

There's still noticeable incremental progress, and now that liboqs is out and the first somewhat quantum-proof algorithms have working initial implementations, I see no reason why you wouldn't move to a hybrid solution, just in case – especially with more sensitive data like communications, healthcare and banking.

Just combine the current asymmetric stuff with oqs, e.g. X25519 key exchange alongside ML-KEM. That way you keep the classical layer of security on top of the oqs implementation in case there are growing pains – the library hasn't yet passed audits and is yet to be fully peer-reviewed.
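The core idea of a hybrid scheme can be sketched with the stdlib alone: derive the session key from both shared secrets, so an attacker has to break both schemes. This is a toy illustration, not liboqs usage – `hybrid_shared_secret` and the `\x01`/`\x02` stand-in secrets are made up for the example; in practice the two inputs would come from a real X25519 exchange and an ML-KEM encapsulation:

```python
import hashlib
import hmac

def hkdf_sha256(ikm: bytes, salt: bytes = b"", info: bytes = b"",
                length: int = 32) -> bytes:
    # Minimal HKDF (RFC 5869) extract-and-expand with SHA-256
    prk = hmac.new(salt or b"\x00" * 32, ikm, hashlib.sha256).digest()
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

def hybrid_shared_secret(ss_classical: bytes, ss_pq: bytes) -> bytes:
    # Feed both KEM outputs into one KDF: breaking only one scheme
    # reveals nothing about the final key
    return hkdf_sha256(ss_classical + ss_pq, info=b"hybrid-kem-demo")

# Stand-ins for the real X25519 and ML-KEM shared secrets:
session_key = hybrid_shared_secret(b"\x01" * 32, b"\x02" * 32)
print(len(session_key), "bytes")
```

Real deployments (e.g. the hybrid key exchanges already shipping in TLS) follow the same shape: two independent secrets in, one combined key out.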

Cryptography has to stay unbreakable for multiple decades, and the added headroom is a small price to pay for future security. Health data, for example, can still affect a person 30 years later, so we have a responsibility to ensure it can't be accessed without authorization even that far in the future. No one can guarantee that, but we should at least make our best effort.

Have we really not gotten past collectively shooting ourselves in the foot with poor security planning? Even AWS was allowing SHA-1 signatures for authentication as recently as 2014, over a decade after it was deemed insecure. Considering how poorly people do key management, it's feasible to expect there are old AWS-style requests out there with still-working keys waiting to be brute-forced.

No, we don't have working quantum computers that threaten encryption now. Yes, it is indeed feasible this technology matures in the next 30 years, and that's the assumption we need to work with.

[–] antimidas@sopuli.xyz 3 points 2 months ago* (last edited 2 months ago)

Not sure about the rest of Fennoscandia, but at least Finland has multiple large co-ops. One of the largest banks, OP (the name literally means "co-op bank"), is a co-op that many people own a part of. Many of my friends are members.

Also, Finland's largest retail conglomerate (with a 48.3 % market share of retail in Finland) is a consumer co-op, which creates a very difficult situation for every other retail business, as it can undercut practically everyone thanks to a weaker profit incentive. 2.4 million people have a membership, which is quite sizable in a country of under 6 million (though I'm not sure if that number includes Estonian members as well).

 

Turns out it was just some sham poo

[–] antimidas@sopuli.xyz 5 points 4 months ago

Yep – it's actually possible IRL. Unfortunately real life is a bit more boring than The Simpsons, and the tomatoes themselves will only have trace amounts of nicotine in them. Most of it will be in the leaves, like in the tobacco plant.

[–] antimidas@sopuli.xyz 8 points 4 months ago (3 children)

Also tobacco and chilis. Nightshades got range.

So many different plants being genetically close-ish makes it possible to do all kinds of fun experiments with grafting – things like tomatoes and eggplants that contain nicotine (though still mostly in the leaves), or potato plants that also grow habanero peppers.

 

'cause he was UN-professional

 

Housing is something people need – a necessity, like food or electricity. It takes a lot of money and constant attention to keep in livable shape, and will lose its value if just left alone. As such it's not an investment, unless the market isn't working like it's supposed to.

The long period of "low inflation" after the 2008 housing crisis happened partly because we didn't count housing prices as part of inflation – if housing getting more expensive had been taken into account, we should never have had such a long period of low interest rates. If rents going up counts as inflation, appreciation should too.

As such, housing getting more expensive should be considered a bad thing, as it leads people to mistakenly see it as an investment. People will then "protect" their investment by trying to block new projects and so on. Nobody would get angry that bread is cheaper today just because they already bought a loaf yesterday.

EDIT: apparently I've been a bit misinformed. I'm not from the US but from the EU (Finland), and had understood that our indices don't really include owner-occupied housing in the calculation, only direct costs like energy and rent with some weighting – which was at least partly the case, but some changes seem to be coming. Thanks for the enlightening replies, I'll have to read a bit more into it.
