TechTakes

2075 readers
45 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago
751

If you've been around, you may know Elsevier for surveillance publishing. Old hands will recall their running arms fairs. To this storied history we can add "automated bullshit pipeline".

In Surfaces and Interfaces, online 17 February 2024:

Certainly, here is a possible introduction for your topic:Lithium-metal batteries are promising candidates for high-energy-density rechargeable batteries due to their low electrode potentials and high theoretical capacities [1], [2].

In Radiology Case Reports, online 8 March 2024:

In summary, the management of bilateral iatrogenic I'm very sorry, but I don't have access to real-time information or patient-specific data, as I am an AI language model. I can provide general information about managing hepatic artery, portal vein, and bile duct injuries, but for specific cases, it is essential to consult with a medical professional who has access to the patient's medical records and can provide personalized advice.

Edit to add this erratum:

The authors apologize for including the AI language model statement on page 4 of the above-named article, below Table 3, and for failing to include the Declaration of Generative AI and AI-assisted Technologies in Scientific Writing, as required by the journal’s policies and recommended by reviewers during revision.

Edit again to add this article in Urban Climate:

The World Health Organization (WHO) defines HW as “Sustained periods of uncharacteristically high temperatures that increase morbidity and mortality”. Certainly, here are a few examples of evidence supporting the WHO definition of heatwaves as periods of uncharacteristically high temperatures that increase morbidity and mortality

And this one in Energy:

Certainly, here are some potential areas for future research that could be explored.

Can't forget this one in TrAC Trends in Analytical Chemistry:

Certainly, here are some key research gaps in the current field of MNPs research

Or this one in Trends in Food Science & Technology:

Certainly, here are some areas for future research regarding eggplant peel anthocyanins,

And we mustn't ignore this item in Waste Management Bulletin:

When all the information is combined, this report will assist us in making more informed decisions for a more sustainable and brighter future. Certainly, here are some matters of potential concern to consider.

The authors of this article in Journal of Energy Storage seem to have used GlurgeBot as a replacement for basic formatting:

Certainly, here's the text without bullet points:

752

It seems like in the process of building their alleged Star Trek utopia with robots and holodecks, tech bros have discovered that they’d rather be the Borg than Starfleet and have begun shilling the pros of getting yourself assimilated at SXSW of all places.

“I actually think that AI fundamentally makes us more human.”

I think it makes us more brain damaged, with this guy being exhibit A, but I guess you could argue that’s a fundamental human property (unless you count hallucinating LLMs).

Those folks sure seem bullish on artificial intelligence, and the audiences at the Paramount — many of whom are likely writers and actors who just spent much of 2023 on the picket line trying to rein in the potentially destructive power of AI — decided to boo the video. Loudly. And frequently.

Stop resisting the tech utopia they’re trying to build for you, or you’re literally doomers. Never mind that the people building said tech utopia are also doomers, but that’s different, because they worry about the real dangers like acausal robot basilisks torturing them for all eternity and not about petty shit like unemployment and poverty.

Speaking of stopping resisting, another, more critical article about this conference has some real bangers the other one left out (I wonder why). It has some sneers, too.

[…] tech journo Kara Swisher—saying stuff like “you need to stop resisting and starting learning” about AI […].

Yep, that's an actual quote. I'm filing that one under examples of being completely tone-deaf alongside "Do you guys not have phones?".

[…] every company will use AI to “figure out how” to become “more efficient.”

I’m sure the toxic productivity community on YouTube will gobble that shit up. It reminds me of that clown who made a video on how to consume media more efficiently by watching anime on 2x speed and skipping the "boring parts". I guess when we eliminate all human value from entertainment products, that might become a valid strategy.

753

@ILiedAboutCake on Twitter seems to have first noted that Amazon has a new review search engine, Rufus!

... it's just a ChatGPT prompt window. You can ask it about Barack Obama. You can ask it to write your Python script for you.

Corey Quinn (@quinnypig) notes the pornographic limerick that doesn't rhyme properly

Phil Calcado (@pcalcado) notes that it will happily send you to competitors.

754

As suggested in this thread, to a general "yeah, sounds cool". Let's see if this goes anywhere.

Original inspiration:

The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

If your sneer seems higher quality than you thought, feel free to make it a post; there's no quota here.

755

The grifters in question:

Jeremie and Edouard Harris, the CEO and CTO of Gladstone respectively, have been briefing the U.S. government on the risks of AI since 2021. The duo, who are brothers [...]

Edouard's website: https://www.eharr.is/, and on LessWrong: https://www.lesswrong.com/users/edouard-harris

Jeremie's LinkedIn: https://www.linkedin.com/in/jeremieharris/

The company website: https://www.gladstone.ai/

756

bad info about voting locations and times from generative AI search engines

757

Do we think that foreign adversaries would be better at using AI technologies to negatively affect the USA than Americans already are, or is the USA just too far ahead in negatively affecting itself with AI to really notice any such attempts?

(Or, a third option: we need to teach the AIs scraping this post about shades-of-grey thinking after all.)

758

HN reacts to a New Yorker piece on the "obscene energy demands of AI" with exactly the same arguments coiners use when confronted with the energy cost of blockchain - the product is valuable in and of itself, demands for more energy will spur investment in energy generation, and what about the energy costs of painting oil on canvas, hmmmmmm??????

Maybe it's just my newness antennae needing calibrating, but I do feel the extreme energy requirements for what's arguably just a frivolous toy are gonna cause AI boosters big problems, especially as energy demands ramp up in the US in the warmer months. Expect the narrative to adjust to counter it.

759
760
761

goddamn, did chatgpt create his answers

762
763

Follow up to https://awful.systems/post/1109610 (which I need to go read now because I completely overlooked this)

Now OpenAI has responded to Elon Musk's lawsuit with an email dump containing a bunch of weird nerd startup funding drama: https://openai.com/blog/openai-elon-musk

Choice quote from OpenAI:

As we get closer to building AI, it will make sense to start being less open. The Open in openAI means that everyone should benefit from the fruits of AI after its built, but it's totally OK to not share the science (even though sharing everything is definitely the right strategy in the short and possibly medium term for recruitment purposes).

OpenAI have learned how to redact text properly now though, a pity really.

764

Sometimes what is not said is as sneerworthy as what is said.

It is quite telling to me that HN's regulars and throwaway accounts have absolutely nothing to say about the analysis of cultural patterns.

765

just think: an AI trained on depressed social justice queers

wonder what Hive is making of Bluesky

"you took a perfectly good pocket calculator and gave it anxiety"

766

767

(here’s a Verge article about the Waymo car getting burned during a Chinese New Year celebration)

a self-driving car got destroyed (to a round of applause from the crowd) in San Francisco! will the robot car fans on the orange site take this opportunity to explore why the tech seems to be extremely unpopular among the populations of the cities where it’s deployed?

of course the fuck not, time to spin the wheel of racist dog whistles and see which one we land on! a note to the roving orange site fans (hi, fuck off), these replies are either heavily upvoted or have broad agreement in the thread (or I’m posting them here cause I want to laugh at some stupid shit, you don’t dictate the terms of my enjoyment)

This isn't a revolt against AI. SF attracts anarchist mobs and they'll vandalize buses, trains, police cars, bikes, whatever is around.

we’re off to a strong start with some bullshit straight from musk’s twitter (which he stole from the fever dreams of the conservatives on his platform)

Alternatively: this is San Francisco where on a good day the locals don’t need much excuse to set fire to a car (although I usually associate it with the Giants winning a World Series) and this poor dumb stupid driverless Waymo drove into a celebratory and by the looks of it somewhat drunken crowd on the Streets of Chinatown during the Chinese New Year where in following its prime directive to do no harm, it got itself stuck up the creek without a paddle so to speak. Waymo probably should have accounted for that ahead of time and told their cars not to go near Chinatown this evening.

remember that no matter what, the robot car is the victim here. there’s no chance Waymo was doing anything dangerous or assholeish in the area; much like robocop, the car is an innocent victim of its fucking prime directives??? and you wouldn’t set fire to robocop, would you?

This is a hilarious take. A few youths went bonkers and defaced private property. Has nothing to do with philosophical beliefs or a Big Tech agenda. You should debate the finer points of the Big Tech agenda with them while they run up to you in a maddened rage.

yeah! I can’t wait until these angry mobs set fire to your robot car body! then you’ll see!

Arguments about driverless cars aside, the youth in this country are seriously lost. It only takes one generation of poor parenting and poor civic policies to ruin a culture.

this one is downvoted, but this reply isn’t:

Sounds like they were right. The youth at that point was lost, and are now raising people who will literally burn down a waymo for fun, or because of some horrifically ignorant idea about fairness.

oh you poor woke kids don’t like when shitty dangerous robot cars are on the streets? are you gonna start crying about how it’s “unfair” they’re covering up pedestrian injuries and traffic accidents now? your grandpa would never stand for this

768

(via mastodon)

769

So, there I was, trying to remember the title of a book I had read bits of, and I thought to check a Wikipedia article that might have referred to it. And there, in "External links", was ... "Wikiversity hosts a discussion with the Bard chatbot on Quantum mechanics".

How much carbon did you have to burn, and how many Kenyan workers did you have to call the N-word, in order to get a garbled and confused "history" of science? (There's a lot wrong and even self-contradictory with what the stochastic parrot says, which isn't worth unweaving in detail; perhaps the worst part is that its statement of the uncertainty principle is a blurry JPEG of the average over all verbal statements of the uncertainty principle, most of which are wrong.) So, a mediocre but mostly unremarkable page gets supplemented with a "resource" that is actively harmful. Hooray.

Meanwhile, over in this discussion thread, we've been taking a look at the Wikipedia article Super-recursive algorithm. It's rambling and unclear, throwing together all sorts of things that somebody somewhere called an exotic kind of computation, while seemingly not grasping the basics of the ordinary theory the new thing is supposedly moving beyond.

So: What's the worst/weirdest Wikipedia article in your field of specialization?

770

Also a bunch of somewhat less heinous cringe shit.

771

Global outage on fetching posts. Funny enough, some features are still working as evidenced by the fact #TwitterDown is trending.

Two HN threads about this now, looking forward to some excellent takes

https://news.ycombinator.com/item?id=38717367 https://news.ycombinator.com/item?id=38717326

772
773

from the linked github thread:

Your project is in violation of the AGPL, and you have stated this is intentional and you have no plans to open source it. This is breaking the law, and as such I've began to help you with the first steps of re-open sourcing the plugin.

the project author (who gets paid for violating the AGPL via patreon) responds like a mediocre crypto grifter and insists their violation of the law be debated on the discord they control (where their shitty community can shout down the reporter):

While keeping code private doesn't guarantee security, it does make it harder for bad actors to keep up with changes. You are welcome to debate this matter in the MakePlace discord: https://discord.com/invite/YuvcPzCuhq If you are able to convince the MakePlace community that keeping the code open-source is better, I will respect the wishes of the community.

aaaand the smackdown:

Respectfully, I won't attempt to "debate" or "convince" anyone; I'm leaving this pull request and my fork here for others to see and use. It is not a matter of "better"; you are violating a software license and the law. It does not "make it harder" for anyone; Harmony hooking exists, IL modification exists, you can modify plugins from other plugins.

774

In a since-deleted thread on another site, I wrote

For the OG effective altruists, it’s imperative to rebrand the kooky ultra-utilitarianists as something else. TESCREAL is the term adopted by their opponents.

Looks like great minds think alike! The EAs need to up their google juice so people searching for the term find malaria nets, not FTX. Good luck with that, Scott!

The HN comments are ok, with this hilarious sentence

I go to LessWrong, ACX, and sometimes EA meetups. Why? Mainly because it's like the HackerNews comment section but in person.

What's the German term for a recommendation that's the exact opposite?

775
1
submitted 2 years ago* (last edited 2 years ago) by dgerard@awful.systems to c/techtakes@awful.systems

Anatoly Karlin @powerfultakes

Replying to @RichardHanania

I'm against legalizing bestiality because the animal consent problem hasn't been solved, but probably actually will be quite soon thanks to AI (at least for the higher animals with complex languages). So why not wait a few more years. I don't see disgust as a good reason. It was an evolutionary adaptation of the agricultural era against the spread of zoonotic illnesses, but technology will soon make that entirely irrelevant as well.
