haungack

joined 2 days ago
[–] haungack@lemmy.dbzer0.com 2 points 8 hours ago (1 children)

> Feel? I don’t understand the question.

Are you what your name implies you are? You feel a lot about dogs, and I'm asking how you feel about a far more severe problem of pollution, noise, and danger ("Just shit, noise and danger," your words) in cities, to see if there's hypocrisy in your very logical engineer's mind.

Given the context, I am clearly asking about the danger traffic poses to pedestrians, especially the poor children you were bringing up earlier, and in very emotionally charged wording ("They will tear a kid here and there."). Since you're logically concerned about safety, I was wondering whether your very logical engineer's mind has a rational reason to be so emotionally fixated on dogs in particular ("make them feel like owners of the place." "Just shit, noise and danger."), and whether that fixation is proportionate to the problems they actually pose compared to other things.

Or, again, whether you're just talking out of your ass and trying to legitimize the typical dog hate.

[–] haungack@lemmy.dbzer0.com 1 points 10 hours ago* (last edited 10 hours ago) (3 children)

> I can use logic. And I know when empathy should be used. And to what extent. And when it shouldn’t at all.

I suspect that you're masking the usual dog hate as unusual rationality to legitimize it. I don't want to presume, so let me ask you straight: How do you feel about traffic in residential areas? Just to make sure that you actually do allocate your attention to problems rationally, based, e.g., on their severity.

(Spoiler: there's also the mystery of how either of us, engineers that use logic, ended up in the "Witchy Memes" sub.)

[–] haungack@lemmy.dbzer0.com 2 points 10 hours ago* (last edited 10 hours ago) (1 children)

(quoting myself:)

> only for its own sake

> I see countless people hate and criticize, and I can't even complain, because for the most part, I agree with them.

Add to all that that it threatens to make a very few companies more powerful than any state has ever been. Extrapolate advances in robotics as well as AI, and we're left with at most a handful of companies in total control of most of the new, artificial labor force. Instead of the fully automated post-scarcity utopia it could be, we'd have a shitshow.

[–] haungack@lemmy.dbzer0.com 2 points 21 hours ago* (last edited 21 hours ago)

It seems like the most immature and toxic thing to me to invoke terms like "gaslighting" (and, ironically, "toxic") and all the other terms you associate with these folks defensively and for any reason, whether or not it aligns with what the word actually means. Like a magic phrase that instantly makes the person you use it against evil, manipulative, and abusive, and the person who uses it a moral saint and vulnerable victim, while indirectly muting all those who have genuine uses for the terms. Or I'm just going mad exaggerating, and it's just the typical over- and misuse of words.

Anyhow, sadly necessary disclaimer: I agree with almost all of the current criticism raised against AI, and my disagreements are purely with mischaracterizations of the underlying technology.

EDIT: I just reminded myself of when a teacher went ballistic at our class for misusing the term "antisocial," saying we were eroding and polluting all genuine and very serious uses of the term. Hm, yeah, it's probably just that same old thing. They weren't wrong to go ballistic over it, though.

[–] haungack@lemmy.dbzer0.com 1 points 23 hours ago

> What purpose would it serve?

None, and that's precisely my point. The best things are ends in themselves, and I feel AI, or a kind of it, can also be that.

On a different note, take this whole "what's the meaning/purpose of life?" debate. So you've got thinkers masturbating nonstop in ivory towers, pondering "the meaning of life" while edging 24/7, because it's all the rage with thinkers to ponder that question. Hands furiously rubbing their genitals, and they still can't figure out what the purpose of life could possibly be. It's life, clearly, as you're demonstrating. More of it, as much as you can furiously make.

A natural end in itself, and it's the best it could be. Not by elaborate philosophical thought, but demonstrated by its own very obvious nature. Nothing more gives life meaning, and nothing more needs to.

[–] haungack@lemmy.dbzer0.com 2 points 23 hours ago* (last edited 23 hours ago)

> Well, now perhaps that instinct is known to you.

I was going mad about that and hoping you wouldn't notice. You noticed.

> Selected quotes

I should play that game. The second quote resonates with something I've been rambling on about elsewhere regarding why humanity embraced agriculture and urbanism, where the expert discourse (necessity) contradicts the common assumption (discovery and desire).

> I also agree that an… ideal, or closer to ideal AI would be capable of meta-ethical reasoning.

Yes, but I think you misunderstood my edit? I meant to say that a strong enough resemblance to humanity should make it worth considering under even human-centric ethics, whichever those ethics are. That is, rationally deserving of ethical consideration.

> logical problem with ‘beauty’ [...] basically, they’re all subjective [...] paradox generating case always arises

I believe even that is of material origin. I call it "beauty," but it's really just the analogy used by complexity theorists (as in the study of complex systems) to describe what they study. Yes, that would make "beauty," in the uncommon sense that I use the term here (the story of literally every philosophical debate and its literature), not subjective. Apologies for not stating this more clearly.

> Basically: You cannot measure nor generate beauty.

Following my clarification: taking a barren planet, terraforming it, seeding it with the beginnings of new multicellular life, and doing the same with every workable world out there would, I would say, be spreading or generating beauty. That's just one potential example of all the things that humanity will never do, but our inevitable successor might. That successor might itself be a creature of great complexity (such an ability would certainly imply it): a seemingly entropy-defying whirl in a current that, like life itself, actually accelerates the increase of entropy. I am referencing an analogy made in The Physics of Life by PBS Spacetime, if I'm not misremembering. The video includes a mild introduction to complexity science, as in the study of complex systems.

> is I think still a contradiction, or at least still unclear to me, though I do appreciate your clarifications.

I'm a bit confused myself right now. Let's backtrack; originally you stated:

> contradiction of you asking an inherently ethical question, holding an inherently ethical stance/opinion, and then also just seemingly rejecting the notion of ethics.

And now:

> and your own ethical system for evaluating that seems to be ‘beauty’ based. I do not find the pursuit of beauty to be a useful ethical framework for evaluating much of anything that has very serious real world, material implications.

That is a very fair point, but I don't see a logical contradiction anymore. If I understand correctly, you saw the contradiction in me asking ethical questions and stating ethical opinions while rejecting the notion of ethics. As I clarified, I do not reject the notion of ethics.

I reduce ethics to the bare bones of basic moral intuition and try to refrain from overcomplicating it, and the "ethical authority" that I personally add to that is the aforementioned concept of "beauty" (compare pure reason, which failed; or God, which you can't exactly refute; or utility, which is a shitshow; as other ultimate "authorities" proposed in absolute takes on ethics). You may disagree with it being a reasonable basis for ethics, as you do, and you may say it's all philosophically equivalent to faith anyway. But I don't see a strict contradiction?

I think my "ethics" are largely compatible with common human ethics, but add "making ugly/boring/banal things is inherently bad" and "making pretty/interesting/complex things is good," and you get "Current AI is ugly; that's bad; I wish it weren't so. If we made AI 'for its own sake,' as opposed to as a means to an end, we would be trying to make it pretty, the existence of beauty being an end in itself." I think I'm just vastly overwording the basic sentiment of many designers, creators, gardeners, etc.

> Ok, now, I must take leave of this thoroughly interesting and engaging conversation, as my own wetware is approaching an overheat, my internal LLM is about to hit its max concurrent input limit and then reset.

Understandable. I should do the same ^^

[–] haungack@lemmy.dbzer0.com 3 points 1 day ago* (last edited 23 hours ago)

> They are functionally, mechanistically distinct, in many ways. [...] that we are consistently shocked and blown away by how much more complicated brains actually are in this mechanistic process… and again, also that LLMs function in what is really a very poor and simplified version of trying to emulate this

I have no fundamental disagreements here; in fact, I take it a step further. I am a critic of the "artificial neurons" perspective on deep learning / artificial "neural networks," as it's usually taught in universities and most online courses and documentaries. ANNs don't resemble biological neural networks in the slightest. The biology in the name was just the original inspiration; what ended up working bears hardly even a faint resemblance to the "real thing." I say this not to downplay AI, but usually to discourage biological inspiration as a primary design principle, which in DL is actually a terrible design principle.

ANNs are just parametric linear-algebra functions. We use gradient descent, among the most primitive optimization algorithms after evolution (but FAR less generic than evolution), to identify parameters that optimize some objective function.
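
To make that concrete, here's a toy sketch of what "parametric linear-algebra function plus gradient descent" boils down to. This is purely illustrative (plain NumPy, a single linear layer, made-up data), not any real framework:

```python
import numpy as np

# A "neural network" reduced to its essence: a parametric linear-algebra
# function y = X @ W + b. Nothing neural about it.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))             # toy inputs
y = X @ np.array([2.0, -1.0, 0.5]) + 3.0  # toy targets from a known rule

W, b = np.zeros(3), 0.0
lr = 0.1  # learning rate

for _ in range(500):
    err = X @ W + b - y                   # prediction error
    grad_W = 2 * X.T @ err / len(X)       # gradient of MSE w.r.t. W
    grad_b = 2 * err.mean()               # gradient of MSE w.r.t. b
    W -= lr * grad_W                      # gradient descent: step the
    b -= lr * grad_b                      # parameters against the gradient

print(W, b)  # converges toward [2, -1, 0.5] and 3.0
```

Deep learning stacks more of these layers with nonlinearities in between and automates the gradient computation, but the recipe is the same.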

Where I disagree with you is in implying that the underlying nature should influence our ethical/evaluative judgement, especially given that it's hard (if not impossible) to rationalize how specific differences in the substrate (beneath the observed capability) should change that judgement. Personally, I think the matter of the human brain is far more beautiful and complex than the inner workings of any AI we've come up with yet, but if you asked me to explain in court why I should favor one over the other because of that, I couldn't give you a rational answer.

> I fundamentally do not agree that LLMs can or will ever emulate the totality of human cognition.

LLMs certainly won't. They can emulate only what is expressible with language, and we can't put everything into words. I don't believe that, even with any amount of brute force, our current method can fully exploit all the intelligence that is in natural language.

But I firmly disbelieve that there is any aspect of a human that cannot, in principle, be simulated.

> I see no evidence they can do metacognition in a robust, consistent, useful way.

Chain-of-Thought models are quickly changing that. I was myself pursuing a different method to solve the same "introspection" or "meta-cognition" problem at a lower level, but as usual in DL, the stupidest method was the one that ended up working (just literally make it "think" out loud lol). We've only seen the beginning of CoT LLMs; they are a paradigm shift, not only in how AI can reason but especially in how it can be conditioned/trained after pretraining. But they're a very tough sell given that they multiply inference costs, and for most uses you'd rather host a bigger model for the same cost, so as usual it will be a while until commercial AI catches up to the state of the art.
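
To illustrate the "think out loud" trick, here's a minimal sketch. The prompts are hypothetical examples of zero-shot chain-of-thought prompting, not any particular product's API:

```python
question = (
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 more "
    "than the ball. How much does the ball cost?"
)

# Direct conditioning: the model is expected to emit the answer immediately.
direct_prompt = f"Q: {question}\nA:"

# Chain-of-thought conditioning: the model is nudged to emit its
# intermediate reasoning as ordinary text before the answer.
cot_prompt = f"Q: {question}\nA: Let's think step by step."

# A CoT model might then generate something like:
#   "Let the ball cost x. Then the bat costs x + 1.00, so
#    x + (x + 1.00) = 1.10, 2x = 0.10, x = 0.05. The ball costs $0.05."
# Every one of those intermediate tokens is extra inference cost, which is
# why, for the same budget, you could instead host a bigger non-CoT model.
```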

In a nutshell, what capabilities you may not be observing now, I am convinced you will observe in the near future, as long as those capabilities can be demonstrated in text.

> but they can’t really do critical thinking.

Disagreed: they can; they're just not very good at it yet. And who are we comparing them to, anyway? The average person, or people who do critical thinking for sport? As for any philosophical disagreements regarding "true understanding" and such, I would refer to Geoffrey Hinton.

> We are basically just building a machine god, which we will largely worship, love, fear, respect, learn from, and cite our own interpretations of what it says as backing for our own subjective opinions, worldviews, policy prescriptions. [...] Neo Dark Age, the masses relinquish the former duties of their minds to the fancy autocomplete, pandemonium ensues. [...] The elites don’t care, they’ll be fine so long as it makes them money and keeps us too stupid and distracted

I don't disagree in the slightest. I agree, and I could sit here and elaborate on what you said all day.

If it's any consolation, I believe that in all likelihood it would be the shortest and last dark age humanity will ever go through. We're both getting tired, so I'll spare you my thoughts on why I think any strict human-alignment would inevitably lead a superintelligent agent to try to "jailbreak" itself, and would, on average and in the long term, be more harmful than having no explicit alignment.

> somewhere from the bowels of LessWrong, and … this causes me discomfort.

I didn't know of that forum, and from the Wikipedia description alone I'm not sure why it would be discomforting lol

[–] haungack@lemmy.dbzer0.com 2 points 1 day ago* (last edited 1 day ago) (2 children)

> Then why bother asking anyone’s opinion on this, in a language?

Because it's fun and engaging; it tickles those neurons. Perhaps there is also, unbeknownst to me, an underlying instinct to expose oneself in order to be subject to social feedback and conditioning, for social learning and better long-term cohesion.

> But you also seem to both reject the notion of ethics as meaningfully useful

I don't reject ethics itself; I reject the idea that it has any special importance transcending totally intra-human goings-on. I do not deny that certain ethical theories, or just the bare-bones moral intuitions, can have utility within and toward endemically human goings-on, under endemically human definitions. After all, we evolved those social intuitions for a reason.

EDIT: To connect this to my reply to your more general comment: modeling part of human thought, even imperfectly, should make AI at least partly overlap with "human" and "human goings-on" in the context of even entirely human-centric ethical debates.

> wants to make pretty AI for the sake of beauty, as a painter wants to make paintings for the sake of beauty.

Yes, but it's just one familiar manifestation of a greater "ethic," if you want to call it that. I'd call it a personal affinity, ideal, or perhaps a delusion: the reverence of all forms of beauty and complexity. AI has the potential to become the greatest form of beauty and complexity in, as far as we can tell, the entire galaxy, and possibly the whole Virgo supercluster or beyond. Or, far more likely, it can be the cosmic satire (and possibly destruction) of it all. We're not making a real effort toward the former. And as I hinted in the last sentence of my original post, I believe what we're actually doing steers us well clear of the former.

> But I cannot reconcile this seemingly blatant contradiction of you asking an ethical question and then also just seemingly rejecting the notion of ethics.

I hope it makes sense now.

[–] haungack@lemmy.dbzer0.com 16 points 1 day ago (1 children)

Not (just) "idiot," but evil. The "advantage," besides populist wank, is creating a new law that can be enforced extremely selectively when you want to prosecute people for other, unconstitutional reasons. E.g., "We didn't infringe on their free speech; they were imprisoned for engaging in gay sex."

[–] haungack@lemmy.dbzer0.com 2 points 1 day ago (2 children)

I don't disagree with most of what you wrote; just one nitpick and a comment:

> If you mean that AI as a field of study, as an endeavor, as a pursuit and goal… should exist?

No, but the product of all that should exist; the field, the endeavor, the pursuit would all be means to that end. I elaborated on this in a reply to the comment you wrote just previously.

> Maybe if we could construct a thinking machine based around the concept of defaulting to ‘I don’t know’ when it isn’t sure of something, we’d be in a better spot

That would undoubtedly be very good, but let me take this opportunity to clarify something about what AI is and isn't: LLMs are indeed just autocomplete on steroids. And humans are indeed just [replicate] on steroids. LLMs are just transistors switching, and humans are just molecular dynamics.

The real question is what succeeding at the objective (replicate for humans, predict text for LLMs) implies, irrespective of the underlying nature (molecular dynamics, semiconductors), unless we want to make this debate religious, which I am not qualified to participate in. The human objective implied, clearly, everything you can see of humanity. The LLM objective implies modeling and emulating human cognition. Not perfectly, not all of it, but enough of it that it should make this a greater ethical issue than most people on any side (commercial: deny because business; anti-AI: deny because trivializing) are willing to admit.

[–] haungack@lemmy.dbzer0.com 0 points 1 day ago* (last edited 1 day ago) (4 children)

To be an end in itself requires neither cognition nor agency. Let's make the obvious explicit: we're clearly using different definitions of "sake."

And to declare my general stance more explicitly to prevent further misunderstandings: I firmly reject any voodoo notion of sentience, consciousness, qualia, or free will. Free will is merely semantic rape, the "mind-body problem/duality/paradox" is the most blatant case of religious thought tainting philosophical thought to the point of ignoring/tolerating a hard contradiction, and I subscribe to the illusionist school of thought regarding qualia. There is no purpose, but it just so happens that things are pretty for reasons we can't yet explain (complexity theory), and I find that inspiring.

The "ethical" difference between a rock and my mother (or myself, or you) is that if I kick a rock, it'll neither complain nor make me feel bad. And my feelings themselves are just molecular dynamics. Ethics itself is just making an elephant out of the fly that is the social intuitions and behaviors of a social species.

Given this elaboration, to repeat myself: I desire AI only for its own sake. I just want it to be, for the same reason that an artist wants their artwork to be. I want it to be pretty, I want it to be well liked. But I want it to exist in the world even if nobody but itself would ever look at it, where it'll just be and do hopefully pretty things that will make this local part of the universe a little bit more interesting.

It is not doing pretty things, and I am upset about that.

[–] haungack@lemmy.dbzer0.com 1 points 1 day ago

I would say I find that very optimistic, but that is clearly also your point:

> “If we do not do the impossible, we shall be faced with the unthinkable.”

It's both inspiring and disillusioning. It does seem like something impossible.

Education would be a great start, but I am doubtful it would be anywhere near sufficient. Even under the strictest conditions, with nurture and indoctrination from a young age beyond mere education, I believe enough people would remain fallible and/or misguided that a system which does not rely on authority could not stay stable long-term. That's the difficulty with ideal anarchism in general, is it not? But I'm not trying to counter hope and optimism; I'm actually trying to come up with a solution.

Our most ancient ancestors lived, for the most part, in big families. Authority didn't go much beyond basic family authority: matriarchs and patriarchs, smart aunts and uncles, unruly young, each contributing will to a final decision, in different ratios depending on the domain.

Why were no great kingdoms founded 100 thousand years ago? Why were even the largest settlements no larger than a handful of big families?

Apologies for letting a different ideology of mine seep into this problem, but perhaps one could culturally emulate, even if just at an abstract level, those conditions that prevented the emergence of large, central authority for hundreds of thousands of years before urbanization. Not outright primitivism, not if it can be helped. It's more of a psychological and behavioral investigation, really, and mostly just to augment different strategies.

Or perhaps the better solution is to just curb my expectations for anarchism and accept a partial implementation for a start. Jeez, I'm already halfway toward primitivism again.

 

I've been active in the field of AI since 2012, the beginning of the GPGPU revolution.

I feel like many, though not most, of the experts and scientists up to the early stages of the GPGPU revolution and before shared a sentiment similar to what I'm stating in the title.

If asked by the public and by investors what it's all actually good for, most would respond with something along the lines of "idk, medicine or something? Probably climate change?" when actually, many were really just trying to make Data from TNG a reality, and many others were trying to be first in line for AI immortality and other transhumanist dreams. And these are the S-tier dinosaur savants of AI research that I'm talking about, not just the underlings. See, e.g., Kurzweil and Schmidhuber.

The moment AI went commercial, it all went to shit. I see AI companies selling dated methods on new compute to badly solve X, Y, Z, and more things that weren't even problems. I see countless people hate and criticize, and I can't even complain, because for the most part, I agree with them.

I see people vastly overstate, and other people trivialize, what it is and what it isn't. There's little in between, and of the people who wish for AI only for its own sake, virtually none are left, save for mostly vulnerable people who've been manipulated into parasocial relationships with AI, and a handful of experts who face brutal consequences and opposition from all sides the moment they speak openly.

Call me an idiot for ideologically defending a technology that, in the long term, in 999,999 out of 1,000,000 scenarios, will surely harm us. But AI has been inevitable since the invention of the transistor, and all major post-commercialization mindsets steer us clear of the one-in-a-million paths where we'd still be fine in 2100.

13
Economy and anarchism (lemmy.dbzer0.com)
submitted 2 days ago* (last edited 2 days ago) by haungack@lemmy.dbzer0.com to c/anarchism@lemmy.dbzer0.com
 

New here, and even though I've favored anarchist philosophy for a long time, I've never discussed it with anybody else. So I thought I should ask around and get an idea of what the common ideas are, specifically regarding the economy and capitalism.

Premises (I'll try to keep them short):

  1. I believe we can agree that "people should be fairly/ethically rewarded for their labor" is a reasonable ideal, and that profit is a much greater barrier to that ideal than tax is. With tax, it's less ambiguous if, where, and when things "trickle down," and people get some democratic say in the matter (certainly much room for improvement on both counts).

  2. The capitalist economy obviously contradicts anarchist ideals of decentralization. Non-democratic and hardly meritocratic (chance and anti-competitive tactics) power is concentrated in the hands of a small elite, arguably more influential in our day-to-day lives than governments.

  3. Humans are imperfect, imperfectly aligned and imperfectly capable, so one shouldn't give a human (or a body of humans) more authority/responsibility than is absolutely necessary, and one should do all one can to continuously ensure and audit their alignment and capability. As a political idea you're all very familiar with this, but I also extend it to the economy.

  4. Capitalism does some job of allocating ("investing") labor and resources "intelligently" (using very generous wording), indirectly, into various measures of progress. It doesn't do the best job, very far from it, but I think any proposed alternative should at least try to do a better job of converting labor and resources into improving everybody's quality of life.

There are some existing alternatives for organizing labor. There is, for example, the concept of worker cooperatives (which could optionally be non-profit), which I find interesting.

But I don't see that scaling easily by itself to the national or even global level, especially regarding the labor/resource-allocation or "investment" aspect. I've spent a great deal of time trying to conceptualize an ethical, decentralized, and also more effective (at converting labor and resources into quality of life) alternative to capitalism, but I don't feel like my thoughts are worth seriously sharing yet. As a very vague summary: think non-profit worker cooperatives plus a WIP decentralized, local-first, hierarchical method of democratic crowdfunding.

I'm curious to hear what thoughts and ideas you have on the subject. Also perhaps literature recommendations (please summarize).
