• 0 Posts
  • 15 Comments
Joined 2 years ago
Cake day: July 9th, 2023



  • As mentioned in my post, in response to people falling for the naturalistic fallacy: “So what? Who gives a shit?”

    Whether it’s natural or not is simply the wrong metric by which to evaluate whether someone has a right to exist or be treated with dignity.

    It’s akin to someone saying to you after you’ve dyed your hair, “that’s not natural,” and then you scramble to insist that it is.

    The right response is: “So what? Who gives a shit?”

    Also: how do you read this and think I’m anything but an ally? I’m explicitly advocating for compassion, dignity, and equal rights for trans people. Pushing back on bad reasoning doesn’t contradict that; it strengthens it.

    If your definition of “ally” means I’m required to accept weak arguments without criticism, then you don’t want allies. You want sycophants. And I’m not signing up for that.

    I’m not interested in moral purity contests where allyship is contingent on uncritical agreement.


  • I’m going to be that guy, and no, this isn’t a gotcha. I’m a trans ally. I support the existence, rights, and dignity of trans people. But I’m allergic to lazy thinking, even from my own side.

    “Trans people are natural.” Cool sentiment. Terrible framing.

    First off, “natural” is a word people use when they’ve run out of real arguments. It’s vague, emotionally loaded, and epistemologically useless.

    Plenty of things are “natural”: cancer, infanticide, parasites, sexual coercion. Doesn’t make them desirable. Doesn’t make them moral. If you want to make a moral case for something, do it without the crutch of nature.

    Second, let’s talk about optics. When you say “trans people are natural,” you’re not helping. You’re feeding into the exact framework used against queer and trans people for decades: the idea that something has to be “natural” to be valid.

    Why are we reinforcing that standard? Why are we bending over backwards to find a species of fish that flips sexes and pretending that proves anything about human gender identity?

    Transgender identity is not “natural” in the biological sense. There’s no mammalian precedent for someone born male socially transitioning to live as female with a nuanced internal experience of gender. That’s not how “natural” animal behavior works. But so what? Who gives a shit?

    Being trans is a human phenomenon, one that emerges from consciousness, culture, language, and self-reflection. You know, all the “unnatural” stuff that makes humans interesting. The wheel isn’t natural. The internet isn’t natural. Civil rights aren’t natural.

    Trans people don’t need to be validated by nature. They need to be validated by ethics. By compassion. By rational moral reasoning.

    So let’s stop appealing to nature. It’s weak, it’s misleading, and it sets the movement back by anchoring it to bad philosophy.


  • You’re not actually disagreeing with me; you’re just restating that the process is fallible. No argument there. All reasoning models are fallible, including humans. The difference is, LLMs are consistently fallible, in ways that can be measured, improved, and debugged (unlike humans, who are wildly inconsistent, emotionally reactive, and prone to motivated reasoning).

    Also, the fact that LLMs are “trained on tools like logic and discourse” isn’t a weakness. That’s how any system, including humans, learns to reason. We don’t emerge from the womb with innate logic, we absorb it from language, culture, and experience. You’re applying a double standard: fallibility invalidates the LLM, but not the human brain? Come on.

    And your appeal to “fuck around and find out” isn’t a disqualifier; it’s an opportunity. LLMs already assist in experiment design, hypothesis testing, and even simulating edge cases. They don’t run the scientific method independently (yet), but they absolutely enhance it.

    So again: no one’s saying LLMs are perfect. The claim is they’re useful in evaluating truth claims, often more so than unaided human intuition. The fact that you’ve encountered hallucinations doesn’t negate that; it just proves the tool has limits, like every tool. The difference is, this one keeps getting better.

    Edit: I’m not describing a “reasoning model” layered on top of an LLM. I’m describing what a large language model is and does at its core. Reasoning emerges from the statistical training on language patterns. It’s not a separate tool it uses, and it’s not “trained on logic and discourse” as external modules. Logic and discourse are simply part of the training data, meaning they’re embedded into the weights through gradient descent, not bolted on as tools.
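
    To make the “embedded into the weights” point concrete, here’s a toy sketch of next-token training in PyTorch. The one-sentence corpus, model size, and hyperparameters are illustrative assumptions of mine, nothing like GPT’s actual setup, but the mechanism is the one I’m describing: cross-entropy on next-token prediction, minimized by gradient descent over the weights.

        # Toy sketch only: a miniature next-token model. "Logic" in the corpus is
        # learned the same way as any other text: by gradient descent on the weights.
        import torch
        import torch.nn as nn

        corpus = "if all humans are mortal and socrates is a human then socrates is mortal"
        vocab = sorted(set(corpus.split()))
        stoi = {w: i for i, w in enumerate(vocab)}
        tokens = torch.tensor([stoi[w] for w in corpus.split()])

        class TinyLM(nn.Module):
            def __init__(self, vocab_size, dim=32):
                super().__init__()
                self.embed = nn.Embedding(vocab_size, dim)
                self.head = nn.Linear(dim, vocab_size)

            def forward(self, idx):
                return self.head(self.embed(idx))  # logits over possible next tokens

        model = TinyLM(len(vocab))
        opt = torch.optim.Adam(model.parameters(), lr=1e-2)

        for _ in range(200):
            logits = model(tokens[:-1])                            # predict token t+1 from token t
            loss = nn.functional.cross_entropy(logits, tokens[1:])
            opt.zero_grad()
            loss.backward()
            opt.step()                                             # gradient descent updates the weights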


  • No, I’m specifically describing what an LLM is. It’s a statistical model trained on token sequences to generate contextually appropriate outputs. That’s not “tools it uses”; that is the model. When I said it pattern-matches reasoning and identifies contradictions, I wasn’t talking about external plug-ins or retrieval tools; I meant the LLM’s own internal learned representation of language, logic, and discourse.

    You’re drawing a false distinction. When GPT flags contradictions, weighs claims, or mirrors structured reasoning, it’s not outsourcing that to some other tool; it’s doing what it was trained to do. It doesn’t need to understand truth like a human to model the structure of truthful argumentation, especially if the prompt constrains it toward epistemic rigor.

    Now, if you’re talking about things like code execution, search, or retrieval-augmented generation, then sure, those are tools it can use. But none of that was part of my argument. The ability to track coherence, cite counterexamples, or spot logical fallacies is all within the base LLM. That’s just weights and training.
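
    If it helps, this is roughly what “just weights and training” looks like in practice. The sketch below assumes the Hugging Face transformers library and the small public gpt2 checkpoint as stand-ins (my choice for illustration, not the specific model we’re arguing about); the next-token scores come straight out of the network, with no external tools in the loop.

        # Sketch: next-token probabilities read directly off a plain causal LM.
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tok = AutoTokenizer.from_pretrained("gpt2")
        model = AutoModelForCausalLM.from_pretrained("gpt2")

        prompt = "All men are mortal. Socrates is a man. Therefore, Socrates is"
        inputs = tok(prompt, return_tensors="pt")

        with torch.no_grad():
            logits = model(**inputs).logits[0, -1]   # scores for the next token only

        probs = torch.softmax(logits, dim=-1)
        top = torch.topk(probs, 5)
        for p, i in zip(top.values, top.indices):
            # the most likely continuations, produced by nothing but the weights
            print(f"{tok.decode(i)!r}: {p.item():.3f}")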

    So unless your point is that LLMs aren’t humans, which is obvious and irrelevant, all you’ve done is attack your own straw man.


  • I do understand what an LLM is. It’s a probabilistic model trained on massive corpora to predict the most likely next token given a context window. I know it’s not sentient, doesn’t “think,” and doesn’t have beliefs. That’s not in dispute.

    But none of that disqualifies it from being useful in evaluating truth claims. Evaluating truth isn’t about thinking in the human sense; it’s about pattern-matching valid reasoning, sourcing relevant evidence, and identifying contradictions or unsupported claims. LLMs do that very well, especially when prompted properly.

    Your insistence that this is “dangerous naïveté” confuses two very different things: trusting an LLM blindly, versus leveraging it with informed oversight. I’m not saying GPT magically knows truth, I’m saying it can be used as a tool in a truth-seeking process, just like search engines, logic textbooks, or scientific journals. None of those are conscious either, yet we use them to get closer to truth.
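
    To be concrete about what “informed oversight” means here, below is a rough sketch assuming the official openai Python SDK; the model name and the prompt wording are placeholders I made up. The model only structures the argument into labeled claims, and a human still does the actual verification.

        # Sketch only: the LLM structures the argument, the human checks it.
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        argument = "Crime is up 40% this year, so the new policy clearly failed."

        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder; substitute whatever model you use
            messages=[
                {
                    "role": "system",
                    "content": (
                        "List each factual claim in the user's argument, then label it "
                        "SUPPORTED, UNSUPPORTED, or NEEDS-SOURCE. Do not add claims of your own."
                    ),
                },
                {"role": "user", "content": argument},
            ],
        )

        # The output is a checklist for a human to verify, not a verdict to trust blindly.
        print(response.choices[0].message.content)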

    You’re worried about misuse, and so am I. But claiming the tool is inherently useless because it lacks consciousness is like saying microscopes can’t discover bacteria because they don’t know what they’re looking at.

    So again: if you believe GPT is inherently incapable of aiding in truth evaluation, the burden’s on you to propose a more effective tool that’s publicly accessible, scalable, and consistent. I’ll wait.


  • Right now, the capabilities of LLMs are the worst they’ll ever be. Someone could literally drop an LLM tomorrow that’s perfectly calibrated to evaluate truth claims. But even as things stand, we’re at least 90% of the way there.

    The reason people fall for the untruths of AI is the same reason people hurt themselves with power tools or use a calculator wrong.

    You don’t blame the tool; you blame the user. LLMs are no different. You can prompt GPT to give you bad info intentionally, or lead it into giving you bad info by posting increasingly deranged statements. If you stay coherent, stay well-read, and structure your arguments to the best of your ability, the pool of data GPT pulls from narrows enough to be more useful than anything else I know.

    I’m curious: what do you regard as a better tool for evaluating truth?

    Period.


  • What makes you think humans are better at evaluating truth? Most people can’t even define what they mean by “truth,” let alone apply epistemic rigor. Tweak it a little, and GPT is more consistent and applies reasoning patterns that outperform the average human by miles.

    Epistemology isn’t some mystical art; it’s a structured method for assessing belief and justification, and large models approximate it surprisingly well. Sure, it doesn’t “understand” truth in the human sense, but it does evaluate claims against internalized patterns of logic, evidence, and coherence based on a massive corpus of human discourse. That’s more than most people manage in a Facebook argument.

    So yes, it can evaluate truth. Not perfectly, but often better than the average person.



  • This is the reason I’ve deliberately customized GPT with the following prompts:

    • User expects correction if words or phrases are used incorrectly.

    • Tell it straight—no sugar-coating.

    • Stay skeptical and question things.

    • Keep a forward-thinking mindset.

    • User values deep, rational argumentation.

    • Ensure reasoning is solid and well-supported.

    • User expects brutal honesty.

    • Challenge weak or harmful ideas directly, no holds barred.

    • User prefers directness.

    • Point out flaws and errors immediately, without hesitation.

    • User appreciates when assumptions are challenged.

    • If something lacks support, dig deeper and challenge it.

    I suggest copying these prompts into your own settings if you use GPT or other glorified chatbots.
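
    If you’d rather wire the same thing in through the API instead of the settings UI, here’s a rough sketch assuming the official openai Python SDK; the model name is a placeholder and the system prompt just condenses the directives above.

        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        # Condensed version of the custom-instruction list above.
        system_prompt = (
            "Be blunt and direct; no sugar-coating. Stay skeptical and question things. "
            "Correct misused words or phrases. Point out flaws, weak arguments, and "
            "unsupported assumptions immediately, and insist on solid, well-supported reasoning."
        )

        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder; use whatever model you have access to
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": "Critique this argument: appeals to nature prove nothing about rights."},
            ],
        )
        print(response.choices[0].message.content)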