If you love your mind and your values, don’t get drafted into the next Army of Cartmans.

As a matter of disclosure, I have not read An Army of Davids. The thesis is completely obvious from the title, and my expectation is that, to justify a $25 purchase price, what is at best a thousand-word essay is inflated by a factor of 100 with repetitive interviews from the echo-chamber. Feel free to correct me if you think I’m wrong about this, but life is too short for weblog posts masquerading as books.

But as we saw yesterday, the putative Army of Davids is actually a pathetic mob of self-panicking Eric Cartmans, dumb-ass bullies telling each other the same one dumb-ass joke, over and over again — just like they do at The Daily Kos and Little Green Footballs, which as Karl Marx reminds us is “no accident.”

That’s sad. This is sadder: Yesterday was Patriots’ Day — Lexington/Concord, Waco, Oklahoma City, the Warsaw Ghetto Uprising — the perfect day to talk about the evil of government and how to rid our lives of that evil. So what did Libertarian and Conservative “thought leaders” do instead? They shot spitballs — paper bullets of the brain — at Obama all day.

That’s a waste of their lives, but it’s also a waste of your intellectual capital. The day was lost, and the better points that could have been made were lost. But so much more than that was lost, as we’ll see. I scolded these fools yesterday, and I’m going to explore their errors in fuller detail today. But the important lesson for you, going forward, is to learn how not to squander your own time and character on vile nonsense.

Yesterday, I said this:

1. Saul Alinsky was evil, as are Jon Stewart, Bill Maher, etc. There is no benefit to a self-loving mind in emulating these vile, rhetorically invalid tactics. Breitbart was wrong: Tu quoque is not okay.

2. No matter how much you revile or ridicule your opponent, this will not make you a better person. Two wrongs don’t make a right.

This is me from Man Alive!:

The paths to error are infinite, but two landmarks I have learned to rely on, in listening to people trying to justify their evil actions, are the logical fallacies Tu Quoque and Two Wrongs Make A Right. Tu Quoque is Latin. It means, “You do it, too.” When you catch your teenager swiping a beer, the pre-fabricated rationale will surely be, “Well, you drink, why can’t I?!?” And you were probably very young when you first heard some little proto-brute justifying his vengeance by bellowing, “Well, he hit me first!” – ergo, two wrongs make a right. You should probably be on your guard against any statement that starts with a “well” and ends with an exclamation point. That particular verbal construction seems to fit very comfortably in the mouths of liars and thugs. But when you hear those two logical fallacies being deployed in tandem, what you are hearing, almost certainly, is a cunningly-crafted rationalization of an abominable injustice.

I said this yesterday, too:

Mobbing up is always self-destructive — for every member of the mob.

The means, mode and method of mobbing up are covered in huge depth in the book, so I’m not going to repeat those ideas here. It suffices to say that people only run in mobs when they know in advance that their behavior is morally reprehensible. As above, they will justify — rationalize — that bad behavior by deploying Tu Quoque and Two Wrongs Make A Right, but, as Chapter 7 details, no one pre-fabricates rationales for virtue. Virtue can speak for itself, and true intellectual courage insists on standing fearlessly alone — like the real David.

Take a moment to ruminate, if you would. How would you rather see yourself: As a giggling, guffawing, grandstanding stooge of Eric Cartman, or as a man or woman standing all alone, on the authority of your own mind, for true human justice? Which is a better expression of self-adoration? Which is an expression of self-loathing? Which will leave you better able to defend your mind tomorrow — when the mob turns on you?

Which brings me to the final note from yesterday’s post:

Everything you do that does not advance your objectives retards them: 1 > 0 > –1.

That is what this weblog is all about.

The “logic” of mobbing up like this is absurd. Some people are susceptible to social pressure, but this is not a necessary consequence. I can show you how to withstand any quantity of empty scorn. How? By measuring it for conceptual content. Zero always equals zero, and even if a million zeros are wasting their own lives, they need not have any impact at all on yours. If you learn how to manage this kind of thing — and, alas, I do have some experience at that — you can turn the mob to your own advantage.

I got piled on a little yesterday at FreeRepublic.com, and this is part of a comment I wrote in response:

You are advancing your own interests in no way by piling on me. To the contrary, your behavior is self-destructive, and it hurts me not at all. That’s something for nothing, and every trade you make where you give up your values and get nothing in exchange is a waste of your one, unique, irreplaceable life. That just seems silly to me.

So the first premise behind this “thinking” is false: You cannot cause or change other people’s purposive behavior, no matter how hard you sneer.

The second premise is that taking someone down a peg — in this case President Obama — will be a good thing. But what are the means deployed? The Appeal to Ridicule — a logical fallacy — irrelevancy by massive redundancy. Everyone who already hates Obama snickered. Everyone who loves him scowled. And zero minds were changed, since persuasion can only be effected by valid, rational arguments.

The counter-claim would be that the voters are stupid, anyway, so the only way to move them is with stupidities. This is the way your “thought leaders” really “think,” and they will say so right out loud if you listen to what they actually say, instead of trying to emote in sympathetic synchronicity with their puerile public posturing. If we stipulate the claim, three things fall out:

1. There is nothing that is demonstrably better about the Conservative or Libertarian position. Politics is all just Evil versus Evil.

2. There is no point in making valid arguments about virtue and vice in any case.

3. There is some human value to be found in campaigning to be King of All Idiots.

Those propositions are all stupidly false, of course. And yet those must be the actual beliefs of the “thought leaders.” They would not behave as they do, making the arguments they make, if they had any intellectual confidence in the validity of their positions.

The third premise is a restatement of the fallacies Tu Quoque and Two Wrongs Make A Right: Collectivists are so good at ridiculing Conservatives and Libertarians, our only hope of victory is to get down into the mud with them.

Wow.

How stupid is that?

Now we go to the book:

But when you find yourself among philosophical bullies and their mindless minions, you need to be on your guard. If you are not vigilant, they will try to impose their moral standards on you, and you will find yourself striving – in vain – to defend your arguments, beliefs or behavior according to their putative standard of value. It does not matter that they can neither intellectually defend nor successfully live down to their perverse ethical doctrine. All that will matter to them is inducing you to damn your self on their terms – to apologize to them and to the universe for being a self and for daring to live up to your self. They crave this as a bogus “evidence” of the moral righteousness of their creed, an evidence they would not seek, and would not need to seek, if their dogma were actually true.

The general process – evil people seeking “evidence” of the “truth” of moral philosophies they already know are false – is much too common. The ganging-up on the playground – and in the forum and in the tap-room and in the office and on the internet – is a form of the same madness, social “proof” of claims no one doubts are factually false and morally reprehensible. True intellectual confidence is fearless. If you need for someone to tell you that you are in the right, it’s because you already know you are in the wrong.

Who didn’t see this coming?

What’s funnier is that I told the Libertarians, at least, to catch up with the times.

That’s all one. It doesn’t matter. The world is mine now, if only because no one else is even trying to do the right things.

I don’t care about them. I care about you. Yesterday is gone, and there is only one positive value to be harvested from the past: To learn from your mistakes so you can do better going forward. If you lost time to making and listening to dumb-ass jokes, too bad for you. I want to know what you plan to do that is good for you instead. Splendor is the reward you earn from pursuing your own values. Squalor is all you will reap, in the end, when you get swept up in some dumb-ass “thought leader’s” stupid, ineffectual, time-wasting, character-destroying games.

Can we revisit the questions I asked before?

How would you rather see yourself: As a giggling, guffawing, grandstanding stooge of Eric Cartman, or as a man or woman standing all alone, on the authority of your own mind, for true human justice? Which is a better expression of self-adoration? Which is an expression of self-loathing? Which will leave you better able to defend your mind tomorrow — when the mob turns on you?

Here is one final thought. Yesterday’s pandemic jackassery did not make the quest for human sovereignty easier, it made it harder. The collectivists will be newly energized — and newly empowered to regurgitate the Tu Quoque and Two Wrongs Make A Right fallacies. Tit-for-tat always escalates — and no one is making valid arguments against tyranny. Now think about the “thought leaders” you believed you could trust and depend on. Are they defending your interests, or are they selling you out for a few dumb-ass giggles?

Here’s my take: An employee who can be fired is one you’ll never miss. An employee who should be fired is one who makes your work harder, not easier. You do the math.

This entry was posted in Splendor!.
  • Jim Klein

    Damn, is this right! Enough already pretending that Obama brought us to where we are. Sure, he’s as rotten as anyone would charge, but so what? Saying that he brought us here is like saying that your neighborhood crack dealer caused a crack epidemic.

    Exactly like that, in fact.

  • Greg Swann

    This is me in a comment this morning on Facebook:

    http://www.facebook.com/permalink.php?story_fbid=385801611459837&id=726506052&notif_t=share_comment

    Echoing it here because I don’t want the idea I am emphasizing in bold below to get lost in the mists:

    I’m not sure what your point is, Dan. Tu Quoque is always a logical fallacy, and logical fallacies are always paths to error. Publicly promoting logical fallacies is demagoguery (tautologically), and demagoguery is necessarily always evil.

    > Nor am I aware of any element of tu quoque – or any defense whatsoever aside from satire – from the right on this matter.

    I linked to some of the worst offenders in the post at SelfAdoration.com. Folks reading here should click through — and then click through again from there. Linking is about credibility, and I have always been scrupulous about it.

    We are not talking about satire. We are talking about verbal gang-rape by internet — an outrageous group-scourging of a “goat” — the hate-object without which no mob can exist — as an expression of group cohesion.

    • Nathan Stocker

      The exhortations against mobbing up recalled to me some comments I made on reading http://www.rationaloptimist.com/blog/the-ancient-cloud.aspx, which makes some decent points about how “crowd-sourcing” and “the cloud” are not so much completely new categories as amplifications and new forms of very old modes indeed. One line struck me as particularly unfortunate, however:

      “Human technological advancement depends not on individual intelligence but on collective idea sharing, and it has done so for tens of thousands of years.”

      This unfortunate formulation blanks on exactly what is possessing and creating the ideas whose collective sharing leads to technological advancement, even while naming that very “what” in the immediately prior clause. It’s like he’s rejecting the ultimate cause in favor of only the proximate cause. What I’m saying, of course, is that it is individual intelligences that create, pass on, learn, dissect, refine, and hold the ideas which, collectively, generate technology and make civilization possible. It’s silly to say that human technological advancement does not depend on individual intelligence, since it’s silly to speak of collective idea sharing without individual intelligence. Like any blindly functional system, collective idea sharing (or an economic system, or a political system, or the advancement of technology and civilization) is not a function of the will of any particular individual intelligence. But it surely doesn’t function without the existence of a large number of individual intelligences.

      Another silly move often made near this conceptual space is to treat “collective idea sharing” as if it generated an individual mind. An adjunct here is that collaboration (collective idea sharing) is confused with the normalization of ideas across individuals; individual intelligences are not allowed to flourish or function as such and so the very fount of collective idea sharing is shut off.

      • Greg Swann

        > “Human technological advancement depends not on individual intelligence but on collective idea sharing, and it has done so for tens of thousands of years.”

        I like your read better. Any knowledge is particular to an individual. New knowledge must originate within an original mind. No other means is possible. Ridley is conflating aggregations of events with the unique events making up that aggregate, the Fallacy of Composition. This particular species of that fallacy should be called the Monkey-See/Monkey-Do Fallacy, FWIW, since most of the behaviors he is saluting consist not of anything resembling true creativity, but simply of a mimicry of original thinking the copyist did not truly understand. This is the behavior we expect to see from any sort of collective- or committee-based “creativity.” All true human creativity emerges fully-formed and fully-armed from some one Zeus, and is typically rejected at first by the mob.

        The anthropological argument he is making would seem to be the evolution of communicable Fathertongue — speech. If you can communicate ideas only by demonstration, your circle of trust will be very small. But if you can negotiate — negotior in Latin, trade — with strangers, the world is yours.

        Am I reading him right? Is he a Cato-style collectivist “capitalist”?

        • Nathan Stocker

          > New knowledge must originate within an original mind.

          And old knowledge must spark anew in each mind that acquires it. That’s true even for copy-cats copying. If that could count as some tiny kernel of “creation”, there, heh. I’m not sure precisely what Ridley’s saluting – I think there’s room for taking him to praise not *simply* mimicry, but incremental iterations by +1 not-quite-copy-cats.

          Yes, the transmittal of ideas is key, and language, key to that (and to their origination, for that matter). And particularly writing, yes? The backwards-moving Tasmanians of 10000 years ago must not have had writing, else it seems they would have tread water at the least, I’d guess.

          > Am I reading him right? Is he a Cato-style collectivist “capitalist”?

          I dunno anything about him beyond that one article, but my sense was congruent with yours. It was just something I came across one day, tunneling through the ‘tubes. (How many degrees of separation does it take to hit, say, 98% of the web from Kevin Bacon, I wonder?)

          • Greg Swann

            > And old knowledge must spark anew in each mind that acquires it.

            I agree. There has to be a shared knowledge of the notation system, as well, for the transmission of previously-discovered knowledge to take place.

            > I think there’s room for taking him to praise not *simply* mimicry, but incremental iterations by +1 not-quite-copy-cats.

            But these would not happen without the original idea. Until 2007, all smartphones were copies of Handspring’s reinvention of the Apple Newton. After 2007, all smartphones are iPhones. The +1’s consist of completely meaningless changes like the color of the shell. None of the copiers could have or would have come up with the iPhone on their own.

            I like the argument Rand gives to Ellsworth Toohey: “And it is said that but for the spirit of a dozen men, here and there down the ages, but for a dozen men — less, perhaps — none of this would have been possible.”

            One of the things I hope for, going forward, is greater creative courage on the part of individual people. Pretend-animality has been a very poor strategy for the human race. Accepting and embracing our humanity should have salutary results in the long run.

            • Nathan Stocker

              I might agree that there are only a few Big Ideas, adjudged by certain criteria, with a certain level of conceptual resolution, etc. But I wouldn’t so categorically dismiss all the evolutionary changes that make The Same Thing (again, with the same caveats) Very Much Better. I’m pretty glad that I don’t have to use a TRS-80 for my “personal computing”, for example. Smartphones are just computers and phones conjoined and made very portable. Where’s the Big Idea there? It’s just an awesome combination of pre-existing stuffs and vectors. And that’s fine: awesome hybrids are awesome.

            • Greg Swann

              > Smartphones are just computers and phones conjoined and made very portable. Where’s the Big Idea there?

              Paradigm shift. The iPhone has changed the entire world of computing in five short years. I’m not arguing the particulars in any case.

              Google is giving me gas about this account, so the link may die: http://www.youtube.com/watch?v=DVoTTnf07jI

              Most copying is based not in any sort of originality but in collective thinking — which means not thinking.

            • Nathan Stocker

              > Paradigm shift. The iPhone has changed the entire world of computing in five short years.

              Sure. And look what caused that shift and changed the world: just another iteration of the way things had already been going for decades. Smaller, easier to use, multifunction, affordable. Peeps were talking about portable (even implantable) computer-communication devices over 20 years ago, and envisioning then every major function the iPhone bundled. (What they didn’t envision is all the *effects* of achieving it.) If anything, the iPhone came later than the enthusiasts were hoping. It’s a good implementation of much older *ideas*.

              I feel like I’m staking out a synthesis between two twains I don’t think never meet, while you’re representing one against the majority of the other.

              (Sorry, can’t watch the video here: no speakers on this computer, and I don’t have a smartphone, heh.)

  • Nathan Stocker

    The David Cartmans want to have their not-P and Q it too.

    (I almost put that on the Facebook link, but I s’pose it’d be a mite too cryptic for anyone’s good, there. Or maybe not.)

    • Greg Swann

      > Cartmans want to have their not-P and Q it too.

      I love it. That’s just fun.

      Here is a line from my past, the psychology underlying a whole lot of rationalizations that start with the word “well” and end in an exclamation point: “Her hatred is her proof of her victimization, and her victimization is her license to victimize.”

      • Nathan Stocker

        Seems valid, though I’m not sure how much experience I have with being around overt haters and victimizers (unless you count myself in relation to myself).

        How much of that psychology would you say operates conceptually vs. subconceptually? (However those do or do not map to consciously and subconsciously.)

        • Greg Swann

          > How much of that psychology would you say operates conceptually vs. subconceptually?

          All purposive human behavior is necessarily conceptual in origin. If you doubt this, listen for the rationale — the lie — which will have been pre-fabricated in advance to deflect your challenge. The reasoning can be poor, of course, and the underlying thinking can be influenced by biological pre-dispositions or topical circumstances, but purposive human behavior can only be conceived of, effected and justified in Fathertongue.

          • Nathan Stocker

            Does underlying psychology *just mean* something like underlying conceptual content, by your usage/understanding? That’s really what I was getting at by my second question. I agree that purposive human behavior is conceptual, that how genes manifest is based on how we choose to play the hands we’re dealt, etc.

            You distinguished between the rationalizations themselves and “the psychology underlying” them, so the question was not about the rationalizations (of purposive behavior) themselves, but their underlying influences.

            • Greg Swann

              > Does underlying psychology *just mean* something like underlying conceptual content, by your usage/understanding?

              Which person? Which events? If you’re asking an abstract question, my guess would be that most behavior is habituated, and much of that habituated behavior will have originated when a particular person was without Fathertongue — a toddler. This is not a necessary consequence, but until you think about a particular habit, you haven’t thought about it.

              Even so, there will be a fully-elaborated logically fallacious pretext behind any evil act — where evil is defined as the actor doing something he knew in advance was morally wrong by his own moral standards. Often the actor will volunteer this lie without being challenged, so great will be the need to rationalize the behavior.

              > You distinguished between the rationalizations themselves and “the psychology underlying” them, so the question was not about the rationalizations (of purposive behavior) themselves, but their underlying influences.

              They’re superficially different but all the same: Manifestations of mindlessness. I’m only interested in mindlessness to the extent necessary to exterminate it. It’s not appropriate to a fully-human life. It’s vestigial animality — really pretend-animality.

              > I agree that purposive human behavior is conceptual, that how genes manifest is based on how we choose to play the hands we’re dealt, etc.

              I’m an INTJ in Myers-Briggs and a High-D in the DISC system, naturally dominant but solitary by preference, a Lone Wolf in dog ethology. It is very easy for me to tell anyone to go to hell forever. Taped to my iMac, in huge type, is a sign that reads, “Where would I be without her?” Fathertongue trumps biology and habit, but not without effort.

          • Nathan Stocker

            (Hmm, not sure where this comment will appear. Where I’m typing seems downside-up, but I can’t reply to the bottom-most point of this branch, so here goes…)

            Yes, asking about the abstract.

            In terms of what I was trying to ask, I read you this way: Whether the self-license to victimize became habitual behavior before the age of reason or not, such behavior that persists as the victimizer matures must necessarily persist with some kind of conceptual rationalization, precisely because of what maturing as a being possessed of conceptual consciousness means. If that’s more or less it, then yeah, I’ll go along with that.

            I don’t know the DISC system at all. Myers-Briggs has me as INTP, with the P/J being very, very close, and the N kinda iffy too, iirc. My main memory of it now is that many of the items presented false dichotomies I couldn’t really answer except by resort to arbitrary ad hoc parsing.

            • Greg Swann

              > Whether the self-license to victimize became habitual behavior before the age of reason or not, such behavior that persists as the victimizer matures must necessarily persist with some kind of conceptual rationalization, precisely because of what maturing as a being possessed of conceptual consciousness means.

              I might split that up a little.

              The self-abstracted idea of a license to victimize would require Fathertongue. It’s too complex for a toddler. The kind of behavior you could observe in a toddler and then later expect to see rationalized in Fathertongue is raw dominance or submission: Rolling over people or being rolled over by them. That stuff is really simple in toddlers, but it will be manifested later in complex, rationalized moral lapses.

              I didn’t talk about this in the book, but the Age of Conceptual Fluency — learning to think in Fathertongue — is driven by the child’s own epiphany about the existence and nature of Free Will. Each individual child discovers that volition is not causally reliable in the way that physical causality or even animal behavior is, and, in consequence, that people can be manipulated by fallacious appeals. It’s fun to watch this happening existentially, in real time, because young children are such poor liars that behaviors that can be masked in older children and adults are pellucid and obvious.

              Much of the irrational behavior that psychologists are very careful never to cure originates at this time in the child’s life. I’m always interested to hear what any particular person’s first conscious memory is. Most people begin the uniquely human life with a memory of an outrageous injustice. The outrage is not the injustice itself, but the child’s to-then unchallenged belief that the adults around him would be reliably just in their behavior. When you get to someone’s core driver — the habituated behavior to be found in all of that person’s unexamined actions — you’ll find an infinite number of replicas of that original injustice. I’m not prepared to defend that claim in any comprehensive way, but I see it again and again. My wife tells me that the enneagram idea is built around similar premises, but I’ve never looked into it.

              > My main memory of it now is that many of the items presented false dichotomies I couldn’t really answer except by resort to arbitrary ad hoc parsing.

              I can test ESTJ — sales monster — very easily. In the DISC system, I am 100% D — driver, doer, done — but I can speak to any crowd, no matter how large. That’s not extroversion, it’s getting the job done. The goads that drive our behaviors are only meaningful when we don’t think. When we do, we do what we purpose to do. That’s why all of the behavioral sciences, and all the biological sciences that speak to human behavior, should be held in extreme doubt: They measure only what does not matter in human behavior and ignore everything that does.

            • Nathan Stocker

              Yeah, I almost generalized out the “self-licensed to victimize” bit there in my take, tossed in an aside noting that that particular is post-toddler territory, and then proceeded. I see now I should have, heh.

              I’m told I started to talk before I was 8 months old. I don’t really know what my oldest memory is – seems to change from time to time. Now I have memories of memories that I’m not sure I’d say I have direct memory of, but they’re nothing really to do with injustice. Just snapshots of various things from the low-tech commune where I was born in British Columbia, and from the ride back to the States when I was 3.5 (btw, I didn’t see TV, hear broadcast radio or experience indoor plumbing till then). I can’t even name offhand what my earliest conscious memory of what I regard/ed as injustice is. In general, my response to injustice done to my person is less “OMG, you wronged me!” but more “WTF do you think you’re doing, living like that? Damfoo, you’re doing it wrong.”

              > They measure only what does not matter in human behavior and ignore everything that does.

              You don’t think there’s *some* non-negligible amount of “They measure the goads that drive our behavior, which are essential whats we have to think about unless we want to be mindlessly driven.” ? (I dunno, it’s nothing I’ve studied.)

            • Greg Swann

              > You don’t think there’s *some* non-negligible amount of “They measure the goads that drive our behavior, which are essential whats we have to think about unless we want to be mindlessly driven.” ?

              I covered a lot of this in Man Alive! — the prejudices undergirding modern scholarship.

  • Jim Klein

    Wow…what a conversation. I wanna address the question about the consciousness of “Her hatred is her proof of her victimization, and her victimization is her license to victimize.” This is absolutely correct, but contra Greg, I don’t think it needs to develop as consciously as he implies.

    I understand that’s counter-intuitive, since nearly all things conceptual–and this surely is one of those–appear to be at the conscious level. This is why I prefer my wild theorizing on this particular point to both Rand (we sense and perceive as dogs and become human as we abstractly conceptualize) and Greg…basically the same process with a Master Originator who stuck with the Fathertongue until it spread.

    In my wild theory, we’re humans through and through. That is, the very process of sensing and perceiving developed as a distinctly human function. It’s not so wild, since the distinguishing characteristic is the ability to “distill out” attributes and symbolically represent THOSE. As we look at the ball, even as an infant, we see the “red” distinctly from the whole ball, as well as being round and whatever else. It is THIS that other animals can’t do, and the subsequent classification–internally symbolically, as opposed to “direct memory”–allows us to develop what Rand called our “first-level concepts.”

    This dovetails very nicely with things we do know, especially that an infant goes on to symbolize these classifications themselves, as perceptible phonemes (morphemes, I think they’re technically called). Plus, this otherwise small jump from how other animals perceive and store their perceptions, accounts for such a huge difference in us compared to them, even as we are so close genetically. As Greg has so well shown, once you can symbolize and then abstract the counterfactual, it’s off to the races with the subjunctive, alternatives, decisions and morality overall.

    So back to the original question. Should my theorizing be close, at least in principle, then it is quite possible to store various “symbolisms” without consciously forming them. Indeed, should the victimization in question happen at the critical developmental age of early conceptual formation, all sorts of crazy brain formations can happen. As I understand it, this is confirmed by the evidence of chemical deformations in the brain structure. This is not like today’s fancy of brain-scans that show higher activities here and there coincident with various choices (being violent, homosexuality, etc., etc., etc.). Most of those IMO are instances of the Post Hoc, ergo Propter Hoc Fallacy.

    This is rather actual chemical changes that arise from either severe physical trauma, or the associated disconnect from underlying subconscious assumptions (I’d call them identifications) concerning comfort, safety, love, etc. Ultimately I agree with Greg about how these manifest in later life…we do things only by choice, and those choices arise from the ENTIRE conceptual (symbolic) hierarchy that we’ve developed. I just thought it worthwhile to note that it is conceivable–as Nathan is sort of intimating, I think–that some of the bases of this conceptual hierarchy can arise without our choice or even knowledge.

    • http://splendorquest.com/ Greg Swann

      > that some of the bases of this conceptual hierarchy can arise without our choice or even knowledge.

      I don’t want to dispute your etiological claims, but I agree with this. There are a great host of irrational behaviors that can take root in the minds of particular people at the Age of Conceptual Fluency. Sexual fetishes like latex or shoes, for example, start here. Significant events at the time we are learning to apprehend the world in Fathertongue can be strongly influential for life.

    • Nathan Stocker

      I can’t say I find anything to dispute in Jim’s words, either. (I also can’t say as what I can or cannot find to dispute about such a topic (or any topic, for that matter) has much weight, but y’no. heh) And he grokked what I was getting at, and rather precisely, too: I was *not* asserting anything except that it’s conceivable. And yes, as I take Greg to say, even those psychological bases initially forming without choice or knowledge are nonetheless potentially subject to later conscious apprehension. But you have to do *that* thinking; to the extent you have *not* done so, you are mindlessly driven.

      Hm, searching for something to add or amplify, it seems to me there’s a connection between the ability to pick out “redness” or “roundness” *as if* they were things unto themselves (well, they *are* “things” – mental “things”) and the ability to conceive of counterfactuals. They’re both kinds of abstraction, but I mean more specifically: abstracting itself is of a piece with the conception of counterfactuals, in that redness is never distinct from the thing possessing it as an attribute, *except* as so conceived by and in a mind – also the only place counterfactuals can “obtain”. First you can envision redness and roundness apart from red balls; later you can envision that there is no ball, or that it’s a Rubik’s cube instead, etc.

  • Jim Klein

    It’s trivially true that any storage of any perception by any means is not the thing itself, so in that sense it’s “counterfactual.” Still, I think there’s a worthwhile distinction to be had between the storage of an existentially (that is, sensually) derived perception, whether “direct” or symbolic, and the counterfactual abstraction of something that has never been sensed or known.

    Also, while you’re right that the red of the ball is in no way distinct from the ball itself, it’s not necessary that we conceive of it as such, for it to be taken as a singular attribute. Indeed, my wild theory hypothesizes that we do exactly that as we sense, “distilling out” various attributes and attaching an “internal symbol” to them. Thus does it become a snap to rapidly begin our vast classification system.

    • Nathan Stocker

      Yes, I wasn’t trying to say there’s no useful distinction there. (I’m not sure what a useless distinction would be, for that matter, other than, maybe: not a distinction.) Just: hey, there’s this oblong, and probably useless similarity. (I suppose I should have paid more attention to what its probable uselessness probably indicated, heh.)

      And again, I didn’t mean to say that you have to conceive of “redness” to be able to say “Yeah, that’s red. That’s red. Red. Not red. Red. Not. Not. Red.” Nor did I mean to suggest that the only way to pick out attributes is to pick them out as things themselves. Speaking of red, such were the flags raised when I went into that paragraph with the lead attitude of “searching for something to add or amplify”. Man, heh, it’s a perfect example of why I don’t do this writing, commenting thing (whatever it is I’m doing now) very often. Yeowch.

      Let’s imagine that ill-conceived paragraph said instead: The only place the picking out of red or the conceiving of counterfactuals happens is in conceptual minds.

      But then, if that’s all it was, what was that adding? Nothing. And far too much of it. Oy.

  • Jim Klein

    “The only place the picking out of red or the conceiving of counterfactuals happens is in conceptual minds.”

    Those are two very different events. Clearly counterfactuals cannot exist but for the ability to abstractly conceptualize them, or at least I think that’s pretty clear.

    As to how sensory impressions are stored in other animals’ minds, I think that’s less clear. To me, the likely difference between us and them is the referential symbolism, as opposed to a more “direct” storage. I mean, it’s referential in any case, but I think there could be a big difference between some sort of repetition of the neural status that occurs on the sensing, versus a symbolic denotation of that status. “Picking out the red” could occur in either case. But this is all just wild theorizing anyway, killing time until someone has something to say about Greg’s book.

  • Nathan Stocker

    Yes, they are two very different events. Very different events that happen in the same “space”. I think maybe that’s why it struck me as anything to point out, at all. But, unless the specific way of “picking out red” I meant was clear, it’s still prolly head scratching time. Perhaps putting quotes around “red” would have helped to indicate I was indicating “the picking out of red” qua symbolic denotation. Animals can distinguish colors/shades, but not *as* colors. Infants can distinguish red *as a color*, but maybe not have “redness” yet. Etc.