
Elliot Temple on September 13, 2020

Messages (174)

Overreaching, greatness, and ~meta-knowledge

Consider people who are *great* (like exceptional) at something in particular.

One of the things that makes them great is ~*meta-knowledge*, like knowledge about context regarding their *actions*.

I watched a bit of a recent Sea of Thieves WR speedrun - particularly the events during 7:25:00 -> 9:00:00 (it's like a 21hr run).

They lost like 1:20:00 from a choice to steal another crew's loot b/c that crew chased them for a decent while.

A third ship joined in for a bit, too.

Near the end of this chase (8:49:00) they spot another sloop (ship of 2 crew) and one guy jokes about taking this new ship's loot.

The two speedrunners have been talking about what to do at this point, and particularly risk/reward tradeoffs for how to sell the loot.

The two guys are good enough to - ordinarily - take on another sloop no problem.

after all, they just fought off 2 other crews of sizes 4 and 3.

their choice not to go after the sloop (and the humor of the joke) is based on this like ~*meta-knowledge* type stuff.

it doesn't matter how great you are at something, even the best ppl in the world know there are some challenges they won't win (or it's too much of a risk), and they choose to back off. they're not OP just because they're the best in the world.

Generalising this means something like: the ~meta-knowledge is *at least* as important as the knowledge about how to do the skill well (which is more like technical knowledge). Or, at least it's that important at high levels.

Basically, this is like "don't overreach", or rather, if you do overreach, don't expect to *still* be great. the ability to pick challenges is part of the reason great people are great. sorta like flying *close*, but not *too close*, to the sun.

It also relates to knowing your limits, either when something is too big a task or when (and what) to learn before doing it.

This offers a bit more clarity for an ongoing conflict of mine - something to do with learning styles and methods. I intuitively think that 'exploratory' style learning (with a high(er) error rate) has benefits. and I mean it's not as bad as doing nothing at all (I guess it could be sometimes), but it's not as efficient as directed and non-overreaching learning.

I think part of the reason I have this conflict is in essence thinking too much of my own skills. That's true even tho I went through a few ~breakpoints early on in the *Tutoring Max* series. (Breakpoints might not be the right word, but I think there are like significant points of increased ~reach when we adopt new and better ideas about ourselves.)


Max at 1:30 AM on September 15, 2020 | #18021

Mini post on post formatting

curi.us doesn't format exactly like markdown does. with markdown a newline between consecutive sentences doesn't make a new paragraph, but it does here.

I wrote the above in vscode (posted also on my site in a new category), so I wrote it like normal markdown - using linebreaks to make sentences clearer and easier to read/write while editing, etc. (That's how the paragraphs are meant to look.)

The solution is I'll need to check for that beforehand. I could write a short script to strip linebreaks between consecutive sentences, but not sure that's worth it.
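A minimal sketch of that kind of script in Python (assumptions on my part: single newlines within a paragraph should just be joined with spaces, and blank lines should be kept as paragraph breaks):

```python
import re

def join_single_linebreaks(text: str) -> str:
    """Join lines separated by a single newline into one paragraph,
    keeping blank lines (paragraph breaks) intact."""
    # split on one-or-more blank lines to get paragraphs
    paragraphs = re.split(r"\n\s*\n", text)
    # within each paragraph, merge its lines with spaces
    joined = [" ".join(p.splitlines()) for p in paragraphs]
    # reassemble with a single blank line between paragraphs
    return "\n\n".join(joined)
```

e.g. `join_single_linebreaks("One sentence.\nAnother.\n\nNew para.")` gives `"One sentence. Another.\n\nNew para."`, which is what curi.us would render as intended.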


Max at 1:34 AM on September 15, 2020 | #18022

Inefficient learning is like eating the seed corn

> an ongoing conflict of mine - something to do with learning styles and methods.

Sometimes I prioritize the wrong thing. I'll spend time on fun (and maybe even slightly useful) 'intellectual' activities like coding, instead of doing more structured, efficient, and goal directed learning. That's like eating the seed corn.

It's like: I end up fed, and I still have some seed corn left over, but the harvest isn't going to be as good. What's the point of learning and thinking if not for the harvest?

indirectly related: #18025 and https://curi.us/2378-eliezer-yudkowsky-is-a-fraud


Max at 2:22 PM on September 15, 2020 | #18032

#18032 Metaphorically seed corn = capital = e.g. machines. it's stuff you can save for later. time is somewhat different in that you can't save it for later. but, like money, you can spend time on things with later benefits (investment rather than consumption).


curi at 2:29 PM on September 15, 2020 | #18034

#18034

> time is somewhat different in that you can't save it for later. but, like money, you can spend time on things with later benefits (investment rather than consumption).

I'm not sure if we disagree on something or not. I think we roughly agree but I'm thinking of time spent in a specific way (just a subset of the time we get). For context, I read curi.us/2378 a few minutes before having that idea. I liked these bits particularly (and liked being reminded of them)

> capital goods, not consumption goods

> accumulation of capital increasing the productivity of labor

I think time can be sometimes seen like money and other capital goods. How do people save money? One option is a bank account, but that performs poorly, and is sort of like investing in loans/debt, anyway. Better investors save money by spending it *on capital goods* (and they choose the goods). After they spend money, they don't have it any more, but they have something else they can exchange for money later.

I think *time spent on learning* is similar, but not all time spent is similar. Granted, there's no bank account for time, but you can spend time now so you get more of it later -- that's one of the reasons to learn and think, you - in essence - get more time in the future because you avoid making mistakes or being slower than you could be. In that sense it's like investing in productive capacity. There's a higher upfront cost, but you get a higher capacity and larger RoI than the alternatives. The choice to spend time learning ineffectively seems to me like spending some chunk of your factory budget on hookers and cocaine; fun at the time, but it's in opposition to the main goal.

Similarly, by analogy, learning skills that don't end up helping you, but learning them effectively, is like market risk. Not every investment makes a profit, but diversification helps, and the better you are the less you waste.

Time spent on things like downtime is different from normal money; that's more like $100 of food stamps you both get once a week and have to spend the same week. You might only be able to spend it at low-quality grocers, but avoiding spending it only hurts you.

A related mistake is trying to spend downtime doing pseudo-learning stuff. That's more like trying to invest your $100 food stamp (not going to go well). I find trying to do ~learning stuff when I'm tired etc. often means I stay up later, sleep worse, and have less high-capacity time for important things.


Max at 2:53 PM on September 15, 2020 | #18036

Eating seed corn is like disassembling machinery for scrap metal, which is different (more destructive) than leaving it idle for a day (which sounds reasonably similar to spending a day of your time unproductively).


curi at 2:56 PM on September 15, 2020 | #18037

#18037 yeah okay, I see what you mean. I've changed my mind on the quality of my analogy. (I don't think it's super bad or anything, just not as good as I originally thought.)


Max at 3:23 PM on September 15, 2020 | #18038

Perimortem on intuitive response to #18037

My intuitive response (which would be put a bit defensively) is something like: disassembling machinery is like eating *all* the seed corn, and leaving it idle is like skimming a bit of corn off the top. Things keep working; there's still productivity and returns, but less than otherwise.

(note: I think this is valid, and it's why I don't think my analogy was all bad)

I think that intuitive response is wrong though. It's subtly moving the goal posts (similar to e.g. a "strategic" clarification), and would be expressing an idea like: "we're both right, we should blame miscommunication". That'd be dishonest though, because:

a) I didn't see some limits of the analogy that I do now - this contradicts the idea of miscommunication being a primary issue (it's not important if curi and I understood each other fully in every way; we understood each other sufficiently), and

b) the reasonable next steps from a miscommunication would be to figure out how to avoid it. Some miscommunications are due to like ~inferential distance but that doesn't make sense here. The easiest solution (if it really was miscommunication) would have been for me to be clearer originally. If I advocated that (and claimed I could have done it) I'd be pretending like there wasn't ever an issue; at the very least my lack of clarity would be an issue. Maybe I couldn't have been clearer for lack of knowledge, in which case it'd still be dishonest--and evasive--to claim a miscommunication b/c that wasn't the problem.

I don't know any way that my intuitive response would have been good, which is the reason I wrote this perimortem.

I'm not sure if putting the response in this perimortem is like a roundabout (and/or cowardly) way of trying to say the idea anyway. However, I think writing the perimortem is a better alternative than making the titular reply, so I'm satisfied for now.


Max at 3:23 PM on September 15, 2020 | #18039

> I intuitively think that 'exploratory' style learning (with a high(er) error rate) has benefits.

Whether something is an error depends on your goal. If your goal is to get it correct, exploring works badly. If your goal is e.g. to get a rough overview, exploring works well.


curi at 9:42 PM on September 15, 2020 | #18044

Max's postmortem on #18030 #18043 #18050


IR wrote (addressing curi):

> i feel very much like i have gotten some of these ideas from you, but i dont know which things youve wrote that i got these ideas from. and i dont know how much ive changed them.

I asked IR:

> Otherwise, does it matter how much you've changed your mind?

which didn't make much sense. Context: #18030 #18043 #18050

I think 2 main things happened:

1. I wasn't careful when reading IR's comment, so missed important details / relations. (i.e. he was talking about changes to curi's ideas in his head, not changes to his own pre-existing ideas in his head)

2. I've been thinking recently about how my own ideas have changed over the last ~3 months.

(1) allowed me to ~*skip between trains of thought* without noticing. I ended up thinking about IR's comment in terms of (2). My question to IR makes more sense in this light.

Beyond the issue of miscommunication in general, there's a bigger problem I should care about and deal with. That is: responding earnestly to someone (usually) takes longer than reading what came immediately before. If I spend time responding to what I *thought* they wrote (but I'm wrong about that) then it's, in essence, wasted time. Maybe there are some benefits, but they're lesser than would be otherwise.

To avoid this sort of thing the obvious answer is reading stuff better. That doesn't feel super actionable tho b/c just concentrating more on ~*everything* I read is not v efficient, esp if this sort of issue isn't super common.

I could try re-interpreting what the person says, like re-writing out what I thought they meant before replying, but how would I know if that were right/wrong? It might make it clearer to me if I was *unclear* about what they thought. It doesn't help if I think I know what they meant and that idea is clear and consistent in my mind (as it was in this case).

This issue was - I think - that the reference "these ideas" is somewhat ambiguous (or maybe just tricky). I think IR's full sentence (expanding "them") is something like:

> and i dont know how much ive changed [my version of ideas I got from your ideas relative to the original ideas you wrote]

So, this might be a better sketch of what to do:

- recognise tricky references (ideally automatically)

- when tricky references occur, expand them out (there could be more than one possibility)

- criticise the possibilities so I get just one

- if I can't and it's ambiguous still, ask a clarifying question (listing the possibilities too)

- optionally respond to each possibility if short enough or easy enough

- if I get one and it's reasonable I can just respond

- if I get one and I'm not sure it's reasonable, ask a clarifying question and respond at the same time

the next step in this action-plan-sketch is "recognise tricky references (ideally automatically)". **The first part of that is introducing a breakpoint (in the coding sense) on tricky references.** I can do this a bit by paying more attention to references in general, trying to quickly figure out what they mean (and eval-ing if I know what they mean), and taking action if I don't. If I'm not 10/10 confident on the reference I should stop and investigate.

Okay, this feels like a decent PM and plan. Feedback welcome/appreciated. It was a bit trickier than normal to figure out what to do because a plan like 'learn2read' didn't feel good enough.


Max at 7:49 PM on September 16, 2020 | #18051

>> I intuitively think that 'exploratory' style learning (with a high(er) error rate) has benefits.

> Whether something is an error depends on your goal. If your goal is to get it correct, exploring works badly. If your goal is e.g. to get a rough overview, exploring works well.

I hadn't considered this. It makes sense. That said, I don't think it's what I had in mind.

The italicised bits of this example are a bit of an outline.

An example is the route-finding-app I made for my SSOL speedrun: *I spent way too long* trying to get the PNG of the map as a background image behind the lines and points that get drawn. *Eventually I managed it* (after lots of different attempts and integrating bits of code I found online). *The main difficulty* was that the original author of the (simple) travelling salesman program used Haskell's GLUT library which is basically a *lowish-level* OpenGL lib (and *I'm not familiar* with low level opengl stuff). There are higher-level ones that make this stuff easy. *I only really cared about the outcome but it took way longer than I wanted it to.*

I didn't read a manual or in-depth tutorial, instead tried to fumble my way through. That is sometimes faster. But you can't answer basic questions like 'how long is left till I finish?'

In some ways my process involved exploring as you describe. I toyed with the idea of switching to a higher level library, looked for higher level stuff that exposed/integrated with the lower level stuff (no luck), and read bits from the middles of some advanced/in-depth tutorials.

But, crucially, the exploring was a side-effect of a particular problem with the other bits. I'd say my choice of method when trying to get the PNG to draw on-screen was exploratory learning, so it's different to exploration as you describe (though somewhat related).

Eventually I found some code someone had written that was close enough to what I needed to make it work. There was a weird interaction with other code I'd written tho (involving drawing text) that meant the first line of text was the right size but all the other lines didn't appear on screen. I managed to fix that but it took another like 30 min of experimentation.

A better method - in hindsight - would have been to just do a tutorial for Gloss (an alternative opengl-based library, but much higher level) and recode what I'd already written, and the opengl bits that came with the app originally. I could have gone through enough of a tutorial on Gloss given the amount of time I spent (like 5hrs+).

I did learn other stuff during that time, but I didn't feel like the time was particularly well spent. I don't expect to use OpenGL + Haskell much in future, so it's not like this is particularly useful outside this one thing I wanted to do.

In some ways I do this stuff for the challenge, like thinking "I should be able to do this, so I will", but I don't think "I should be able to do this, eventually, but should I bother, or should I look for a different way to do the same outcome?"


Max at 8:10 PM on September 16, 2020 | #18052

TCS and passions

I was thinking about a TCS issue yesterday. I have half a soln. It's about a child's passions.

There's a possibly coercive idea I have that I think is the *common-er* version of the problem (maybe), then there's a more general version.

the possibly coercive version is like:

> I want my child to have a passion for maths (coercive), or

> I value passion about maths in general, and I want my child to be able to develop that if they want -- I don't want to *hinder* them (coercive?)

The second formulation feels like it could be done okay--without coercion--but I don't know enough to tell for sure.

I was thinking about this in the context of **a parent who's bad at maths**.

This made me think of a possible common issue *most* ppl would run into if they tried TCS: *their skills/passions are inadequate (not broad enough and general enough) to avoid hindering the child.*

I think not being perfect is okay, but if we can avoid significant hindrance that's good.

One situation: if the child develops a passion for X and the parent isn't good at / passionate about it, they can still buy equipment/supplies, hire tutoring, find a friend who's passionate, etc. This is the 1/2 solution I mentioned.

But more broadly, how do you facilitate the *development* of a passion before it's manifested?

One thing I was thinking about is when ppl have been passionate about something and sparked something in me. A good example is Haskell and type-safe programming; a guy at a technical meetup sold me on Haskell over a beer. It took me *years* before I actually used it in production, but I was sold in 20 min.

So exposing a child to a wide range of *passionate* people--who are probs the higher-value ppl to expose children to, anyway--is maybe one way, though that could be done corrosively. If you happen to be friends with passionate ppl and they visit and talk to your child, that feels different than like *engineering situations* to trick your child or something.

I haven't looked through the archive to see what other ppl have said on the topic, yet.


Max at 12:57 AM on September 19, 2020 | #18066

correction s/corrosively/coercively

s/corrosively/coercively


Max at 12:59 AM on September 19, 2020 | #18067

Quick thought on a secondary goal of life.

I think a good secondary goal for one's life--or maybe another primary goal as yesno supports--is to live without control. By that I mean: live so that you are consistently happy with your decisions, and also make those decisions without willpower or self-control. All choices still involve something like self-control and a moral code; except that you have no animosity toward those choices; they are choices you'd always make anyway. It's sort of like having no friction.

Ofc there will always be conflicts and problems to solve, but this state is like the closest you can get to that *and sustain*.


Max at 9:00 PM on September 19, 2020 | #18080

Debate Topic (via Tutoring Max 44) -- Genes and direct influence over mind

> Genes (or other biology) don’t have any direct influence over our intelligence or personality.

I'm not sure about this. I don't think humans being universal understanders/explainers means genes *don't* have a direct influence over our mind/personality (esp. starting conditions). It seems reasonable that physical effects on the brain can have an effect on our mind/thinking (e.g. brain tumors, head trauma), and genes affect things in ways we don't fully understand, so there's room for them to have a direct effect.


Max at 7:14 PM on September 27, 2020 | #18153

#18153 What sort of effect or influence do you have in mind, via what causal mechanisms?

For example, genes could make it so we're better at integer math than floating point math. I don't think this would cause someone to be more inclined to solipsism than an alien that excels at floating point math. And there could be variance among humans, but I don't think that would cause some people to be atheists.


curi at 7:15 PM on September 27, 2020 | #18154

#18154

> What sort of effect or influence do you have in mind, via what causal mechanisms?

I'm not sure about this possibility, but it's something I've heard and it seems to be a somewhat common idea:

- temperament: Say someone has a gene that means they produce lots of some hormone. That hormone makes them angry more often / more easily.

Does this sort of thing count as a direct influence over our personality? I can see a person like this 'learning to control' themselves or something, but I'm not sure exactly what you mean by directly influencing personality.

More broadly, I see room for unknown causal mechanisms, esp. relating to things that make sense to have evolutionary roles, like social stuff. I could see some genes playing a role in how readily someone accepts static memes based around certain social signals (e.g. in-group/out-group stuff).

> For example, genes could ... but I don't think that would cause some people to be atheists.

I agree that there are ways genes could affect our brains at a lower level (like an instruction set affects CPU performance) and that this sort of effect isn't substantial.


Max at 7:28 PM on September 27, 2020 | #18155

> - temperament: Say someone has a gene that means they produce lots of some hormone. That hormone makes them angry more often / more easily.

Hormones are low level. Behaviors and emotions are high level. It's kinda like suggesting that heating a room with a CPU in it might result in video game bosses attacking more aggressively. Low level changes do not cause high level changes that have the appearance of complex design unless there's a specific causal mechanism set up to enable this (e.g. sleep or volume button on a computer).

> Does this sort of thing count as a direct influence over our personality?

You could get annoyed more when hot or cold. Does that mean heat and cold influence personality? I think how one responds to heat, cold or hormones is part of what one's personality is. But they aren't controlling your reactions. The reactions are your choice based on your ideas.


curi at 7:31 PM on September 27, 2020 | #18156

> Low level changes do not cause high level changes that have the appearance of complex design unless there's a specific causal mechanism set up to enable this

*A*: Don't we have a (rudimentary) explanation for hormones affecting thoughts, though? I know--personally--I think different things when in different moods (at least I think that's the case).

> I think how one responds to heat, cold or hormones is part of what one's personality is. But they aren't controlling your reactions. The reactions are your choice based on your ideas.

*B*: It feels like you're implying reactions are core to understanding personality, like the only way we can inspect personality is via its effect on our reactions.

I googled 'personality' and found a sensible-feeling definition about patterns of thoughts, feelings, and behaviours. Those are all based on ideas, so by that definition personality is just a collection of ideas.

----

I'm not sure if parts A and B contradict each other. I'm not super happy with this reply but I think the result might be going back to another part of the conversation.

PS, I labeled the paragraphs to refer to them, hopefully that made sense when reading.


Max at 7:49 PM on September 27, 2020 | #18157

> Don't we have a (rudimentary) explanation for hormones affecting thoughts, though? I know--personally--I think different things when in different moods (at least I think that's the case).

Are you linking hormones to moods? You bring up something about hormones affecting thoughts but then the next sentence doesn't mention hormones.

> It feels like you're implying reactions are core to understanding personality, like the only way we can inspect personality is via its effect on our reactions.

I don't think that and I don't see how my text implied it.

> so by that definition personality is just a collection of ideas.

I agree with that.


curi at 7:51 PM on September 27, 2020 | #18158

#18158

> Are you linking hormones to moods? You bring up something about hormones affecting thoughts but then the next sentence doesn't mention hormones.

Yes. I think most ppl presume a super tight relationship between them. That doesn't seem right--thinking about it now.

*Some* effect might be there, but that's like a transition between levels of emergence, and probably means I don't have a point here.

Going to drop this angle for the moment.

> > It feels like you're implying reactions are core to understanding personality, like the only way we can inspect personality is via its effect on our reactions.

> I don't think that and I don't see how my text implied it.

Given you agreed with "personality is just a collection of ideas" I'm not sure this is important to discuss unless you think so. I can explain why I thought the implication was there if you want.

**concluding comment**: I think I agree with you that hormones don't influence personality/thoughts in a substantial way (I think you agree with that at least).

I think at this point it's up to me to come up with some other causal mechanism? The only other node on my conversation tree I have to look into atm is mine about unknown causal mechanisms.


Max at 8:27 PM on September 27, 2020 | #18160

> Going to drop this angle for the moment.

Do you think you made an error? If so how'd that happen?

> I can explain why I thought the implication was there if you want.

Yes I'm curious.

> I think at this point it's up to me to come up with some other causal mechanism?

That's an option. Another is I could play devil's advocate and take the other side of the matter. Another is you could ask questions or think about stuff like how reacting to a hormone differs from reacting to an event like a sick parent, winning a competition, getting a high or low grade, etc. Our emotions and moods are causally connected to all sorts of things but the basic point is the connection is governed by our ideas: we can decide how to react to a particular event and if we had different ideas we'd react differently. The hormone/genes/etc ppl are claiming roughly that something different/special is going on in their case. Having a clearer idea of what the claim is helps with evaluating it.


curi at 8:31 PM on September 27, 2020 | #18161

> Do you think you made an error? If so how'd that happen?

Yes, will do a postmortem in a different post.

> Yes I'm curious.

Cool, will also put this in a diff post because it feels off-topic.

> That's an option. Another is ...

I want to take a bit to think about where to go from here. I didn't really consider how many possibilities there were. Some of those options I might be able to follow myself (like a thought experiment) to see where they lead.


Max at 9:05 PM on September 27, 2020 | #18164

BTW, @curi, I think it was good we didn't do the Bitcoin option today. This feels (and felt) valuable even though I don't think of myself as anything close to an expert.


Max at 9:06 PM on September 27, 2020 | #18165

Thought on why FI is special

I've been thinking about why FI is special/different. It's related to the general topic of FI and new ppl, their reactions, etc.

curi said in Discord:

>> [12:14 AM] Laozi Haym: it isn't anything new i need to watch what I say, I just ...was watching what I was saying 2 days ago on here

>

> if you're mistaken about something it's better to say it and get criticism rather than hide the error. so i generally don't like people watching what they say. and feeling pressured about it sucks too.

>

> sometimes people try to say only their highest quality ideas but they don't go through life using only those. most of the time they're not at their best. what you do when you're tired or distracted is part of your life, and should be exposed to criticism too.

I think part of FI being different is to do with the culture related to things like "if you're mistaken about something it's better to say it and get criticism", and "what you do when you're tired or distracted is part of your life, and should be exposed to criticism too."

When people come to FI they don't expect other parts of their life (maybe implied by things they said) to be questioned. It doesn't adhere to normal social norms. That's--in part--b/c those ppl and normal social norms don't value stuff like: error correction, every person and discussion being a potential beginning of infinity, the capacity for ppl to make progress (esp rapidly), etc. There is some lip service paid to these ideas, and they're taken somewhat seriously in dire situations, but they're not culturally ubiquitous, common, or expected.

That lip-service is part of the reason pointing out those things individually doesn't work to differentiate FI; everyone says it, and everyone says they're honest. But the culture is different; what's tolerated, what's expected, what's prioritized, what things are seen as important.

Even that paragraph doesn't work outside this sort of context. I don't expect it would convince anyone who didn't already understand it (at least: understand it enough to know what I was trying to get at and whether I had mistakes, etc).


Max at 12:42 AM on September 28, 2020 | #18168

#18161

>>> I think how one responds to heat, cold or hormones is part of what one's personality is. But they aren't controlling your reactions. The reactions are your choice based on your ideas.

>> It feels like you're implying reactions are core to understanding personality, like the only way we can inspect personality is via its effect on our reactions.

>> I can explain why I thought the implication was there if you want.

> Yes I'm curious.

I think this was what I was thinking:

- response to stuff like heat is a part of one's personality

- the stimulus doesn't control your reactions

- reactions are choices based on one's ideas

- so there's a chain like: stimulus -> physiological signals -> interpretation (ideas) -> meaning (ideas) -> choice of behaviour (ideas) -> response/reactions

- personality is included in this chain only via the links that have `(ideas)`

- we can't see the ideas, but we can see the response/reactions (the outcome)

- (premise) to understand someone's personality we need things we can study / think about

- the reactions and stimulus are the only parts of that we can easily agree on without like inference/explanation

- stimulus doesn't tell us about personality

- reactions and response do, though

- so reactions are key to understanding personality


Max at 6:33 PM on October 1, 2020 | #18185

Postmortem on hormones-mood link

>>>> Don't we have a (rudimentary) explanation for hormones affecting thoughts, though? I know--personally--I think different things when in different moods (at least I think that's the case).

>>> Are you linking hormones to moods? You bring up something about hormones affecting thoughts but then the next sentence doesn't mention hormones.

>> Yes. I think most ppl presume a super tight relationship between them. That doesn't seem right--thinking about it now.

>> *Some* effect might be there, but that's like a transition between levels of emergence, and probably means I don't have a point here.

>> Going to drop this angle for the moment.

> Do you think you made an error? If so, how'd that happen?

### postmortem

I implied mood and hormones were linked. I didn't explicitly mention it.

When curi pointed out I linked them I realised that I was presuming an intimate relationship and that I didn't have a good explanation for it.

There's a ~common idea that they're intimately linked. I think, in general, it's a good way for ppl to avoid taking responsibility for their reactions to stuff. e.g. women are more irritable on their period and so they shouldn't be held to as high standards / ppl should be more forgiving of them getting upset / etc. This is roughly called a 'mood cycle', which is explicitly linked to hormonal cycles of the same length (I've heard 28 days for women and 33 days for men).

When curi pointed out my linking hormones and moods I thought about the common idea and questioned it. I didn't question it when I first used it though. Why didn't I question it?

Intuition: In general when we're thinking about something particular there are ideas that are 'in the front' of our mind and other ideas 'in the back' of our mind. We are actively engaging with the 'front' ideas, but not the 'back' ideas. (Maybe the 'back' ideas could be called background knowledge but that term feels like it describes a slightly different thing.) To question an idea it needs to come to the 'front'. It's sort of like a module of code: we interact with the API but we don't interact with the internal logic. When ideas are at the 'front' we're looking at the internal logic and API, but at the 'back' we're only looking at the API. We use shortcuts to know how ideas at the 'back' interact with stuff.

So by that intuition: I had the hormones->mood link in the back of my mind and didn't think about the internal logic until curi brought it to the 'front' by pointing it out.
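The code-module analogy might look something like this (a hypothetical sketch; the class and method names are invented for illustration):

```python
# A 'back of mind' idea is like a module used only through its API:
# callers rely on the result without re-examining the internal logic.

class MoodModel:
    """A 'background' idea packaged behind an API."""

    def mood_after(self, hormone_level):
        # Public API: normally this is all a caller looks at.
        return self._internal_link(hormone_level)

    def _internal_link(self, hormone_level):
        # Internal logic: the presumed hormones -> mood link.
        # Bringing the idea to the 'front' means questioning THIS part.
        return "irritable" if hormone_level > 0.5 else "calm"

model = MoodModel()
print(model.mood_after(0.8))  # consumed via the API: "irritable"
```

Using the API repeatedly never forces you to inspect `_internal_link`, which matches the intuition that 'back' ideas get used via shortcuts without their internals being questioned.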

I'm a bit worried that this is just a long-winded way of saying something like 'lazy thinking', but it feels like there's probably more to it, so I'm okay with it for the moment.

One of the ways I could avoid this is by categorizing old 'background' ideas (like the hormones-mood link) as stuff I need to reconsider before using. In some ways it doesn't matter much if I get ~lots better at thinking WRT 'front' ideas, but keep using bad ideas as foundations without questioning them. So I need to make a habit of questioning ideas I use as a foundation if I haven't considered them since improving my thinking. There are practical limits on this, like lots of my preexisting ideas are fine (or at least fit-for-purpose at the time) and reconsidering them consistently would be significant overhead. If I'm using ideas as part of my reasoning, though, that's a good reason to reconsider them, at least briefly.


Max at 6:55 PM on October 1, 2020 | #18186 | reply | quote

#18185 Be careful with complex interpretations of other people. Often you should check if they agree instead of assuming you got it right. And I don't think I said stuff that corresponds to your "core" or "only way".


curi at 7:48 PM on October 1, 2020 | #18187 | reply | quote

#18187

> Often you should check if they agree instead of assuming you got it right.

I think I was trying to do that with:

>> It feels like you're implying reactions are core to understanding personality, like the only way we can inspect personality is via its effect on our reactions.

If that wasn't clear, is there a good way to do it better? I could explicitly say "to check I have this right, are you implying ... ?". That feels cumbersome though.


Max at 7:58 PM on October 1, 2020 | #18188 | reply | quote

#18186 ok so how would you revise your original claim:

> Don't we have a (rudimentary) explanation for hormones affecting thoughts, though? I know--personally--I think different things when in different moods (at least I think that's the case).

(you may want to grab more text/context to also revise)


curi at 7:59 PM on October 1, 2020 | #18189 | reply | quote

#18188 "feels like" is kinda vague but generally (when there aren't clear emotions involved) reads similar to "i think". i don't read it as a question or requesting confirmation.

A question version at around the same length is:

> Are you saying reactions are core to understanding personality, like the only way we can inspect personality is via its effect on our reactions?


curi at 8:01 PM on October 1, 2020 | #18190 | reply | quote

#18161

>> I think at this point it's up to me to come up with some other causal mechanism?

> That's an option.

I have a few ideas for causal mechanisms:

* genes encode some ideas which are 'given' to us early in life

* so there could be flow-on effects

* this isn't really a _direct_ influence on thoughts, though.

* or maybe: ideas have different classes of components e.g. like ideas about 'relationships between people' are one of those possible components. if there are optimisations the brain has that directly relate to some phenotype (like volume of that brain-part) then the weighting between generation of idea-components could differ, and thus ppl with certain genes are more likely to think of certain stuff.

* note, after I wrote "ideas have different classes of components" I strongly questioned why I wrote that, I don't think I have a good reason. I think that is reflected in the following 2 points:

* but we don't know anything about these idea-component things

* so this 'causal mechanism' is just kicking the can down the road by introducing another unknown causal relationship as part of this explanation

So I don't think I have any good ideas for causal mechanisms.

I don't think I could convince myself that genes have a direct influence over our thoughts. But I can't convince myself they *don't*, either. I can convince myself that I shouldn't believe they do.

I'm open to other ways to move the conversation forward if you have ideas.


Max at 8:03 PM on October 1, 2020 | #18191 | reply | quote

Formatting:

The dotpoints above are in this hierarchy:

- 1

- 1.1

- 1.1.1

- 2

- 2.1

- 2.2

- 2.3


Max at 8:04 PM on October 1, 2020 | #18192 | reply | quote

> * genes encode some ideas which are 'given' to us early in life

Consider a gene pool of, say, wild dogs. Using nanobots, you tinker with it. You sterilize or kill some dogs, or manufacture others, or whatever. You don't make huge changes. You just change the initial conditions. Then you leave the dogs alone for 100 generations.

Do you expect the tinkering to change the end results much? In general I don't. The selection pressures of the environment will control the results. E.g. if you make the dogs have more fur on average, but it's a warm climate, then I think they'll end up with less fur anyway.

Similarly, I don't think the initial ideas in the brain matter a lot. Make sense?

Another pov is you can build ruby on C or java foundations and have the same language. Once you add a few layers of abstraction over the initial functions/APIs/whatever, then the details of them end up not mattering (unless e.g. they were really broken or manage to cause ongoing performance issues).
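The point about foundations could be sketched like this (a toy Python sketch; the backends and names are invented, and the "C"/"Java" internals are just stand-ins):

```python
# Same high-level API built on two different 'foundations'.
# Once the abstraction layer is in place, callers can't tell them apart.

class CBackend:
    def raw_add(self, a, b):
        return a + b          # imagine this calls into C

class JavaBackend:
    def raw_add(self, a, b):
        return sum([a, b])    # imagine this calls into the JVM

class Language:
    def __init__(self, backend):
        self._backend = backend

    def add(self, a, b):      # the only API users touch
        return self._backend.raw_add(a, b)

# Identical behaviour regardless of the foundation underneath.
print(Language(CBackend()).add(2, 3))     # 5
print(Language(JavaBackend()).add(2, 3))  # 5
```

The design point: behaviour at the top layer is fixed by the abstraction's contract, not by the details underneath, which is the analogy for initial ideas not mattering much.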


curi at 8:06 PM on October 1, 2020 | #18193 | reply | quote

> I don't think the initial ideas in the brain matter a lot.

for clarity: so you think it is possible we have ideas encoded in genes that are given to ~everyone during prenatal development (or shortly after birth, w/e)?

the idea that the *initial ideas* in the brain don't have any long term significance on our thoughts (and genes can give us some initial ideas) is a stronger and different position than I thought you had.


Max at 8:12 PM on October 1, 2020 | #18194 | reply | quote

#18193 I had in mind a dog geneticist who just sorta screwed around a bit.

If he specifically tries to cause a specific result, and puts a bunch of creativity and scientific study into figuring out what changes will cause it, then he might manage to cause it. If he can predict the environment and what'll happen evolutionarily, he might figure out what to do to the gene pool to get a specific feature to be present 100 generations later that wouldn't be present otherwise.

Does biological evolution put that kind of major design effort into controlling high level human ideas like whether someone is an inductivist? No. It doesn't even have knowledge of those things (like induction), let alone knowledge of the whole future memetic selection pressures and evolution of ideas and creation of layers of abstraction and so on that'll happen from ages 0-25. To cause being an inductivist at age 25 would require not only knowledge of inductivism (as expressed in an appropriate framework that makes sense in our present day culture), it'd also require knowledge about that whole childhood and education process and how to manipulate and control it.

How could genes do all that? And even if they theoretically could, there were no selection pressures to cause them to do it in general. You can pick tons of ideas – like that painting is better than sculpture, or that math tests should ban calculators, or that Uber should be allowed into cities immediately despite complaints by taxi drivers – and it makes no sense that genetic evolution would have set things up to control that. Maybe you could try to come up with a few special cases and an explanation of a causal mechanism, but the standard thing is no causation like this.


curi at 8:14 PM on October 1, 2020 | #18195 | reply | quote

#18194 I think our genes set us up with adequately powerful and generic hardware + OS + maybe some initial default apps that are replaceable. I don't think these end up mattering that much cuz of choice, abstraction layers, and universality – no missing features/capabilities. As far as their ability to bias us in a particular direction (as in variance in these things between people could make some people more mathy and others more artsy, or some people more angry and some more calm), while it's not exactly zero, I think it's tiny compared to how much culture and childhood and thinking about stuff matters. It's just a drop in the ocean. (This is also DD's position btw.)


curi at 8:17 PM on October 1, 2020 | #18196 | reply | quote

> stronger and different position than I thought you had.

What did you think I thought and what's the difference?


curi at 8:17 PM on October 1, 2020 | #18197 | reply | quote

#18196 And I don't think the variance between people is anything like intel vs ARM chips or windows vs. linux OS. Even that isn't such a huge deal, but genes created a particular hardware and OS design and variance is limited to be more minor and not break things. Variance isn't gonna be so huge as to create a drastically different design.


curi at 8:19 PM on October 1, 2020 | #18198 | reply | quote

#18154

> What sort of effect or influence do you have in mind, via what causal mechanisms?

I'm not sure about the causal mechanism, just that this is *an* effect and it's argued that it happened via evolution at the gene-level.

I think I might have some counterexample to the idea that genes don't play a significant role in thoughts. It's part of a bigger idea, though. I'll try and outline relevant parts of the video.

(I've bolded the key phrases)

- Lindybeige has a **theory on why women have breasts**

- He **explains why other theories aren't sufficient** (e.g. there's one idea that women have breasts to signal fertility, etc, and that theory compares humans to other animals like primates; this is refuted b/c other species have no *permanent* signs of fertility)

- There's a bit about the **EEA (Environment of Evolutionary Adaptedness) and evolutionary context** / selection pressures / social dynamics at the time (social dynamics here means like 'dynamics of hunter gatherer society')

- There's a (conjectured) **chain of reasoning and events** he goes through in early (modern) homo sapien development involving **secret menstruation and how sexes would 'react' for evolutionary advantage**

- part of that conjecture is **male reaction to sexual signals ~flipping** to avoid being unattracted to fertile women

- and this eventually ends with women having permanent breasts

It's that second to last part about male reaction ~flipping that I think might be a counter example.

The video: https://www.youtube.com/watch?v=oWkOvakd9Mo

The reason I think it's a counter example is that this would be a way genes significantly changed thoughts. (assuming ideas like 'she's attractive' and 'she's not attractive' fit the bill for what we're considering.)


Max at 8:27 PM on October 1, 2020 | #18199 | reply | quote

> - part of that conjecture is **male reaction to sexual signals ~flipping** to avoid being unattracted to fertile women

The idea of ~flipping is roughly:

- animals are attracted to symbols like swollen breasts / butt, particular inflammations, temporary colouring, etc.

- animals (all but humans) don't have breasts when they don't need them. They only grow them when necessary, and they're not swollen at other times

- modern women have ~swollen breasts *all* the time (there's some difference between lactating/not lactating but it's minor compared to other animals)

-- maintaining breasts costs resources, there's an evolutionary reason not to do it

- the male reaction to swollen breasts is to *not* be attracted b/c it means the female isn't fertile (this is true in other animals)

- human males around the time women developed permanent breasts had this reaction too (along with other things like fatter -> good -> more resources / better chance of children surviving)

- one evolutionary reaction could have been to like fix the 'pattern' for what males found attractive (e.g. breasts -> good now, fatter -> still good)

- but the *simplest* change necessary is just a binary 'not' - i.e. things that weren't attractive now are, and things that were attractive aren't

-- admittedly (thinking about it now) why didn't humans die out because malnourished women were selected over non-malnourished?

- so males had this gene flipped by evolution and breasts were attractive now
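The conjectured 'binary not' could be sketched as (a toy illustration of the simplest-mutation idea above; the function names and the single-signal example are invented):

```python
# Toy sketch: the conjectured simplest evolutionary change is a single
# logical NOT applied to the whole attraction pattern, rather than
# rebuilding the pattern signal by signal.

def old_attraction(signal):
    # pre-change pattern: swollen breasts read as 'not fertile' -> unattractive
    return signal != "swollen_breasts"

def flipped_attraction(signal):
    # one binary 'not' over the existing pattern
    return not old_attraction(signal)

print(flipped_attraction("swollen_breasts"))  # True: now attractive
```

Note the flip inverts *everything*, which is consistent with the worry above about previously-unattractive traits (like malnourishment) also becoming selected for.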

This sounds like a way genes had (and have) a significant role in thoughts.

Possible criticism: this is just an idea we get when we're young and some people change it, some don't, but it doesn't mean genes have a *substantial* role in affecting thoughts, just that like this one inborn(?) idea is different.

I marked inborn with a (?) because I'm not sure I'm using it right.


Max at 8:37 PM on October 1, 2020 | #18200 | reply | quote

> The reason I think it's a counter example is that this would be a way genes significantly changed thoughts.

It's useful to think through what sorts of genetic effects on thoughts are important and why.

E.g. being tall correlates with the thought "I like basketball" or "I want to be in the NBA" at age 25.

Genes did not evolve to have knowledge of basketball or the NBA. Height genes are just about height.

The causality here is cultural. Culture reacts to (partially) genetically controlled traits like height.

Similarly, culture has some reactions to e.g. hair and eye color, which genes have substantial control over (barring bleach, dye, colored contacts, etc).


curi at 8:38 PM on October 1, 2020 | #18201 | reply | quote

#18200 So once upon a time humans were animals. Apes or something. Not yet intelligent. And they had behaviors controlled by genes just like cats do.

Did humans get permanent breasts then or later (after intelligence)? I'm not clear on the claim/story yet.

Anyway, later, humans become human/intelligent. Then they have memes. And memes start taking over control of lots of stuff including sexual preferences, courtship behaviors, etc. Memes evolve faster than genes and have access to better control over adult humans – ideas are in a better position to affect behavior than protein design at ~birth is.

If humans evolved permanent breasts before memes, there's no real issue, right?

If humans evolved permanent breasts after memes, that'd be more complicated. Does Lindybeige claim or address that?


curi at 8:43 PM on October 1, 2020 | #18202 | reply | quote

#18200 Overall, you or we could go into more detail on this example, but maybe you'd be content to consider it enough of an unknown, with lots of uncertainty, that it's no reason to reject a model of how intelligence/minds/genes/etc work. I don't see that it's very important to look into this particular example more.


curi at 8:48 PM on October 1, 2020 | #18203 | reply | quote

> If humans evolved permanent breasts before memes, there's no real issue, right?

Agreed

> If humans evolved permanent breasts after memes, that'd be more complicated. Does Lindybeige claim or address that?

I can't find a reference to dates more specific than ~last 2.5 million years (the Pleistocene). If he did mention a more specific date I don't recall it and can't find it via some quick searches.


Max at 8:48 PM on October 1, 2020 | #18204 | reply | quote

> maybe you'd be content to consider it enough of an unknown, with lots of uncertainty, that it's no reason to reject a model of how intelligence/minds/genes/etc work. I don't see that it's very important to look into this particular example more.

Yeah, I'm content to do that. It's not clear it's a counter example (and even if it were there are lots of issues/unknowns still)


Max at 8:49 PM on October 1, 2020 | #18205 | reply | quote

The discussion about genes and intelligence above is discussed and written in:

https://youtu.be/BDwiP4lsC_4

and

https://youtu.be/1J6ECV9L11g


curi at 9:05 PM on October 1, 2020 | #18206 | reply | quote

Conversation tree so far (recent entries less refined than older ones): https://maxkaye.s3.amazonaws.com/2020-09-28-curi-genes-int-tree-exported-2020-10-05.pdf


Max at 6:55 PM on October 4, 2020 | #18230 | reply | quote

Tangent near the end of a patio11 thread:

https://twitter.com/patio11/status/1315157487633354753

> Lots of cryptocurrency projects think that there is a way for any part of their ecosystem to be done by non-professionals in the long-run and they are all fools.

> Miners, devs, promoters, capital, etc, will all be professionalized.


curi at 12:50 AM on October 11, 2020 | #18280 | reply | quote

#18280 I agree. There's a lot of naivety around 'decentralisation'.

There are caveats, though. Increasing the accessibility of some previously professionalised thing (e.g. arbitrage) can result in more people doing it--and at lower volumes. But, in those cases, the professionalisation is just moving from the person doing the thing to the programmer(s) maintaining the feature.


Max at 2:48 PM on October 11, 2020 | #18288 | reply | quote

(Tutoring Max #49) There are no conflicts of interest between rational men.

Talking with curi during Tutoring Max #49

Topic: There are no conflicts of interest between rational men.

----

## rough brainstorming

idea seems to be

- if people want to do good / make progress / improve something

- then that has to be compatible with objective reality

- reality is such that we can't choose the right path to make progress

- rational people will focus on a goal (which is not doing harm to someone particularly)

- and the method to get that goal has to be compatible with objective reality

... (idea feels unclear so I'm swapping brainstorming topic)

'possible' solutions

- violence

- compromise

- 'winner' pays 'loser'?

- auction -> one person no longer wants the job?

----

What is the scenario, what is the conflict, and why is it not fixable?

## scenario

Alice and Bob both want a particular job. They are both suitable applicants. There's only one job, so at most one of Alice/Bob can get the job.

## conflict

Alice/Bob are competing for a scarce resource. They might think that their life would be worse if they didn't get the job.

## fixableness

There are ways to fix it by introducing e.g. another position like the first, but is it fixable without introducing stuff?

Alice/Bob could talk and one could persuade the other it'd be better not to have it.

Fixableness has a time constraint -- knowing a solution might be available in the future doesn't help the problem now.

So for it to be 'fixable' we'd need a solution that generally applies to all situations like this, and we need to be able to apply the solution right away.


Max at 7:31 PM on October 15, 2020 | #18320 | reply | quote

#18320 It's important to think about scenarios in reality. Say the business owner, Joe, wants to interview both Alice and Bob, and then wants to hire Alice not Bob.

In what scenario does Bob get the job? What series of events? What exactly does Bob want to be different (now or in the recent past) and by what means would that change be achieved?


curi at 7:33 PM on October 15, 2020 | #18321 | reply | quote

#18321

> It's important to think about scenarios in reality. Say the business owner, Joe, wants to interview both Alice and Bob, and then wants to hire Alice not Bob.

This sounds like a situation where, if Bob knew Joe's thoughts, he shouldn't want the job. If Joe's already made up their mind, wouldn't that be a reason for Bob to spend efforts on other opportunities?

> In what scenario does Bob get the job? What series of events? What exactly does Bob want to be different (or or in the recent past) and by what means would that change be achieved?

Bob gets the job if Joe changes his mind, or Alice finds another job (or otherwise withdraws).

Joe might change his mind if he finds out something bad about Alice, or if it turns out Joe's idea of Alice was wrong. There could be lots of ways that happens, but it's not something that can be relied upon. Joe might also learn something new about Bob.

Generally it seems like either Joe or Alice would need to change their mind or learn something new for things to end up with Bob getting the job.

Bob wants Joe's opinion to change (the opinion that Alice is the better one to hire). Bob could do a really good interview and persuade Joe -- or something like the above could happen.

I guess something unexpected could happen too (like Alice getting hit by a bus) but I don't think Bob wants that so it seems pointless to expand on.


Max at 7:41 PM on October 15, 2020 | #18322 | reply | quote

#18322 It’s in both job seekers’ interests that jobs are given out according to the capitalist system where the business owner or his proxy decides who to hire. If he hires Alice, there’s no way Bob could have that job other than if a different system were in place. But that system would make everyone much worse off including Bob because it’d involve limitations on freedom, government meddling in the economy, pointing guns at people to get jobs from them, or something else bad.

People commonly have mutual interest that something is decided by a certain method which has good traits like being fair, free or rights-respecting. That a particular outcome goes against me doesn’t mean it’s in my interest to change the system itself. With capitalist hiring, I’m much better off applying for some other jobs than living in a society without a capitalist economy.


curi at 7:42 PM on October 15, 2020 | #18323 | reply | quote

If Joe is bad at hiring, that may be bad for me. I may get a worse result. But it's bad for him too. This isn't a conflict between me and Joe. He's trying to deal with life and hiring well. If he's doing it poorly, that's due to ignorance, lack of skill, etc., not due to what benefits Joe and what benefits me being in conflict.


curi at 7:44 PM on October 15, 2020 | #18324 | reply | quote

# 18323

> It’s in both job seekers’ interests that jobs are given out according to the capitalist system where the business owner or his proxy decides who to hire. If he hires Alice, there’s no way Bob could have that job other than if a different system were in place. But that system would make everyone much worse off including Bob because it’d involve limitations on freedom, government meddling in the economy, pointing guns at people to get jobs from them, or something else bad.

Re particularly:

> If he hires Alice, there’s no way Bob could have that job other than if a different system were in place.

One way Bob could have the job is if Joe had better ideas -- in the case Joe has mistakes in his thinking. That seems like it'd be compatible with the same system. If we're presuming Joe is rational, isn't that a somewhat high bar? I'm not sure everyone could measure up to it.

> But that system would make everyone much worse off [...]

I agree for lots of these possibilities. Systems that use violence to enforce rules on this sort of thing would be bad.

> People commonly have mutual interest that something is decided by a certain method which has good traits like being fair, free or rights-respecting. That a particular outcome goes against me doesn’t mean it’s in my interest to change the system itself. With capitalist hiring, I’m much better off applying for some other jobs than living in a society without a capitalist economy.

* this sounds like approximately: principles trump circumstance

* * it's better to be working within a good system than profiting in the short term from a bad system, even if a circumstantial outcome is superficially less good for you.

I agree with: a world with short term good outcomes from a bad system is worse than a world with a good system.

Do you think there are any other methods by which jobs could be handed out? Does Joe having better ideas count as another method?


Max at 7:53 PM on October 15, 2020 | #18325 | reply | quote

#18324

> If Joe is bad at hiring, that may be bad for me. I may get a worse result. But it's bad for him too. This isn't a conflict between me and Joe. He's trying to deal with life and hiring well. If he's doing it poorly, that's due to ignorance, lack of skill, etc., not due to what benefits Joe and what benefits me being in conflict.

Okay, I see how this answers the idea that Joe's ideas have something to do with a conflict of interests. It'd be in both your interests for Joe to be better at hiring if he was bad at it. But Joe can't magically get better. So Joe just is what he is in that role. It's better he make a free choice than be coerced or something. So any alternative system that coerces him is worse, and in any system where he has a free choice he'd act roughly the same anyway.


Max at 7:58 PM on October 15, 2020 | #18326 | reply | quote

> One way Bob could have the job is if Joe had better ideas -- in the case Joe has mistakes in his thinking. That seems like it'd be compatible with the same system. If we're presuming Joe is rational, isn't that a somewhat high bar? I'm not sure everyone could measure up to it.

Yes but having better ideas is also in Joe’s interest. The problem here is that good ideas are hard to come by and people aren’t perfect, not that Joe prefers bad ideas. So it’s not a conflict of interest. I also commented on this in #18324 which I don’t think you saw yet.

>> But that system would make everyone much worse off [...]

> I agree for lots of these possibilities. Systems that use violence to enforce rules on this sort of thing would be bad.

>> People commonly have mutual interest that something is decided by a certain method which has good traits like being fair, free or rights-respecting. That a particular outcome goes against me doesn’t mean it’s in my interest to change the system itself. With capitalist hiring, I’m much better off applying for some other jobs than living in a society without a capitalist economy.

> * this sounds like approximately: principles trump circumstance

> * * it's better to be working within a good system than profiting in the short term from a bad system, even if a circumstantial outcome is superficially less good for you.

How would Bob profit from a bad system?

If the system is e.g. you use bribes to get a job, then maybe he'd get this particular job (or maybe Candice or Dillon would get it, who knows). But he'd certainly run into the problem of "someone beat me out for the job I wanted" in a bribery-based system.

It's the same with a system of favors and friendships. It's hard for Bob to know he's the best connected applicant this time, even if he knows he has a stronger social network than Alice. And even if he would have gotten this job under that system, he'd miss out on others. It wouldn't solve the problem of Bob not getting every job he applies for.

Bob, if he's bitter, may not understand the purpose of having job applications. Why have more than one person apply for a job opening that is available only to one person? The point is to try to use some objective tests to find a good candidate. If Bob doesn't want that to happen, then he's giving up on earning jobs by merit as a lifestyle. And he's imagining a world where, what, only one person is allowed to apply for each job? What's that even mean? The King just tells you what job you can have? Or first come first serve?

> I agree with: a world with short term good outcomes from a bad system is worse than a world with a good system.

I doubt that any of the general purpose systems like "bribery" or "favors" for assigning jobs actually would offer Bob all the jobs he wants. They might well fail to give Bob this particular job. They might well not only deny Bob this job but make it much harder for him to find an alternative one.

But those are generic, principled systems, even if the principles suck. What about a biased system? What about a system where Bob is in charge of everything? Would *that* be in Bob's interests? Should people want to be a king?

> Do you think there are any other methods by which jobs could be handed out? Does Joe having better ideas count as another method?

I don't know a better system than capitalism/freedom/property-rights/etc.


curi at 8:00 PM on October 15, 2020 | #18327 | reply | quote

#18327

>> I agree with: a world with short term good outcomes from a bad system is worse than a world with a good system.

> I doubt that any of the general purpose systems like "bribery" or "favors" for assigning jobs actually would offer Bob all the jobs he wants. They might well fail to give Bob this particular job. They might well not only deny Bob this job but make it much harder for him to find an alternative one.

> But those are generic, principled systems, even if the principles suck. What about a biased system? What about a system where Bob is in charge of everything? Would *that* be in Bob's interests? Should people want to be a king?

I think it's rational to want systems which can be agreed upon by everyone. Sort of like a 'lowest common denominator'. I don't think rational people want a system that's unfair--like Bob being in charge of everything.

I don't think people should want to be a king. One reason is that if I wanted to be a king, and was willing to do necessary things to achieve that, then I should expect other people to do so too. That just ends in violence, etc. Another reason is that if we were all kings it would be like having a billion city states, which would suck b/c we'd end up like subsisting.

There are reasons based on principles too, like being a king means using force to get your way, which is bad. But not everyone agrees on those. I think people more generally agree on practical stuff like 'if we all did that we'd all have nothing'. That's why I chose to write the two practical reasons.

>> Do you think there are any other methods by which jobs could be handed out? Does Joe having better ideas count as another method?

> I don't know a better system than capitalism/freedom/property-rights/etc.

I guess *all* other systems have to be better or worse than that. There's no orthogonal direction. I'm unsure if there are things to consider other than what we already did: stuff that looks like a conflict but isn't (e.g. Joe's ideas), and alternate systems for distributing jobs.


Max at 8:12 PM on October 15, 2020 | #18328 | reply | quote

#18328 If Bob wants to be King, then he isn’t concerned with mutual benefit. He’s creating conflicts of interests by pursuing policies to benefit himself at the expense of others. This will result in rebellion. It gives people incentive to kill, exile or imprison Bob. It gives people incentive to work against Bob, undermine him, and make his life harder. This is actually worse for Bob than peaceful, harmonious capitalism would be.

And if Bob is to be King, how will he achieve it? A violent revolution in which he might perish or be betrayed by one of his lieutenants who wishes to be King himself?

And if Bob is already King, how does he stay in power? Secret police? Dictators often die. It’s a risky job. And if one has the skill/luck/capability to win the contest for dictator, why not put those same energies into a business instead? Bob could have been better off as a billionaire than a dictator. In general, even when crime pays, it pays less than the market rate for all the work/skill/risk it takes. Because it’s easier to make a profit when you collaborate with people than when you fight with them. It’s easier to profit when other people’s actions are helping you and making you more successful than when their actions are working against you and subtracting from your success.

And being a violent dictator or criminal leader requires rationalizing that to yourself and thus alienates you from reason and good ideas.


curi at 8:12 PM on October 15, 2020 | #18329 | reply | quote

#18328

> Another reason is that if we were all kings it would be like having a billion city states, which would suck b/c we'd end up like subsisting.

I forgot to mention: the other option with all of us being kings is basically capitalism/freedom/property-rights/etc, anyway.


Max at 8:14 PM on October 15, 2020 | #18330 | reply | quote

I'm stuck on something. It's like there are two ideas that feel circular but they oppose each other.

I'm worried there's like a tautology / circular reasoning b/c of the 'rational men' thing. Wouldn't rational men always agree on things (eventually) anyway? So the system doesn't have anything to do with the lack of conflict. But people often aren't rational, so doesn't that mean there might be a system which is better than capitalism?

self-commentary: saying *people aren't rational -> there could be something better than capitalism* is circular b/c the idea of something being better than capitalism was the reason for saying ppl aren't rational.

(note: I'm not really sure this is circular but I'm getting too hung up on it)


Max at 8:29 PM on October 15, 2020 | #18331 | reply | quote

#18331 I don’t think the “rational” qualifier is required any more than the sometimes-used “long term interests” qualifier. It’s in people’s best interests to be rational and to consider their long term interests, not merely the short term.

The liberal claims re harmony of interests don’t rely on unlimited knowledge. They are not like “if men knew everything, there’d be harmony”. They are about avoiding conflict now. Understanding why you shouldn’t hate competitors for a job is achievable today given currently available knowledge.


curi at 8:29 PM on October 15, 2020 | #18332 | reply | quote

#18332

> #18331 I don’t think the “rational” qualifier is required any more than the sometimes-used “long term interests” qualifier. It’s in people’s best interests to be rational and to consider their long term interests, not merely the short term.

Yeah okay. The next thing I started thinking about was whether there was a conflict of interests between ppl who try to be rational but aren't perfect.

I'm not sure bringing systems into the discussion is necessary to make the main point. Like: if you pursue rational choices then there aren't any deal-breaking conflicts you have with anyone else who pursues rational choices. That seems fairly self-evident.

> The liberal claims re harmony of interests don’t rely on unlimited knowledge. They are not like “if men knew everything, there’d be harmony”. They are about avoiding conflict now. Understanding why you shouldn’t hate competitors for a job is achievable today given currently available knowledge.

Hmm, maybe systems are necessary to bring into it. Like if two people are pursuing rational choices but think there's a conflict, then there need to be some rules by which they evaluate the situation. The system is like the equilibrium everyone can agree on, and since there's only one: it's special.

I'm not sure I'm properly understanding it, though.


Max at 8:36 PM on October 15, 2020 | #18333 | reply | quote

#18333 Can you come up with some other scenarios, besides competing job applicants, with some sort of apparent conflict of interest?


curi at 8:37 PM on October 15, 2020 | #18334 | reply | quote

Liberalism/capitalism allows people to live in a commune and share stuff if they want to. There are many rival ideas about the best ways to live in a peaceful world but those are sub-types of liberalism. The standard terminology is that liberalism is the system of peace and freedom, and its rivals are the systems that reject peace and social harmony in some way.


curi at 8:39 PM on October 15, 2020 | #18335 | reply | quote

#18334

> Can you come up with some other scenarios, besides competing job applicants, with some sort of apparent conflict of interest?

* one banana tree but two hungry people (and not enough bananas)

* multiple candidates running in the same election

* rich guy in a suit walking past drowning person (I'm not sure about this one)

* limited edition consumer goods

* competing for entry into a tournament (like the tetris world cup where the top 50 ppl go through)

* two kids who want particular gifts but their parents don't have enough money for both gifts


Anonymous at 8:43 PM on October 15, 2020 | #18336 | reply | quote

#18336 OK and can you provide solutions to those? Why isn't each one a conflict of interest?


curi at 8:43 PM on October 15, 2020 | #18337 | reply | quote

What do you now think of these scenarios? Got some solutions re them being potential conflicts of interest?

- We both want the same diamond.

- We both want the same computer.

- We both want to marry the same woman.

- We both want the same slot on the manned mission to the moon.

- We both want to be President (of the same country).

- We both want to be the top commander of the army.

- I want to speak my mind but you don’t like what I have to say and would prefer I shut up.

- I want to kiss you but you don’t want to kiss me.

- I sell printers and you sell printers and we’re competing for customers.


curi at 8:58 PM on October 15, 2020 | #18338 | reply | quote

try working on a discussion tree re conflicts of interest. you don’t have to include everything. you can pick important parts or paraphrase stuff if you want. or go through and do the entire discussion text. it’s up to you what you think would be useful.


curi at 8:59 PM on October 15, 2020 | #18339 | reply | quote

Initial answers to some conflicts of interest questions (TM#49)

also posted to: https://xertrov.github.io/fi/posts/2020-10-18-notes-on-conflicts-of-interest/

Can all these be resolved?

> - We both want the same diamond.

> - We both want the same computer.

> - We both want to marry the same woman.

> - We both want the same slot on the manned mission to the moon.

> - We both want to be President (of the same country).

> - We both want to be the top commander of the army.

> - I want to speak my mind but you don’t like what I have to say and would prefer I shut up.

> - I want to kiss you but you don’t want to kiss me.

> - I sell printers and you sell printers and we’re competing for customers.

## principle

Conflicts of interest (CoIs) seem to exist sometimes. When considering rational ppl or trying-to-be-rational ppl, those conflicts don't actually exist--they're illusions which can be resolved. They look like conflicts because we're ignoring the bigger picture. Ppl involved in a CoI shouldn't want to 'win' via a system which uses force to get an outcome. They should want a system that's fair and works generally. A system with universality.

Systems which use force or unwritten rules are not preferable to free-market situations b/c they have adverse consequences outside of one's control (e.g. violence, 'winners' being decided by something like physical attractiveness or social status, etc). The outcomes -- when decided by alternative systems -- are worse for the ppl involved. Reasons include: bad distribution of resources, outcomes being based on perceived problems that a person can't solve (e.g. not handsome enough), harm being done (e.g. violence), etc.

## We both want the same diamond.

Expansion of situation: we are both in a shop buying an engagement ring for our respective soon-to-be fiancées, and want the same diamond (diamond-A).

1. The initial 'solution' is that the shop sells diamond-A to whoever asks for it first. Person-A gets it. This is okay because both ppl can agree to a first-come-first-serve model (which is typical and expected).

2. Maybe person-B *really* wants the diamond. They can offer to buy it from person-A. This is okay because it's consensual trade where both ppl are better off.

3. Say person-A says they want to buy it but hasn't paid, while person-B has the cash now. The shop could work on a first-come-first-serve basis where the transaction is the important moment (who can pay first), so person-B gets it. This is an agreeable system.

4. Maybe there is another diamond (diamond-B) that one of the ppl is happy with, so person-A gets diamond-A, person-B gets diamond-B.

In each case an alternative system of distribution (based on attractive looks, or social status, or bribes, or whatever) is not preferable -- it's a worse society to live in.

## We both want the same computer.

Say it's a rare old computer so there's only one of them and it's not fungible. We can agree on a system which is fair, like an auction, and proceed on that basis.

## We both want to marry the same woman.

She should choose who she wants to be with (if either of us). We shouldn't want to be with someone who doesn't want to be with us (that would be bad for both me and her). We should both want her to be able to consider both of us. If I had an advantage (e.g. knew her earlier) and tried to stop her meeting you b/c I thought she'd prefer you, then it means I have to keep that effort up WRT you and anyone else she might meet. So eventually I'd need to be coercive or forceful to do that. Hurting the person you want to marry is a shit thing to do (and a bad way to live long term), so I shouldn't want to prevent her evaluating other potential partners. I should actually be in favour of that because it means problems are apparent sooner rather than later. Living in a relationship where big problems *will* occur and that can't be resolved (e.g. she changes her mind about wanting to marry me) is bad for me, so if there will be problems I should want to know about them as soon as possible.

## We both want the same slot on the manned mission to the moon.

Say there are 3 crew slots and 2 crew members have been decided and are better candidates than us (at least for those slots, like the other crew have skills we don't).

### notes on alternatives to free-market / merit based judgement

- We shouldn't want to be chosen if that would jeopardise the mission -- it being successful is more important. We can agree that the most qualified person should be chosen, or the person otherwise chosen s.t. the mission has the greatest chance of success. Maybe we're equally qualified, though.

- We don't want a system where one of us is harmed (e.g. I hurt your family to keep you out of the mission). If I wanted that it could mean my family (or me) is hurt, which I don't want.

- We don't want the mission to be jeopardised for political reasons (or other parochial stuff), so we should be in favour of selection criteria which are publicly and politically defensible (and just).

- We don't want a system where one of us is prevented from doing stuff in the future like other moon missions.

- We don't want a system where NASA (or whomever) regrets their decision (e.g. because it was made via nepotism or whatever).

- We don't want a system where we hate each other because that could mean we can't be on the same future mission or otherwise end up excluded from other stuff.

### solutions?

- We can agree on a system based on merit

- We can agree on a system where NASA maintains a suitable body of astronauts (like a minimum number of astronauts kept in reserve), so some rotation is necessary (maybe one of us went on the last mission so the other should go on this one)

-- We can also agree on a system which takes into account future rotations, e.g. flip a coin and one of us goes on this one, and the other goes on the next mission

- We can agree on a system that doesn't bias one of us for external reasons like social status (if that happened, all missions would be worse off and have a lower chance of success)

Operating under these sorts of systems is preferable to winning the slot under a different system. If it was some different system then how could we be confident that our crew is the best crew possible?

## We both want to be President (of the same country).

Note: curi and I sort of started discussing this at the end of *Tutoring Max #49*.

We should both be in favour of a good system for selecting a president. We can agree on important features such a system should have, like not favouring one of us. We should want a system where the victory conditions are clear and compatible with our values. We should want a system where we could lose b/c it's possible the other person is a better choice regardless of what we believe.

The conflict only exists when we have bad, irrational systems for choosing a president. If the system is bad then we can both agree changing the system is more important (and subsequently find a system which satisfies both our goals).

If there are other candidates, we should prefer those candidates who will institute a better system to those who won't. If there are perverse mechanics in the selection system (e.g. like those in first-past-the-post when you have 2 similar candidates running s.t. it *decreases* the chance of a favourable outcome) then we should both be in favour of cooperating to maximise the chance of one of us winning over bad candidates. We can find such a system.

We could also run a pre-election or something to decide which of us runs in the main election (similar to primaries in the USA).

## mid exercise reflection

I worry that I'm missing something. Are these adequate answers? Do any of the apparent conflicts persist after what I've written?

I think these are hard problems to write about -- in some ways -- b/c there are always unknown and unspecified details which could be chosen to make the situation a 'nightmare situation' (as curi put it in TM#49).

Going to have a think and maybe come back to this later.


Max at 9:25 PM on October 17, 2020 | #18349 | reply | quote

Some thoughts on good/bad error msgs. I think they're important. I found a surprising overlap with *helping the best ppl or helping the masses*.

context: I'm thinking about error feedback and how it affects groups of people / group efforts in a general sense, and I'm also thinking about the sorts of error msgs programmers get in the regular course of programming and how it specifically helps/hurts software projects. The thoughts below are a mix.

* are error messages a good way to organise issues? (e.g. in software dev).

* they have an important role: they guide ppl who know less (than the developers, or other community members, etc).

* if error msgs were a bad way to organise issues then there must be a better alternative system. What would such a system be like, relatively speaking?

- it would put more burden on the ppl affected by errors b/c it's harder to know/learn how to report and solve the errors

- it would mean responsibility for the quality of error reporting would be shifted towards the shoulders of newbies

- such an alternative system would treat the relevant (preventable!) errors less seriously

* why could that be good?

- it'd mean there was a higher bar for engaging with top tier ppl

- it filters out ppl who are not able to understand the problem at least enough to figure out how to begin to deal with it

- if the best ppl don't know how to prevent relevant errors then isn't it better for them to focus on solving those problems rather than helping ppl who aren't as valuable?

* why could it be bad?

- higher bar to error correction -> less error correction

- easy to discourage ppl and end up reinforcing static memes / driving ppl away

- if the best ppl didn't know how to prevent the relevant errors then they end up working on the problem anyway; makes sense that there's an equilibrium here; after all, ppl are voluntarily participating on both sides.

relevant to:

- helping the best ppl or helping the masses

- error msgs and ~responsibility of senior members

- there is no one constant set of behaviours that makes sense WRT helping the best ppl vs masses, what matters is context. is it a good time to help one or the other? if lots of ppl have really bad ideas then it's probably worth helping the best ppl -- so we can find a good soln to that problem. conversely, if we don't have any great ppl at that time, or are otherwise short of *great* opportunities, then there's more utility helping the masses. there needs to be fertiliser for future generations, but also nourishment for current ppl in their prime. those *great* opportunities can be vicarious, ofc. Man's first journey to the Moon was a journey shared by a Nation.

- there's a big question raised by this: how should we react to *learning* of a great opportunity?

Finishing up: what happens if someone goes to an effort to make error msgs as good as possible?

- organisation gets better b/c the error messages are better suited to the associated errors

- it gets easier for ppl to help with / do error correction b/c the msgs/explanations match the contextually best ideas more closely and are more reliable to reason about.

- exponential/geometric increase in effectiveness of relevant key ppl -- their time can be better allocated, delegation gets easier, etc

- mutually beneficial for all parties. (note: this relies on the ability to improve error msgs and the right ~economic context to make it the easy choice. OTOH I think that's reasonably common. most non-optimum situations don't hurt much and can be easily controlled via the 2nd-derivative (~acceleration). if there's a bit too much work on good error msgs then you can just reduce the hours per week by 10%; it can be gentle without much harm. the harm I mention here is wasted resources in a generic sense.)
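
To make the good-vs-bad error msg contrast concrete, here's a hypothetical sketch (the function, file name, and config keys are all invented for illustration):

```python
# Hypothetical config loader -- names and keys are made up for illustration.

def load_port(config: dict, source: str) -> int:
    # A vague message would be: raise ValueError("bad config")
    # -- the person hitting the error learns nothing about what to fix.
    if "port" not in config:
        # A descriptive message names the source, the missing key, and a fix.
        raise ValueError(
            f"{source}: missing required key 'port' "
            f"(found keys: {sorted(config)}); add e.g. port: 8080"
        )
    port = config["port"]
    if not isinstance(port, int) or not 1 <= port <= 65535:
        raise ValueError(
            f"{source}: 'port' must be an integer in 1..65535, got {port!r}"
        )
    return port
```

The descriptive version does the work up front, on the developer's side, so the newbie hitting the error doesn't have to -- which is the shift in responsibility I mean.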

## clarifying stuff

I didn't put a huge amount of thought into particular word choices because they felt difficult and I didn't want to ruin the flow. Here are some clarifications:

- *responsibility* as in *~responsibility of senior members*: I don't mean anything like an obligation, but if there was a clear moral decision then it'd line up with that.

- *2nd-derivative (~acceleration)*: controlling the rate-of-rate-of-change is useful if you want to control the outcomes of some (simple enough) system, and acceleration is a reasonably common way of talking about that.


Max at 3:37 AM on October 20, 2020 | #18365 | reply | quote

I edited the OP to add the Max tutoring playlist link https://www.youtube.com/playlist?list=PLKx6lO5RmaetREa9-jt2T-qX9XO2SD0l2


curi at 5:38 PM on October 22, 2020 | #18396 | reply | quote

some thoughts on project planning - Max

Context: project planning differences between doing projects yourself vs with a team

When I do projects myself, I work with a pseudo-JIT planning method. For the most part, the way I do prioritisation is based on immediate dependencies. I can also change focus with low overhead (like work on UI a bit, then backend, then UI, then backend)

Team projects are different. A lot more of the dependency graph needs to be defined upfront. There's a large overhead in transferring knowledge and changing who's working on what.

Does this difference matter for project planning? I suspect the methods to avoid fooling oneself mean the outcome is fairly similar. Like my naive method of JIT prioritisation leaves a lot of room to work on things that aren't that important -- to fool oneself.
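
A minimal sketch of the JIT-style prioritisation I mean (task names are made up): given immediate dependencies, the next candidate tasks are just the ones whose dependencies are all done.

```python
def ready_tasks(deps: dict, done: set) -> list:
    """Tasks not yet done whose immediate dependencies are all complete.
    JIT-style planning: just pick the next task from this list."""
    return [t for t, reqs in deps.items()
            if t not in done and all(r in done for r in reqs)]

# Toy dependency graph: schema, then backend, then UI.
deps = {"ui": {"backend"}, "backend": {"schema"}, "schema": set()}
print(ready_tasks(deps, done=set()))       # ['schema']
print(ready_tasks(deps, done={"schema"}))  # ['backend']
```

The team version is roughly this plus defining the whole graph upfront and paying the handover overhead, which is the difference I'm pointing at.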


Max at 2:39 AM on October 26, 2020 | #18505 | reply | quote

(goal, context) pair

https://curi.us/2390-what-to-sell-or-give-away

This is like a (goal, context) pair. It's not necessarily specific, like there's a lot that's implied or possibly known from wider context, but in essence it's a GC.

I think the idea of a partial IGC is nice and makes some sense. Like the idea of 'when you've got a hammer every problem is a nail' is expressing an IC pair; there's a method and a context but no idea besides using the hammer like normal.


Max at 7:05 AM on October 29, 2020 | #18525 | reply | quote

harmony of interests - don't forget about unknowns

continuing harmony of interest stuff:

In the last tutorial (50?) I think curi and I talked about some harmony of interests cases.

I was playing the start of Breath of the Wild -- the old guy offers to trade you something if you go into a dungeon and get the treasure. He then reneges on the offer but says he'll do it if you do 3/4 more dungeons.

Why wouldn't it be in Link's or the old guy's interest to be violent? (besides the fact it's a game, etc)

With an alternate system, like violence / forcible redistribution, you can get hurt a lot more by unknowns. Say the old guy tries to jump Link and steal the treasures after he finishes all the dungeons. Well, if the old guy didn't know any better then he'd, like, murder Hyrule's saviour.

This is pretty extreme as far as examples go, but I suspect there's a general principle. DD's morality conjecture seems like a good foundation (the conjecture being: the only moral imperative is not to destroy the means of correcting mistakes -- BoI, "A Dream of Socrates"). The idea of all parties agreeing on the methods of distribution / conflict resolution (and being rational, thus mostly honest) means none of the parties are aware of mistakes unique to that method.


Max at 7:17 AM on October 29, 2020 | #18526 | reply | quote

#18525 yeah you can brainstorm solutions to a goal/problem (or set of similar goals) in a context. you generally want a goal/problem before a solution. solutions/ideas in search of problems don't work well.


curi at 12:34 PM on October 29, 2020 | #18527 | reply | quote

in-progress idea on (in)dependent status of variables as yes/no property of explanations [Max]

at the end of tutoring max 51 I mentioned a partial idea I'd had about dependent/independent variables (or factors) in an explanation and whether we could like test for it. Since it's a yes/no property, if we could test for it then we might be able to use ideas around this to refute some class of explanations. IDK whether that will pan out, needs more work as per end of TM51.

(the following is unedited)

PS. I have been thinking about the titles of comments; particularly whether I should include my name when I post to max-microblogging. My RSS reader (a chrome addon atm) shows one big list and doesn't separate curi's posts from comments with titles (at least by default). If anyone else has a similar set up then adding my name to the title will make it easier to filter post/comments. IDK if I'll keep doing it. Feedback welcome.

(it doesn't bother me that the RSS reader doesn't separate posts and comments. I'm trying to read everything atm, though I do have like 100-200 unreads atm. Making some steady progress working through the backlog tho.)

---

is dependent/independent variable status a yes/no property of explanations? can we test for it?

inspiring thought: emulating a video game and selecting different options/hacks. are they dependent or independent? how would you test without knowing the explanations for what they do?

is there a way to test for dependence/independence of variables, e.g. when making any particular measurements?

- relies on (good?) theories of measurement

Educated guess: dependency between variables means most of the time things should look correlated. It takes very particular measurements, or measuring very particular behaviour, to get something that *looks* independent. Also, independence between variables means most of the time things should look *not* correlated, and only particular, specially chosen inputs make it look dependent.

When an explanation defines some system, the factors / components at play will have some dependence/independence relationship. Whether two variables are independent will depend on the minutiae of the explanation, but it should be yes/no. e.g. `f(x,y) = x + y` is dependent on both x and y. We can't change that by introducing more terms or by adding square roots or other things.

Do we have a way to test for dependence/independence? Is that even useful?

- How might it be useful?

-
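
Here's a rough sketch of the sort of black-box test I'm gesturing at, assuming we can run the system with inputs of our choosing (the functions are toy examples). Hold one variable fixed and vary the other: a changed output demonstrates dependence, while an unchanged output only *suggests* independence (we might have missed the specially chosen inputs).

```python
import random

def f(x, y):
    return x + y

def looks_dependent(fn, arg_index, trials=100):
    """Vary one argument while holding the other fixed.
    If the output ever changes, fn depends on that argument.
    If it never changes, fn merely *looks* independent of it."""
    for _ in range(trials):
        fixed = random.uniform(-10, 10)
        a, b = random.uniform(-10, 10), random.uniform(-10, 10)
        args_a = (a, fixed) if arg_index == 0 else (fixed, a)
        args_b = (b, fixed) if arg_index == 0 else (fixed, b)
        if fn(*args_a) != fn(*args_b):
            return True
    return False

print(looks_dependent(f, 0))                   # True: f depends on x
print(looks_dependent(lambda x, y: y * y, 0))  # False: output ignores x
```

Note the asymmetry: a `True` result is a refutation of independence, but a `False` result is not proof of independence -- which fits the idea that we could refute some class of explanations but not confirm one.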


Max at 9:14 PM on October 29, 2020 | #18533 | reply | quote

feedback request: "Why I Live" draft 1 [Max]

I'm requesting feedback for the following draft. It's not done, so I've left some of my own comments as quotes below -- just the most important stuff which came to mind.

My biggest concern is that someone might read this and feel bad about choices they've made which are actually okay. If you have ideas about how to avoid this or whether it will happen I'd like to hear them. This is written because it's how *I* see *myself*, not because it's how I think everyone else should see themselves and each other.

Some of the language is a bit fancy. I'll do more editing in future drafts to simplify it, but suggestions/crits are welcome.

The draft is as follows:

# Why I Live

Existing matters. Particularly: *our* existing matters. Existing--as a *human*--is special, unique, and full of potential. We are--or at least, can be--cosmically significant.

My choices--today and in the coming decades--must have *some* impact on humanity's future. My choices *today*, more than those of any other moment, have the greatest leverage in influencing the manner and magnitude of that impact. Tomorrow, tomorrow's choices will have the most leverage. If there are important choices to be made, perhaps I should pursue an understanding of them with some urgency.

Can my choices be so significant that they make an appreciable difference in the volume, quality, and proximity of important milestones in the future of humanity? That is, they contribute to a future with *more people* leading lives of *higher quality* (both std of living and std of ideas) and *achieving* important philosophical and technological *progress sooner*.

If one's choices could matter that much, how could one possibly *know* that? Humanity and the route we take into the future are, together, like a giant snooker table loaded with hyperactive billiard balls and a thousand collisions for each single choice any person ever makes. Unpredictable, chaotic, immeasurably complex. *Everyone* will have *some* choices they make that will impact our collective future in a significant way. However, most people will never know which choices were the special ones. Most people's choices--the ones that end up mattering--will be special by *luck*. Their other choices--the ones they *hoped* would matter--will end up swallowed and forgotten by the passage of time.

> I need to remember to answer the questions I set up here. Currently I don't actually answer them, though I intend to.

> The snooker table analogy is energetic (which I wanted) but doesn't flow super well and is maybe over the top or dominating the paragraph w/ its length.

> I'm not sure if I should avoid answering 'yes' to the 'can my choices …' question. I think it might work better to answer (and explain) it later.

## Choices

Imagine yourself at the divergence of your futures.

To the left is a future of hit and miss choices, erratic legacy, and the fog of cosmic uncertainty. However, you can have a straightforward life. Your choices need not carry the epistemic burdens of depth, nor urgency, nor consequence.

To the right is *the alternative*.

Not everyone can take the right path. But some people can. Some people have. Maybe I can. And maybe you can, too.

Before you think about which path to attempt, *what would it mean to choose poorly?*

What if you chose the right path but you fell short? What difference would that make compared to choosing left? *At worst*, realistically, *there would be no difference*; or at least no difference worth considering here.

What if you chose the left path, but you could have chosen *the alternative … **and succeeded?*** Would your descendants look back and think "they should have known better"? Would there even be descendants to look back? What good purpose could there be in avoiding greatness? What hope does a future of greatness-aversion have?

Can choosing mediocrity be evil? Certainly not always. Maybe sometimes.

The reality is that you are presented, not with two paths, but infinitely many paths every moment of every day. You are not constrained to a dichotomy of greatness or mediocrity. You are the beginning of a new infinity -- if you choose to be.

> I think I'm mostly okay with this section -- at least on this pass. I'm not sure about the "Can choosing mediocrity be evil?" bit.

## What About Failure?

I used to be worried about failing. "Could it be that I will spend my life in vain pursuit of greatness and progress?" If the past half decade had been different, that might have become true. I know better, now.

Contribution to the future--to *progress*--is not zero sum. No thinker can contribute to progress if they are cut off from civilization. The credit for the impact of a great thinker's choices must be shared. As a light-cone constrains the causes and effects of an event, an epistemic idea-cone constrains the prerequisites and consequences of a great thinker and their ideas. All those people inside the idea-cone share the credit. Without shoulders to stand on, an otherwise great man is blind. Without feet to stand on his shoulders, an otherwise great man is dumb.

The acts of pursuing, supporting, and nurturing greatness are noble and honourable deeds.

> I think I will expand on this last paragraph/line. It's okay to do 1 or 2 or 3 of those things. Arguably it's hard to avoid doing all of them if you spend time improving your thinking and know about FI / CF.

> This section feels like it might be a bit out of place. I like the middle paragraph. I wrote the middle paragraph first during brainstorming and then decided it needed a section. So I picked the title and wrote the first paragraph around paragraph 2. I like the last line, but I think it needs elaboration.

## So.

What is my choice? Do I choose a future where my choices matter? Yes or no?

I do. And I will continue to.

I refuse to retreat.

> I originally had paragraph 1 in the 2nd person, which might not go so well with the vibe. Originally:

> > What is your choice? Do you choose a future where your choices matter? Yes or no?

> I'm on the fence about stuff like "I refuse to retreat". On the one hand I think it has some of the weight I want, but it feels maybe a bit too close to like a war/violence theme. That can be inspiring for ppl sometimes ("never give up, never surrender" sorta thing), but maybe there's a better thing to say.

------

## some leftover notes

* end game for universe / civilization / intelligence?

** dunno, but ~100 years ago humanity debated whether other galaxies existed

** today, there are multiple conjectures--with disparate mechanics--accounting for the origin and nature of the universe (or at least some part of these things).

** what ideas will we have in another 100 yrs? 1k yrs? 10k yrs?

*** yeah, I'm not going to worry about the endgame. There's lots of time to figure that out. There isn't much time for some other things, though.

* i want to put together a philosophy intro pack for new software teams.

** I'm worried that most ppl would react badly to it if I just got a bunch of curi's essays together.

** but I think there's significant and important stuff to learn.


Max at 12:56 AM on October 30, 2020 | #18534 | reply | quote

#18534 I had an initial goal when I started this that I didn't mention. I intend to include the conclusion (at the end or otherwise) that working on my writing is important and crucial. I'll integrate that in a future draft.


Max at 1:13 AM on October 30, 2020 | #18535 | reply | quote

Apparently post-mortems are common/standard in cyber incident response (sub-branch of infosec).

thinking about it now: I followed the gitlab outage in 2017 closely since I was using it at the time. They had a good post-mortem process from memory.


Max at 2:09 AM on October 30, 2020 | #18536 | reply | quote

#18534

> Not everyone can take the right path. But some people can.

What's the difference between people who can take the right path and people who can't?

> What if you chose the right path but you fell short? What difference would that make compared to choosing left? *At worst*, realistically, *there would be no difference*; or at least no difference worth considering here.

What does it look like to choose the right path? If it looks like spending all one's time on FI and ignoring one's spouse and kids, then that is a difference worth considering.

How do you tell which is the right path for any given choice?

It seems like you are facing a choice in your life, but it's not clear to me what that particular choice is about.


Anne B at 11:22 AM on October 30, 2020 | #18538 | reply | quote

#18536 Yeah post-mortems for computer security breaches or web service outages help convince ppl u maybe know what ur doing and won't have the same thing happen to you again. They can help restore/preserve trust. I think that might be why they're common as public blog posts.


curi at 1:46 PM on October 30, 2020 | #18539 | reply | quote

> PS. I have been thinking about the titles of comments; particularly whether I should include my name when I post to max-microblogging. My RSS reader (a chrome addon atm) shows one big list and doesn't separate curi's posts from comments with titles (at least by default). If anyone else has a similar set up then adding my name to the title will make it easier to filter post/comments. IDK if I'll keep doing it. Feedback welcome.

> (it doesn't bother me that the RSS reader doesn't separate posts and comments. I'm trying to read everything atm, though I do have like 100-200 unreads. Making some steady progress working through the backlog tho.)

Sounds like you should get a better reader.

Please don't put your name in comment titles just to try to hack how it shows up in a particular RSS reader that, for some reason, ignores the author field, and which may be fixed later and which maybe no one else is using. If you input bad metadata it makes stuff less future proof. It's harder to change things around because the data in some fields isn't what it's supposed to be. E.g. I might decide to display author names at the start of all or long comments, and then if you duplicate the author name in the title and author fields you could end up with that info twice, adjacent.

Author names already work in other readers. The data is in the feed fine already. E.g.:

And there are separate feeds for posts and comments so idk why they're merged for you.


curi at 6:47 PM on October 30, 2020 | #18547 | reply | quote

#18547 yup, good points. Will fix stuff on my end.


Max at 7:53 PM on October 30, 2020 | #18548 | reply | quote

#18538 Thanks for feedback Anne.

> What's the difference between people who can take the right path and people who can't?

In hindsight `can` is the wrong word. I changed the verb a few times in that sentence -- initially it was 'might', i.e. "Not everyone might take the right path. But some people might." I think 'might' is better than 'can' but I'm still not happy with the phrasing. It's still a bit different from what I wanted to say there.

> What does it look like to choose the right path? If it looks like spending all one's time on FI and ignoring one's spouse and kids, then that is a difference worth considering.

I don't think it needs to be anything hard-and-fast like that. There are lots of options.

One way to look at it is: early life is super important for ppl b/c so many ideas and behaviours get adopted/set at that point. I think one reason I did so well at maths in high school was that I'd been introduced to good ideas about the world being consistent, explicable, etc, and some good ideas about problems, challenges, and that perseverance helps break through intuitive barriers and stuff.

Is raising a kid really well a good thing to do? Yeah, it can be super valuable. It's also a task that's not something we get for free along the way (just getting better at philosophy and learning in general won't automatically make you a good parent; effort on TCS etc still needs to be done). So progress can be made in parenting (like TCS was) that's a direct philosophical contribution. In some ways I guess that for some ppl at some point "spending all one's time on FI" can actually overlap like 80%+ with doing parenting and family stuff.

> How do you tell which is the right path for any given choice?

IDK about generally handling specific options -- well I have ideas but I'm not confident about them yet. I was trying to get at the idea that there's a quality to good paths to take through life.

> It seems like you are facing a choice in your life, but it's not clear to me what that particular choice is about.

Maybe. There are a few options for things that come to mind here, but I think that might be a bit of a distraction WRT the essence of the article. Or maybe I can look at my prioritisation problems as that choice -- that's what I was thinking about before I started writing the post.


Max at 6:51 PM on November 1, 2020 | #18555 | reply | quote

reflections on 'Why I Live' draft & idea in general

Some reflections on #18534

## title

The title ("Why I Live") is at worst dishonest and at best inaccurate. In reality my actions don't always line up with what I said in the post or the things I was planning to add in future drafts. A better title would be something like "what I want to align my life to". It was dishonest b/c I was essentially claiming to make better choices than I do. Some of my choices line up with the post, but lots don't. Not just in minor ways, but like major ways. Why procrastinate from ~everything to binge a game for like 20 hours? That's choosing the crappy path.

## context before writing

I was thinking about what curi and I had talked about in tutoring 51 + earlier stuff about goals. I realised that I hadn't really written much about my goals when curi and I discussed that topic (2 sessions mostly I think). What I'd done was closer to writing my goals down as opposed to writing about them. The 'why I live' post is closer to talking about my goals, but there's a lot of implicit stuff. I think the implicit stuff (which relates to me personally) is fine because the post was meant to be more general than about my specific goals, more like framework/context stuff that informs what goals I think are good or not and why.

I was thinking about being unhappy with not writing, and about my goals and prioritisation etc. I had some thoughts like 'why be unhappy with it, you could work towards that now', and 'write about goals instead of just writing them down'. I started brainstorming and came up with the title and stuff later.

## misc

### emotions

I had some anxiety when I checked the next morning for posts on curi (because I hoped there'd be some feedback) - even before knowing whether any were in microblogging. That was unexpected and notable.

I wonder if there's some interaction going on between:

- my desire for a life with meaning / greatness

- fear of failure / mistakes (i'm not sure how much I feel this but it's a pretty common static meme)

- the idea of choosing to fail (deliberately not trying) instead of trying and failing

- my desire for lacking responsibility sometimes -- it's not persistent but I sometimes want it so much that I renege on commitments or withdraw and ignore msgs and things. That behaviour doesn't go entirely beyond reason, but it definitely does to some extent.

I was also surprised when I thought that the post might be dishonest/misleading. Part of me wasn't, but part of me was, because a lot of the stuff in the post (or stuff that's implied by or implicit in it) is stuff that I want to believe about myself.

#### insight?

If I believe things about myself that are better than the situation in reality actually is, does that inhibit error correction substantially? Can it hide blockers that would otherwise be more apparent and easier to fix?

I feel 'yes' is the answer for some things.

Obvs thinking e.g. I'm 50th percentile playing poker when I'm actually 35th percentile isn't going to matter much, esp compared to thinking I'm 99th percentile but actually 90th percentile (which would have potentially *big* consequences for variance, consistency, and volume of earnings).

In this case I think it might be an issue b/c thinking I make better life choices than I do (and do so more consistently) can mean I am overconfident, which could have consequences like starting big projects too early / without enough planning / etc.

Maybe, then, this 'insight' is just like another aspect of overreaching and Oism?

### 'negative' feedback?

I'd like to know if anyone thought it was low quality, overreaching, vapid, dishonest, etc.

I don't think FI ppl would have avoided telling me this in abstract, but I could see ppl not providing feedback if they thought like 'there are so many issues it'd be effort to know where to start' or the like.

I think the dynamics between curi and me might be different b/c of the tutoring sessions -- like anything that needs to be discussed could be done there. I don't know how/if this would affect something in particular, but it occurs to me now & might be relevant.

### writing

On the whole I didn't think the writing was too bad considering it was a draft. There are some things that I don't think are clear enough from what I wrote, e.g. intended audience. I wonder if I might write a less clear post if there are conflicting goals for the post. E.g. I wanted to write something for me (to spur myself on, as such), and I also wanted to write something that could become enduring (after more drafts) and would be useful or general in a philosophy-and-life type category. Maybe I'd do better by separating the two; writing at all helps me some, and I can do some of the more self-centric stuff in private journaling or in the brainstorming phase and copy those bits elsewhere s.t. they wouldn't be in the main post.

I also wonder if I'm overreaching at this stage trying to create anything that's enduring. I suspect there's not much I can do with a low enough ER s.t. it lasts decades.

Actually, there's lots I could do like that, just not stuff that I'm interested in doing. E.g. a guide on how & why to tie your shoelaces could be enduring on the scale of decades. I think I might do myself a service by lowering those standards for a while. I can always revisit them later, and I don't need to have them to actually produce enduring work (you can always go above your own standards).


Max at 7:56 PM on November 1, 2020 | #18556 | reply | quote

I'm thinking of unendorsing ~everything I've written

Unendorsing ~everything would basically mean that I say: from some date (eg 3rd Nov 2020) it's not safe to assume I endorse/believe/etc anything I said publicly before that date.

It'd be like a reset on things. Partially that's b/c there are a lot of unresolved things (e.g. the FI discussion on Flux from 2017) which I think are no longer worth resolving. There are also things I've said that I don't want to go back and try to change b/c that sounds like a lot of work. I don't have high enough confidence in what I've said to want to leave stuff up without knowing what it was.

I'd make a post on my blogs but wouldn't go to much effort outside that now. I'd point ppl to those posts when necessary and then address stuff on a case by case basis to update stuff I've said publicly.

I think this sounds like a fair thing for someone to do very infrequently, maybe only once.

I think this would be a good thing to do because I think my ideas and my self have changed substantially in the past few years and especially the past few months.

I also think that, from this point on, keeping things mostly up to date (or at least updatable) is something I could do. So doing a mass unendorsing would let me keep a higher quality library of criticism.

A mass unendorsing would also be a decent start to a library of criticism; it indicates a relevant discontinuity and means the stuff I write after that has special considerations the earlier stuff lacked.

If I intend to only do this once then I should treat it fairly seriously.

I'm looking for feedback.


Max at 7:34 AM on November 2, 2020 | #18560 | reply | quote

Reach + parameterisation of reach

I wrote this as part of a post on reach + IGCs. This was 95% of the post at the point I copied it out; it seems better as a stand alone post.

I think reach is parameterised over infinite domains. I explain what I mean by that below. I don't know if this is right, but **I would be surprised if it were wrong**, so I would like to know if anyone disagrees.

---

When an explanation has reach, it means the explanation is general over many problems or situations. The exact nature of this generality is particular to the explanation.

One example of reach is theories which are time-independent and/or space-independent; maybe a theory works everywhere and at all times (at the beginning of the universe, and now, and we expect it will in 100 billion years and onwards), like general relativity or quantum theory; or maybe a theory works regardless of the time of day ('if I dial the emergency number someone will pick up, but only if people commonly have and use phones', 'leaving food on the stove will heat it, then cook it, then burn it, and then set fire to it').

Reach can be mundane or fantastical. All explanations have some level of reach.

It's hard (or impossible) to compare explanations' reach without a common phenomenon to use as a basis. Does a theory of housing in post-GFC North America have more or less reach than a theory of sodium's role in the ionosphere? I don't know if that question has an answer. What about Newtonian gravity and general relativity? It's reasonable to say that GR has more reach because GR explains phenomena which Newtonian gravity does not, *and the reverse is not true*; GR is an explanatory superset.

Reach has a size, but it's hard to be exact besides 'zero' and 'infinite'. It doesn't really make sense for reach to ever be completely zero (b/c the explanation would not account for anything). If an explanation has little reach then it only accounts for very specific things (it's parochial). Explanations can also have unbounded reach. Sometimes it might seem like explanations have near-unbounded reach (e.g. Newtonian gravity seemed unbounded *except* for the orbit of Mercury). In reality, we can't reliably tell the difference between 'near-unbounded' and 'parochial' without a superior understanding of what's going on (which requires additional explanations).

Reach is parameterised over infinite domains. That's because those domains correspond to--at least--levels of emergence. Some domains are subsets/supersets of other domains, but they can be incomparable too. Example: natural selection has some domain like 'all life on earth with a genome' (it might be even more specific than this, though). Natural selection also has some domain in time and space; we expect natural selection will work in many other contexts too, like some alien life, provided it meets certain conditions. It's possible that alien life might not meet those conditions, like hyper-advanced bioengineered AGIs. It's not clear how natural selection would work there, and we can recognise that b/c we know something about the bounds of its reach. We can reason about what domains of reach an explanation has by exploring the explanation (thoughts, experiments, predictions, etc).

We don't care that explanations have zero reach in some domains. General Relativity doesn't really have anything to do with housing prices in the USA. So even though GR has some universal reach WRT time and space (AFAIK; galaxy rotation rates aside), it has 0 reach WRT housing prices. (It does have reach when it comes to houses themselves; you still experience gravity when building houses and when inside houses etc, and need to take it into account.)

If an explanation has reach over *some complete domain* we say it *has universality*. I don't know if the domain needs to be infinite, but it seems like many important universalities have an infinite domain. Some universalities are special and some seem like they're not special. Some important domains of universalities are: all matter at all points in space and time, all computable programs, all programs, all real numbers, all people, all alphabets, all ideas, etc.

curi once said:

> X is a universal Y if it can do any Z that any other Y can do.

I could say: This turing machine is a universal computer because it can compute any program that any other computer can compute. The universality is *all computable programs*.
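To make the "X is a universal Y over domain Z" pattern concrete, here's a toy sketch (the mini-language and names are made up for illustration, not from curi's formulation): `run` is a universal executor for the domain of programs writable in a tiny two-instruction language, because it can run *any* program in that domain.

```python
# Toy illustration of "X is a universal Y over domain Z":
# run() is universal over the domain of programs expressible
# in this two-instruction ('add'/'mul') language.

def run(program, x):
    """Execute a program (a list of ('add'|'mul', number) steps) on input x."""
    for op, arg in program:
        if op == 'add':
            x = x + arg
        elif op == 'mul':
            x = x * arg
        else:
            raise ValueError(f"unknown instruction: {op}")
    return x

double_then_inc = [('mul', 2), ('add', 1)]
# run() handles any program in this language, not just this one;
# e.g. run(double_then_inc, 5) gives 11.
```

The analogy to a universal Turing machine is that the domain there is all computable programs rather than this tiny language.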

Or: Paths Forward is a universal discussion methodology because it can lead to a successful conversation for all conversations that any other discussion methodology can lead to success in. (I don't know if this is actually true yet, but I suspect it is and would be surprised if it were false)

---

# short reflective post script

I *think* I have used terminology correctly but there are parts I'm not certain about. An example: When I talk about domains and universalities I'm reasonably confident the terms are close-to-right (or actually right / okay / clear), but wouldn't be surprised if there were minor issues. I would be surprised if there were major issues.


Max at 8:48 AM on November 2, 2020 | #18561 | reply | quote

> The universality is *all computable programs*

I should have said "the universality is over all computable programs". There might be some other similar typos (I didn't edit much)


Max at 8:55 AM on November 2, 2020 | #18562 | reply | quote

difference between actual and future people

I think there's a difference between people who actually exist and people who will exist.

You can harm both, and the rights and life of both matter. However, I don't think future people can be thought about as individuals. Like you can *collectively* harm future ppl, but you can't harm an *individual* future person.

This is relevant for stuff like abortion, AGI, thinking about one's choices and what to do in life.


Max at 5:49 PM on November 3, 2020 | #18567 | reply | quote

I posted this to #low-error-rate on discord: https://discordapp.com/channels/304082867384745994/667162736970432522/774921042820464681

---

One of the most important skills for life is good self-judgement. Having good self-judgement means that you're able to tell when you're prone to making mistakes (and what they are) or when it's safe to be confident in doing something with a low error rate.

Learning efficiently requires a cycle of: do it once, do it consistently, do it fast/cheap/autopilot. This becomes very important because learning is an incremental process, so your learning history affects your learning future.

If your learning cycle is incomplete then you won't create a solid foundation for future knowledge. You need a solid foundation because new knowledge/skills will compound errors in the foundation. Example: if you have bad fine motor skills then you will have a lower limit on how fast you can type; errors in precise finger movements (or precise-but-slow finger movements) disproportionately affect your typing speed (compared to precise-and-fast finger movements).

Being able to judge your own learning cycle requires good self-judgement. Without good self-judgement you can't learn quickly and efficiently. Without good self-judgement you will end up making more mistakes than you would otherwise. This error rate becomes a ceiling on your progress. Since error rates compound as you learn new things, it is possible to get stuck. At that point you stop being able to reliably make progress. To solve this you need to go back to earlier (more foundational) topics and complete the learning cycle.

If you plan to learn without outside assistance, you should do your best to ensure that your self-judgement is consistent enough and cheap enough for the topics you want to learn about.

Goal of the above post: I wanted to practice doing something that I could submit to #low-error-rate. I also wanted to write something which is useful for other FI ppl, and which helps me understand topics like learning and overreaching. I think I succeeded. However, this took ~50 minutes to write and edit, so I have not reliably completed step 3 of the relevant learning cycle(s).


Max at 1:00 AM on November 8, 2020 | #18579 | reply | quote

> Example: if you have bad fine motor skills then you will have a lower limit on how fast you can type; errors in precise finger movements (or precise-but-slow finger movements) disproportionately affect your typing speed (compared to precise-and-fast finger movements).

I found an error w/ the noun phrase "errors in precise finger movements (or precise-but-slow finger movements)". It's a noun phrase b/c it's the subject of the main verb 'affect' in that clause. It sounds like I'm saying: errors in [precise or precise-but-slow] finger movements. This is because I use two incomparable nouns (those being errors in finger movements and finger movements). This error also affected the sentence later on where I say "compared to precise-and-fast finger movements". This doesn't strictly make sense b/c the subject of the clause is errors. I expect that--given this particular error--most people would do background error correction to fix the sentence or would just gloss over it.

I would fix the clause so that the subject is [errors or successes when doing X]:

Example: if you have bad fine motor skills then you will have a lower limit on how fast you can type; errors in precise finger movements and successful precise-but-slow finger movements will disproportionately affect your typing speed compared to successful precise-and-fast finger movements.

Revisiting the sentence now, I would consider replacing 'finger movements' with 'key strokes' but I think that it's not a big deal and doesn't significantly impact the post.


Max at 1:41 AM on November 8, 2020 | #18582 | reply | quote

Some policies I am thinking of writing

- policy on social engagements

- policy on gifts and gift giving

- policy on birthdays, holidays, and celebratory events

These are all things on which I have unconventional views (some developed more recently than others). Writing said policies (and accompanying explanations) will help me make sure I'm clear about my ideas. Because those ideas are somewhat unconventional I will have errors that could be avoided if I used the corresponding traditional ideas instead. They're also low-stakes topics, which is good for practicing writing and writing policies. I should practice writing policies before trying to write important policies like a debate policy.


Max at 2:12 AM on November 8, 2020 | #18583 | reply | quote

a danger of unconventional ideas on academia

One of the dangers of having unconventional views on academia is that you can end up in a situation where you don't produce a track record of thinking / ideas / research efforts. Academia and the associated journal-article norms are the primary method ppl use to establish a track record of serious research and having/presenting pioneering ideas.

I think rejecting uni culture is one reason I have ended up writing so little up till now; I had issues with some aspects from the start but I got more critical over a few years until I dropped out. (Tho I've been tempted twice by offers of a shortcut to postgrad stuff: first a masters of geoscience and second a PhD in compsci focusing on blockchain stuff. There are circumstances where you can do postgrad stuff without an undergrad degree -- basically if someone will vouch for and supervise you. Skipping the BS was one reason I considered it more fully.)

I tried to do some writing after dropping out but never found it easy, and I wanted to talk about topics beyond my writing skill. I don't think doing the normal undergrad-to-phd-to-research thing would have fixed all those issues (I'm still critical of academia), but I would have ended up with a track record at least.

I think my lack of a track record has been significantly detrimental to me more than once.


Max at 5:56 AM on November 8, 2020 | #18584 | reply | quote

#18579

I've got some questions:

1. Is being able to tell when you're prone to making mistakes the same thing as having good self-judgment or is it a type of good self-judgment?

2. What do you mean by "Being able to judge your own learning cycle"? Judge what about your learning cycle?


Anne B at 5:45 PM on November 8, 2020 | #18587 | reply | quote

#18587

I'm a bit tired writing this, so I wouldn't be surprised if there were some small mistakes, but I spent some time going over it so I'm confident there are no big mistakes.

> 1. Is being able to tell when you're prone to making mistakes the same thing as having good self-judgment or is it a type of good self-judgment?

I think it's a sub-skill of good self-judgement, sort of like a component. There are some edge-case situations though, like when you're new to something. Those situations might not need the full 'good self-judgement' skill b/c there's like no part of it you should be confident in. But in general good self-judgement is more than just being able to tell when you're prone to making mistakes.

Another way to think of it is: it's a type of *self-judgement*. Having *good self-judgement* means being proficient at *a few key types of self-judgement*.

I think the next answer might help clarify things.

> 2. What do you mean by "Being able to judge your own learning cycle"? Judge what about your learning cycle?

I will answer this Q then reflect on it.

Answer:

To have an efficient learning cycle *without help* means you need to be able to make 2 types of decision well.

The first type is decisions about focus. "Which things should you focus on and what stage of the learning cycle should you focus on for each?"

The second type is decisions about when to change focus, for both topics and methods.

Particularly, you want to make good decisions/judgements about when you can transition from step 1 to step 2, and from step 2 to step 3, and when you're done. If you don't make good decisions then you either move on too soon and end up with compounding errors, or you move on too late and waste time (or maybe get bored with the topic in general). I think the first problem is much more common than the second.

Doing each learning step well means you'll probably use a different method for each step. So you have to choose to stop doing method 2 and start doing method 3 at some point, and if you make a better decision there then you'll have better results w/ learning.

Both of those decision types (what to focus on and when to move on) are dependent on self-judgement. Good self-judgement means you'll be able to make better decisions. There might be other contextual factors too, but good self-judgement is *always* a factor.

Being able to make those decisions well is what I mean by "being able to judge your own learning cycle".

Reflection:

I considered if "Being able to judge your own learning cycle" was vague when writing/editing the post. I chose to keep it like that b/c I thought it was clear enough and b/c writing less means both finishing sooner and fewer chances for mistakes to occur.

My answer to (2) is pretty long. In part that's b/c I wanted to be extra clear but it also shows how much I left unsaid.

I think it might have been a minor mistake to omit more details when I considered if the idea was vague. I don't think it was a big mistake b/c it was easy to fix.

That said, what are the expectations we should have around mistakes that arise due to audience mismatch? It's really hard to write for super broad audiences, so I should expect some background miscommunication with like ~most audiences, right?

I think I need to understand more about audiences, miscommunication, and stuff about expected (or acceptable) errors and their properties (like their magnitude and type).


Max at 11:45 PM on November 8, 2020 | #18591 | reply | quote

#18579

What question(s) do you have for us? Are you asking if we think this is low error rate? If so, what kinds of errors are you interested in knowing about?


Anne B at 7:48 AM on November 9, 2020 | #18593 | reply | quote

Brief thoughts on different types of errors (learning / overreaching)

(note: this is a draft I wrote yesterday. I think I was intending on adding more. That said, it's a complete idea as it is and relevant to my reply to #18593, so I decided to post it now.)

wrt learning and overreaching: sometimes we'd be surprised by some errors but not others. I'm not surprised by subtle grammar mistakes in stuff I write, but usually the topic isn't about grammar. So I can sometimes be confident of a low error rate in the content but not in the grammar. If there's a difference in my expectations on which errors would be surprising, I should say so and commit before errors are found.


Max at 3:47 PM on November 9, 2020 | #18596 | reply | quote

#18593

Note: I wrote this faster than I might otherwise b/c I'm short on time today.

> What question(s) do you have for us?

This is a good question. I don't have an answer ready. (BTW Anne, thanks for #18587)

Here are some questions for you and FI ppl generally I put together while writing the rest of this post:

- Do you think #18579 contains any significant errors? Does my self-judgement idea make sense and seem reasonable?

- Is the self-judgment idea important enough for ppl to read/know?

- Are there any new ideas in #18579 or is it all derivative of stuff curi has said? I'm not sure about this one (partially b/c I haven't read everything curi's written).

Lesser or more general qs:

- Does this overlap significantly with any of curi's previous posts on learning? I searched for "self-judgement" but didn't find much.

- Has curi written a post with the 3 steps of learning? I had them noted in one of my early tutorial notes and curi mentioned it in one of his newer podcasts (maybe the *sense of life* one). I didn't find anything when searching tho.

> Are you asking if we think this is low error rate?

I don't think I explicitly have asked that, but I am interested in criticisms / conjectures of errors.

I guess I sort of expect that if I claim something is #low-error-rate and someone on FI disagrees then they'd say so. That's not a safe assumption/expectation though. Also, just b/c ppl don't spot errors at the time doesn't mean there aren't errors.

> If so, what kinds of errors are you interested in knowing about?

#18596 has some related thoughts I had yesterday.

In general I'm interested in all errors, but I'm *particularly* interested in errors w/ content (like in the idea/explanation itself), and errors in clarity. That said, while I mentioned the goal of #18579 I didn't mention things like what learning I was focusing on particularly.

Like #18582 is still good to know (grammar issue / minor clarity error), but I'm not too worried about that error -- it's not a bottleneck.

I think we should (as like students of philosophy) be explicit about the things we're focusing on and stages that we're at (e.g. learning reports might be a good place for that). Then the errors we should be interested in are more obvious.

- Errors in the foundation (b/c they compound); if they exist it means we've moved on too soon from something, or forgotten to learn something, etc, so these are v. useful to know about.

- Errors in stage 1 that prevent us from doing something at all; these can be hard to spot ourselves.

- Errors in stage 2 (consistency); corrections here can be high-yield, like getting the right technique for doing something. If we had a bad technique we might get consistent but not get fast.

- Meta-errors: stuff like being bad at grammar/writing; these are generally inhibitive and have a lot of reach.

I'm not sure about errors in stage 3; like we should want to know about them, but those errors won't have as much reach, I think, and having gone through stages 1 and 2 means that we're in a pretty good position to spot those errors ourselves and probably have some ideas on how to solve them.


Max at 4:07 PM on November 9, 2020 | #18597 | reply | quote

betting strategy

I had this thought after the US election:

The payouts for bets on Trump/Biden winning varied throughout the election, and were >2 for both of them at some point. If you can place two bets (one on each candidate) where both payouts are >2 then you guarantee a profit (overheads aside). You should balance the stakes on each side too.

A payout being >2 means that you get more than $2 back for a bet of $1.

Events with *sufficient variance* (or uncertainty) sound like they should generally work with this strategy.
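A minimal sketch of the two-sided bet idea above. The odds and bankroll are hypothetical, and real bookmakers' fees and limits can erase the margin; the point is just the arithmetic of balancing the stakes.

```python
def arbitrage_stakes(odds_a, odds_b, bankroll):
    """Split a bankroll across two mutually exclusive outcomes.

    Uses decimal odds: a $1 bet returns $odds if it wins. If
    1/odds_a + 1/odds_b < 1 (e.g. both payouts > 2), every outcome
    returns more than the total amount staked.
    """
    margin = 1 / odds_a + 1 / odds_b
    if margin >= 1:
        return None  # no guaranteed profit at these odds
    # Stake each side in proportion to its implied probability so both
    # outcomes pay out the same total amount.
    stake_a = bankroll * (1 / odds_a) / margin
    stake_b = bankroll * (1 / odds_b) / margin
    payout = stake_a * odds_a  # equals stake_b * odds_b
    return stake_a, stake_b, payout - bankroll

# e.g. both sides paying 2.2 (>2) with a $100 bankroll: stake $50 on
# each side, and whichever candidate wins the payout is $110.
split = arbitrage_stakes(2.2, 2.2, 100.0)
```

Balancing the stakes by implied probability is what makes the profit outcome-independent; betting equal dollar amounts on unequal odds wouldn't guarantee it.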


Max at 6:06 PM on November 9, 2020 | #18598 | reply | quote

Okay, here are some comments about content and clarity in #18579. Keep in mind that I’m not skilled at writing essays and I don’t think I could write a good several-paragraph essay.

- It seems like the main idea of the essay is that good self-judgment is important for learning. But learning isn’t mentioned in the first paragraph. I think your first paragraph should have your main idea in it.

- I think a better main idea would be something like “being able to judge your error rate is important for learning”. The essay isn’t really about being able to judge other things about yourself, just about judging when you are likely to make errors.

> One of the most important skills for life is good self-judgement. Having good self-judgement means that you're able to tell when you're prone to making mistakes (and what they are) or when it's safe to be confident in doing something with a low error rate.

- I asked my first question in #18587 because in your second sentence, “means” can be read two different ways. The sentence could mean “having good self-judgment implies that you’re able to tell…” or “having good self-judgment is defined as being able to tell…”

- Re my second question in #18587, what do you think of writing something like “being able to judge where you are in a learning cycle requires good self-judgment“ instead of “Being able to judge your own learning cycle requires good self-judgement”? I think that’s what you’re getting at.

- I don’t think the example of fine motor skills and typing is a good one here. People don’t have to learn fine motor skills in the abstract in order to learn to type fast. People can learn the fine motor skill of fast typing directly.


Anne B at 6:59 AM on November 10, 2020 | #18602 | reply | quote

This seems interesting - epistemic use of category theory?

> Category theory offers a unifying framework for information modeling that can facilitate the translation of knowledge between disciplines.

> Category Theory for the Sciences is intended to create a bridge between the vast array of mathematical concepts used by mathematicians and the models and frameworks of such scientific disciplines as computation, neuroscience, and physics.

https://archive.org/details/cattheory


Max at 7:16 AM on November 10, 2020 | #18603 | reply | quote

*the choice*

I finished *The Choice* finally.

I liked it and recommend it. I found some bits less engaging, but new ideas are introduced all the way through. It's worth reading in full.

I think there are a few ideas in the book that are misconceptions which have since been improved upon. Like in Ch5 there's the idea that the belief in the inevitability of conflict prevents ppl from making progress. This sounds like it conflicts with BoI's 'problems are inevitable'. I think a better way to put Goldratt's idea is: the belief that the solutions to emergent conflicts are unattainable or nonexistent holds ppl back. These sorts of things are not a big deal tho. Goldratt's ideas are quite good and I didn't have much trouble aligning them with CF ideas when those sorts of things came up.


Max at 8:56 AM on November 10, 2020 | #18604 | reply | quote

#18602

> Okay, here are some comments about content and clarity in #18579. Keep in mind that I’m not skilled at writing essays and I don’t think I could write a good several-paragraph essay.

Thanks :). FYI I found I was able to improve relatively quickly with some practice (or at least that's how it feels). I'm not sure what you've tried or how important it is to you. I'm happy to help or give suggestions/feedback and things if you think that would be good.

> - It seems like the main idea of the essay is that good self-judgment is important for learning. But learning isn’t mentioned in the first paragraph. I think your first paragraph should have your main idea in it.

I was initially confused till I went back and checked. The first sentence is "One of the most important skills for life is good self-judgement." I agree that I don't explicitly mention the self-judgement <-> learning connection in the first paragraph, but I think I could modify the first line with the following to fix things: "... important skills for effective learning (and life in general) is good self-judgement."

>> One of the most important skills for life is good self-judgement. Having good self-judgement means that you're able to tell when you're prone to making mistakes (and what they are) or when it's safe to be confident in doing something with a low error rate.

> - I asked my first question in #18587 because in your second sentence, “means” can be read two different ways. The sentence could mean “having good self-judgment implies that you’re able to tell…” or “having good self-judgment is defined as being able to tell…”

Ahh, I think I understand, maybe. I read the sentence just now with two different groupings:

- having X means you're able to tell (when Y or when Z)

- having X means (you're able to tell when Y or when Z)

I'm not sure the 2nd way is grammatically valid b/c I have a "when" on both phrases either side of the "or".

I can sort of see the difference between implies and defined as, but not exactly sure I see it here. I think I might be able to avoid this sort of ambiguity anyway, generally.

other thoughts on this q:

FTR I meant: having X implies you can tell (when Y or when Z).

I don't know if I should use "and" here instead of "or". I used "or" b/c I think it goes with "when". If I used "and" instead, I would replace "when" with "the times when".

The "or" version means approximately "you can tell between these situations" and the "and" version means approximately "you can identify both of these situations". The difference is subtle, but maybe it's important here. The "or" version (as I've put it here) sort of sounds like only those two options are possible.

---

Comments/Qs I'll respond to later:

(I'm a little short on time but wanted to post something)

> - Re my second question in #18587 [...]

> - I think a better main idea would be something like “being able to judge your error rate is important for learning”. [...]

> - I don’t think the example of fine motor skills and typing is a good one here. [...]

Your qs and comments were helpful btw. I don't know when I would have noticed the life/learning thing in the first line for example.


Max at 9:28 AM on November 10, 2020 | #18605 | reply | quote

running communities based on ideas - note to self

> In general, if someone knows a mistake you're making, what are the mechanisms for telling you and having someone take responsibility for addressing the matter well and addressing followup points?

> Do you have public writing detailing your ideas which anyone is taking responsibility for the correctness of? People at Less Wrong often say "read the sequences" but none of them take responsibility for addressing issues with the sequences, including answering questions or publishing fixes if there are problems.

https://curi.us/2065-open-letter-to-machine-intelligence-research-institute

When you start or run a community based on ideas you need to take responsibility for issues with those ideas. You shouldn't hand over the reins to someone if they can't fulfill that role. For an enduring tradition to be created around ideas there needs to be a stalwart to take responsibility. Progress and persistence are hard without that. If you hand over to someone who falls short then the tradition can die, or at best will have a discontinuity (something you can't bank on).

Examples: Karl Popper didn't hand over well (AFAIK) and there was a discontinuity ending with David Deutsch; Ayn Rand handed over to (I think) Binswanger and Peikoff and they didn't rise to a necessary standard. Both Popper and Rand did a lot while they were alive; enough to give their ideas a good chance of surviving. Maybe they could have done better? It's hard to say b/c where were the ppl who understood greatness enough to support Rand, work with her, carry on the tradition, and ultimately grow to exceed her? It seems selling millions of books didn't really help with *that* too much.

You should make sure that the ideas you promote are written down and endure. You can't rely on suitable ppl finding you during your lifetime.


Max at 10:32 AM on November 10, 2020 | #18606 | reply | quote

What does it mean to choose to be a heroic achiever?

Philosophy who needs it. CH 2 or 3 (CH 3 in audible book). 26:50 / 35:30

I'm open to any discussions or suggestions on this topic.


Max at 2:06 PM on November 10, 2020 | #18607 | reply | quote

#18607 one part is not being second handed


curi at 2:24 PM on November 10, 2020 | #18608 | reply | quote

#18605

>>> One of the most important skills for life is good self-judgement. Having good self-judgement means that you're able to tell when you're prone to making mistakes (and what they are) or when it's safe to be confident in doing something with a low error rate.

>> - I asked my first question in #18587 because in your second sentence, “means” can be read two different ways. The sentence could mean “having good self-judgment implies that you’re able to tell…” or “having good self-judgment is defined as being able to tell…”

> Ahh, I think I understand, maybe. I read the sentence just now with two different groupings:

> - having X means you're able to tell (when Y or when Z)

> - having X means (you're able to tell when Y or when Z)

> I'm not sure the 2nd way is grammatically valid b/c I have a "when" on both phrases either side of the "or".

I don't see the difference between these two groupings. I'm not saying there isn't one, just that I don't get it. That's not what I was getting at.

> I can sort of see the difference between implies and defined as, but not exactly sure I see it here. I think I might be able to avoid this sort of ambiguity anyway, generally.

> other thoughts on this q:

> FTR I meant: having X implies you can tell (when Y or when Z).

A way to reword it to make the implication clear is "If you have good self-judgment, then you'll be able to tell when..."

> I don't know if I should use "and" here instead of "or". I used "or" b/c I think it goes with "when". If I used "and" instead, I would replace "when" with "the times when".

I did not see this point at first, but now I do.

> The "or" version means approximately "you can tell between these situations" and the "and" version means approximately "you can identify both of these situations". The difference is subtle, but maybe it's important here. The "or" version (as I've put it here) sort of sounds like only those two options are possible.

I've learned from FI to see subtle differences in wording as important. Which one did you mean to say? Do you want to say that only those two options are possible?

A possible "or" rewording that's wordy but more clear: "... you're able to tell if you're in a situation where you're prone to making mistakes or in one where you can be confident that you'll have a low error rate."

A possible "and" rewording: "... you can identify both where you're prone to making mistakes and where you're unlikely to make mistakes."


Anne B at 3:12 AM on November 11, 2020 | #18613 | reply | quote

>> I don't know if I should use "and" here instead of "or". I used "or" b/c I think it goes with "when". If I used "and" instead, I would replace "when" with "the times when".

> I did not see this point at first, but now I do.

>> The "or" version means approximately "you can tell between these situations" and the "and" version means approximately "you can identify both of these situations". The difference is subtle, but maybe it's important here. The "or" version (as I've put it here) sort of sounds like only those two options are possible.

> I've learned from FI to see subtle differences in wording as important. Which one did you mean to say? Do you want to say that only those two options are possible?

I'm not sure how much difference there practically is between the two in this case -- I don't think it's particularly significant either way. That said:

I wanted to say that those states are mutually exclusive and cover the full range of situations. Good self-judgement can reliably tell which state you're in for a given situation.

So I meant the *or* version. You're either prone to making mistakes, or you're not and can be confident.

And yes, I did want to say those two options are the only options possible.


Max at 5:02 AM on November 13, 2020 | #18626 | reply | quote

Choices Matter (reflection)

I've thought for a long time that choices are epistemically significant; even before I had a good idea of what 'epistemically' means. One of the first ways I remember thinking about this was wrt determinism and free-will. My idea was roughly: if a person *chooses* to believe in (and thus have) free-will, then they do have it, and if they choose not to, then they don't. I don't think that's strictly correct, but I do think there's an essence of truth insofar as the belief that your problems are soluble is required to seek solutions. If you don't seek solutions to your problems then your life will be ruled by static-memes and other things. Those ideas take away your control and autonomy over your life (at least in particular partial ways).

I think Rand agrees that choices are epistemically significant in some way. From *Philosophy: Who Needs It* (p 45, kindle edition):

> A man does not have to be a worthless scoundrel, but so long as he chooses to be, he *is* a worthless scoundrel and must be treated accordingly; to treat him otherwise is to contradict a *fact*. A man does not have to be a heroic achiever; but so long as he chooses to be, he *is* a heroic achiever and must be treated accordingly; to treat him otherwise is to contradict a *fact*.

I'm pleased that there seems to be some convergence between what Rand has said and my draft 'Why I Live' post (#18534), esp considering I wrote my draft before starting *Philosophy*. I think that's probably due largely to learning from curi and consuming his content. (e.g. repeatedly returning to think on the topic 'helping the best ppl or helping the masses'.)

Note: I'm unsure about my honesty with this next paragraph. I didn't want to cut it, though, in case anyone has some criticisms of it, or has suggestions of things to consider when one is self-doubtful in this way. I'm using a blockquote to signal that it's different from the rest of the post.

> I've repeatedly thought about whether and how choices matter, and one reason for that is my enduring dissatisfaction with how I've been treated (particularly wrt academics or 'intelligence'). I've largely been treated well, and often preferentially, going back almost as far as I can remember. My current explanation for that dissatisfaction is that I think my success has had more to do with my choices than innate ability. I don't know how early that started, though. I have an example about explicitly choosing to change my attitude and approach to an aspect of life from when I was 13, so I think I must have had some important ideas before that.

I still think choices matter, and also that the choice to believe *choices matter* is epistemically significant.

There's a deluge of bad ideas waiting to flood one's mind if one doesn't take one's choices seriously.


Max at 5:57 AM on November 13, 2020 | #18627 | reply | quote

convergence in some of Rand's ideas and rational/static memes

> Kant originated the technique required to sell irrational notions to the men of a skeptical, cynical age who have formally rejected mysticism without grasping the rudiments of rationality. The technique is as follows: if you want to propagate an outrageously evil idea (based on traditionally accepted doctrines), your conclusion must be brazenly clear, but your proof unintelligible. **Your proof must be so tangled a mess that it will paralyze a reader’s critical faculty**—a mess of evasions, equivocations, …

Philosophy (p. 140). Penguin Publishing Group. Kindle Edition. (Emphasis mine)

This converges w/ the definition of anti-rational/static memes:

> Static (aka anti-rational) memes *disable the holder's creativity* to prevent criticism of themselves. They are not adapted to be useful, but block effective thinking about that.

https://curi.us/1824-static-memes-and-irrationality


Max at 6:20 AM on November 13, 2020 | #18628 | reply | quote

Can collaborative writing be used as a good learning tool and error correction process?

I was reading the FI thread "Reading Until the First Error" and started to consider writing a post on the topic. Then I wondered what would happen if a few FI ppl chose a common topic and wrote collaboratively on that topic. Would a better post be produced than if any one of those ppl attempted it individually?

It's unlikely the final post would be near identical to any individual author's attempt unless that author were a lot more knowledgeable on the topic.

If none of the authors is an outlier like that, and if the authors have paths forward, then the collaborative post should be better than any individual attempt. Also, the process of resolving disagreements (done to completion) should mean the authors sort of 'sync up' all their relevant knowledge, which would make writing it a good learning exercise, too.

I'm not sure how this would work in practice, but I'd be interested to find out if anyone else is interested in doing something like this (I am).

There are some possible issues, like if there's a significant mismatch between the authors that could mean the process is inefficient. Or if the topic is too advanced the process could fail but hide sources of error. Topic choice is pretty important, then.


Max at 10:19 AM on November 13, 2020 | #18630 | reply | quote

I am interested in trying this.

It sounds difficult. The co-authors would need to come to agreement on topic, on what to say about the topic, and on the writing. I like the idea of talking with other people about exactly what to say and how to say it.


Anne B at 8:45 AM on November 14, 2020 | #18656 | reply | quote

Do you know of examples of competition between static memes?

I'm curious about any examples of competition between static memes, particularly b/c I want to know how they interact.

curi:

> Static (aka anti-rational) memes *disable the holder's creativity* to prevent criticism of themselves. They are *not adapted to be useful*, but block effective thinking about that. Their focus is on making the host unable to reject the meme.

curi explicitly qualifies the criticism that's prevented with "of themselves", which is pretty important for static memes if they're competing. It's an advantage if a static meme can avoid preventing (or even encourage) criticism of a competing static meme without compromising the suppression of criticism aimed at it.

I think that's an easy-to-miss point about static memes and maybe a good reason to use the term `static meme` over `anti-rational meme`.


Max at 11:10 PM on November 14, 2020 | #18663 | reply | quote

Confession as a static meme

I think the idea of something being confessional might be a static meme. I've never done catholic-style confession but I have had some experiences I'd describe as confessional. I think it's a way to avoid completely dealing with problems. Maybe it's not a full static meme in its own right (static companion maybe?) but I'd be surprised if it were unrelated.

It's different to honesty, too. You can have an experience that's both confessional and dishonest.

I've thought about this a bit b/c I think I went through a pseudo-confessional thing early on in tutoring. More generally, I think it might be easy for ppl to mistake some of the Oist mindset/integrity/secondhandedness stuff in a way that produces ~confessional behaviour/experiences.


Max at 11:33 PM on November 14, 2020 | #18664 | reply | quote

Hooking - Learning Technique

Hooking is a useful technique for learning.

The name is taken from programming: a "hook" is an entry point where new functionality can be attached. That functionality is added after the original software is written (usually by a different programmer).

Hooks are common in programming. In user interface frameworks like React or Vue, programmers can hook into lifecycle events. Those hooks let the programmer add code that runs before or after meaningful things happen, like before the page is loaded, or after all the assets (like images) have been displayed on the screen. When using Git, hooks like "prepare-commit-msg" and "post-update" let programmers run scripts before and after some of the steps in Git's procedures.

In programming, the hooks that are chosen are at important breakpoints. Roughly, they are the last and first moments around a significant event. There are guarantees about things that have happened and things that will (or might) happen -- something possible only because significant breakpoints were chosen.
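The pattern above can be sketched in a few lines. This is a toy illustration with names I made up (not any particular framework's API): a process exposes named breakpoints, and new behaviour can be registered there later without editing the original code.

```python
from collections import defaultdict

_hooks = defaultdict(list)  # event name -> list of callbacks

def register(event, callback):
    """Attach new functionality at a named breakpoint."""
    _hooks[event].append(callback)

def run_hooks(event, **context):
    """Run everything registered at this breakpoint, in order."""
    for callback in _hooks[event]:
        callback(**context)

def commit(message):
    # Loosely analogous to Git's prepare-commit-msg / post-commit hooks:
    # these run at the last moment before and the first moment after the
    # significant event, which is what makes them useful breakpoints.
    run_hooks("pre-commit", message=message)
    result = f"committed: {message}"
    run_hooks("post-commit", message=message)
    return result

# Added after commit() was written, by hooking rather than editing it:
log = []
register("pre-commit", lambda message: log.append(f"checking '{message}'"))
```

The original `commit` function never changes; the hook registry is the fixed "relationship and entry point" and the registered callbacks are the functionality added later.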

When learning, we can do something similar, where we can introduce ideas based on noticing some event. Adding hooks to certain actions, thoughts, etc allow us to introduce ideas at that specific point, without the overhead of trying to keep the idea in mind.

One example is forgetting to use the word 'that'. Maybe you notice [that] you sometimes omit 'that' and you think that your writing is less clear because of it. If you choose a breakpoint like 'when proofreading I needed to reread a sentence', you can hook the idea 'check for missing implied words'. You're not constantly looking for implied words, but you are ready to add that behaviour when you hit that breakpoint.

The word 'breakpoint' has a meaning in both programming and CR, and I want to clarify that I'm using it in the CR sense. It aligns fairly intuitively with the programming sense, but with programming you act on a breakpoint differently (e.g. debugging via stepping through code and looking at memory). That idea is not specific enough.

In CR, a breakpoint is a boundary on a spectrum (or pseudo-spectrum) which allows you to make yes/no judgements about IGCs and to effectively deal with continuous/unbounded data.

With hooking, the spectrum* that these breakpoints segment* is your thoughts and actions; the idea is the relevant methods of feedback and error correction; and the goal is to avoid the relevant error and to continue avoiding it.

*: 'Spectrum' and 'segment' are (I think) specific to 1d shapes. You'd probably need more than one dimension to represent all possible thoughts and actions. However, that distinction isn't really relevant here; the principle is the same.

It's usually easy for programmers to make good guesses about which hooks are useful (and thus good guesses for which breakpoints are useful). The important thing is that the programmer has good explanations for why hooking here in this context is worthwhile. Sometimes there are explanations with reach, like why hooks between initialization and runtime logic are useful.

It's harder to tell where hooks should go (where to put breakpoints) when it comes to learning. Making good judgements about where hooks go requires that you have a good explanation. That, in turn, requires you understand how to detect, correct, and ultimately avoid those errors. This means hooking is hard to use well when you're just starting to learn a topic: to judge that, you need enough knowledge about both learning generally and the topic that you're studying. So hooking is better suited to stage 2 learning (consistency).

Making a bad judgement about where hooks should go can have a high cost. One reason to use hooks is that you want to conditionally introduce behaviour with a high overhead. That behaviour doesn't need to be high overhead on its own; maybe it's only high overhead when you have 3 other ideas in mind that you're focusing on. If you're triggering a badly placed hook (at a bad breakpoint) then you will often incur the overhead of considering error correction that doesn't actually help. If you don't quickly realise that the hook is badly placed then the cumulative cost can hold your learning back.


Max at 5:38 AM on November 15, 2020 | #18666 | reply | quote

#18656 @Anne, I'm thinking about this (like how to do it, how to choose a topic, where to discuss stuff, etc).

I was a bit stuck on thinking about topics. We'd need to have a topic where we didn't have a serious knowledge gap -- otherwise it's more like one person writing and one person editing. That sounds like it's (generally) a hard thing to find. (Luckily?) I think we both have a similar level of knowledge about collaborative writing.


Max at 5:53 AM on November 15, 2020 | #18667 | reply | quote

considering a project: a series of posts on learning.

I'm considering a new project: writing a series of blog posts about learning. One goal is that I could, if I wanted, turn that series into a book later (if it was good enough, substantial enough, etc). Making a profit (e.g. by selling a book-book) is not a goal.


Max at 5:58 AM on November 15, 2020 | #18668 | reply | quote

#18656

If you didn't see https://curi.us/2390#8 btw, I mention the idea of buying a topic thread on curi.us to organise discussion about this.

> I am interested in trying this.

Cool. I don't really mind how long we take as long as we get to some form of completion. ~failure is okay if we do a proper postmortem.

> It sounds difficult.

Somewhat yeah. I can think of some issues we might run in to. Even things like synchronisation (like one person being a bit more inspired and doing a lot) could become an issue. That's probably not what we should worry about tho.

> The co-authors would need to come to agreement on topic, on what to say about the topic, and on the writing.

Sounds like pre-plan, plan, and execution.

I don't mind the idea of doing a post on collaborative writing. Maybe that's not a good idea for a first go at it though. Do you have any topics in mind?

I've been quite interested in learning lately, and I'm planning to write more about that. It's hard to tell if we'd have similar levels of knowledge on relevant topics there, though.

I'm not actually sure that we need similar levels of knowledge. One reason I thought that is a thought experiment I considered, e.g.: "What would the end result be like if curi and I tried to write a post? Would it be different to a post that curi would write alone? Maybe, but would the differences be significant? Probably not."

But there's another case, too, which is where two ppl each have more knowledge about different, particular (sub-)topics that are relevant for the post. Like I think you've read a lot more than I have about TCS, and I have done a lot more programming. What would it be like if we tried to write a post about teaching your kid to program? Would we even be able to write a good article? IDK yet.

> I like the idea of talking with other people about exactly what to say and how to say it.

Me too. I am going to try to pay attention to my mindset in those discussions; I'd like to know how it differs from my typical mindset when writing a post or having a discussion.


Anonymous at 6:44 PM on November 15, 2020 | #18679 | reply | quote

#18679 Note: forgot to add my name to this post.

My normal procedure for posting here, based on some of the things Alisa wrote on this topic, is: fill in topic if relevant, fill in name, start writing, if the post gets too long then copy to a text editor and copy back when done.

The problem with #18679 was that I didn't do my normal procedure. I clicked 'reply' on #18656 first, then decided I wanted to quote instead so clicked quote. After that I just started writing. I ended up copying that to a text editor and copying back once I was done. Then I clicked 'post message' like my normal procedure.

If I introduced a 'check fields' step before submitting I could have avoided this mistake.


Max at 6:47 PM on November 15, 2020 | #18680 | reply | quote

I was thinking about my "Why I Live" draft and other titles that would fit the content better, without dishonesty about the mismatch between how I want to live and how I actually live.

One title I considered was "Why Live?" and noted that my answer to that q would be very different to the content in "Why I Live". But I think questions like "why live?" are ones that a philosopher should have a good, *considered* answer to. (Another question a philosopher should have an answer to: what are the questions a philosopher should have an answer to?)

This reminded me of https://direct.curi.us/2093-a-discussion-of-steven-pinkers-enlightenment-now-the-case-for-reason-science-humanism-and-progress

I didn't realise the relevant section in *Enlightenment Now* is v. similar until checking just now.

The following quotes are in the PDF attached to the linked blog post, tho I am copying them here too.

The extract from *enlightenment now*:

> But the most arresting question I have ever fielded followed a talk in which I explained the commonplace among scientists that mental life consists of patterns of activity in the tissues of the brain. A student in the audience raised her hand and asked me:

> “Why should I live?”

> The student’s ingenuous tone made it clear that she was neither suicidal nor sarcastic but genuinely curious about how to find meaning and purpose if traditional religious beliefs about an immortal soul are undermined by our best science. My policy is that there is no such thing as a stupid question, and to the surprise of the student, the audience, and most of all myself, I mustered a reasonably creditable answer. What I recall saying—embellished, to be sure, by the distortions of memory and *l’esprit de l’escalier*, the wit of the staircase—went something like this:

>> In the very act of asking that question, you are seeking *reasons* for your convictions, and so you are committed to reason as the means to discover and justify what is important to you. And there are so many reasons to live!

>> As a sentient being, you have the potential to *flourish*. You can refine your faculty of reason itself by learning and debating. You can seek explanations of the natural world through science, and insight into the human condition through the arts and humanities. You can make the most of your capacity for pleasure and satisfaction, which allowed your ancestors to thrive and thereby allowed you to exist. You can appreciate the beauty and richness of the natural and cultural world. As the heir to billions of years of life perpetuating itself, you can perpetuate life in turn. You have been endowed with a sense of *sympathy*—the ability to like, love, respect, help, and show kindness—and you can enjoy the gift of mutual benevolence with friends, family, and colleagues.

>> And because reason tells you that none of this is particular to *you*, you have the responsibility to provide to others what you expect for yourself. You can foster the welfare of other sentient beings by enhancing life, health, knowledge, freedom, abundance, safety, beauty, and peace. History shows that when we sympathize with others and apply our ingenuity to improving the human condition, we can make progress in doing so, and you can help to continue that progress.

curi commented:

> Pinker gives AD HOC answers to the most basic moral philosophy questions, b/c he is no expert on the matter, but a public faker. he admits this to open his book.

I was part of the conversation that comment is from back in 2018, and I'm surprised by how little I remembered about Pinker's writing. It is *full* of social dynamics. (Under 'social dynamics' I'm counting the dishonesty, fanciness, impressive words, concept-dropping (staircase wit in French), kantian flavour, etc.)

Aside: studying curi's *Grammar and Analyzing Text* course on gumroad + analyzing lies + puppet strings content is how I learned to see this stuff; covered in & around tutoring max #28 particularly.

I did remember that curi pointed out it was an *ad hoc* answer, though.

I don't want to be like Pinker. His answer is fancy, impressi-confusing*, and not considered. I want my answer to be "bold and simple" and something I understand *really well*. I don't need to understand the topic really well before I start, I just need to keep things in a draft state and accept criticism till I get there.

*: IDK if 'impressi-confusing' is a good word or not, but no word comes to mind (besides 'kantian' maybe) for writing that *projects* that it's well thought out but is actually hard to parse and understand properly. 'Academese' is similar but a bit different.


Max at 7:28 PM on November 15, 2020 | #18683 | reply | quote

> I didn't realise the relevant section in *Enlightenment Now* is v. similar until checking just now.

I misspoke here. I meant: I didn't realise the relevant question from this section ("why should I live?") is v. similar to the title I was thinking about ("why live?").

I was focusing a bit on quoting text from curi's pdf and pinker's ebook at that time, so I think I just changed context too much while writing and dropped some words. I sometimes switch context like that in the middle of a sentence; that will definitely increase my error rate unless I go back and do editing later (which is higher effort than doing it well the first time for simple stuff like this). I could either finish the sentence then go do the thing (like copying quotes) or when I return to a half-done sentence I could re-write the entire sentence. I think the second option might be a bit better for the end result. That said I should remember to think about the overhead of switching like that sometime and check it's not like much higher than I estimate.


Max at 7:42 PM on November 15, 2020 | #18684 | reply | quote

collaborative writing project

#18679

I think it would be fine to do it on your site if you are willing. Or we could do it by email or Discord or something and then post the whole thing somewhere. I don’t want to pay to put it on curi’s blog.

I don’t have topics in mind. I suggest, since this is an experiment in collaborative writing, that we pick a topic that’s easy and not try to say too much about it this time. Then we could focus better on the writing and the collaboration.

I too am interested in learning. I’m also interested in your other recent topic of why to live. So maybe a smaller topic in one of those areas would work.

About TCS, most of the reading I did about it was on the TCS list, which was of variable quality, and it was around 20 years ago and I don’t remember it much. My memory is that it was mostly about everyday parenting issues and not about learning.

Your original post about this (#18630) said “a few FI people”. Is anyone else interested?


Anne B at 5:29 AM on November 16, 2020 | #18688 | reply | quote

#18688

> I don’t want to pay to put it on curi’s blog.

I'm curious about this. Why do you say that?

I wasn't expecting to share the cost btw. I'm not sure if that was clear or not. If you also didn't think we were sharing the cost, though, then wouldn't it make sense to think I would pay for the whole thing? Qs that come to mind: do you disagree with *me* paying for it? or maybe with having the discussion in a venue that cost money? would it make a difference if curi created a suitable topic without being paid?

What do you think?

> we pick a topic that’s easy and not try to say too much about it this time.

I think this is a good idea; I was ready to suggest topics that I would consider writing about alone. The problem with that is those topics are *already* near the edge of my limit. Since I don't have much practice doing collaborative stuff, the only safe thing to presume is that a collaborative effort is also near the edge of my limit. Doing a complex topic, then, means I/we would risk overreaching or at least making EC harder.

When I read this idea earlier today I immediately thought about my writing practice during the tutorials. At the time I was a bit reluctant to write about shoes and swimming and things, but it was really valuable b/c it meant a bunch of issues got teased out in a manageable way.

There are a ~dozen leftover writing topics I didn't do in some of my tutorial notes. Do any of these seem like good topics to you or give you ideas for topics? They're at the bottom of: https://xertrov.github.io/fi/notes/2020-08-12/#previous-possible-exercises---not-done-was-previous-homework

> Your original post about this (#18630) said “a few FI people”. Is anyone else interested?

I'm not against more ppl getting involved if they'd like to (esp if it turns out to be valuable), tho I do think it'll be easier with fewer people to start with.


Max at 8:55 PM on November 16, 2020 | #18695 | reply | quote

collaborative writing project

#18695

It's fine if you want to pay to put the collaborate writing discussion somewhere. I just don't want to pay anything because there are free places we could put it that seem fine to me.

Some of the topics you list would require substantial research for me (maybe not for you) and some wouldn't. All would require some thought about what kinds of things to include in the writing.

I think we should get our working space set up first and then start talking about topics and how we'll go about collaborating.

I've been checking this thread once a day. I'll start checking it more often.


Anne B at 3:23 PM on November 17, 2020 | #18703 | reply | quote

re: collaborative writing project

#18703

> I just don't want to pay anything because there are free places we could put it that seem fine to me.

Oh yeah ofc.

> I think it would be fine to do it on your site if you are willing

I don't mind, though I'm developing a new site ATM that's more like curi.us in terms of functions/features. If we use my site I'd like to avoid doing too much before I swap over (which helps avoid migration concerns). The new site is not a static site like my current sites are; having a static site makes doing comments more difficult.

> I've been checking this thread once a day. I'll start checking it more often.

Cool. Keep in mind our TZ differences (i'm +11 atm; otherwise +10); I think we're about 8 hours apart. I'll also set the title as I did here for related comments. I'm busy the next 24 hrs or so.


Max at 8:36 PM on November 17, 2020 | #18709 | reply | quote

I asked patio11 a question (very soon after he'd posted the tweet) and he replied: https://twitter.com/XertroV/status/1328963125262102531

patio11 said:

> [... snip first few tweets in thread ...]

> As somebody who routinely has meetings at 7 AM minutes after waking due to the unending tyranny that is time zones, I would toggle a "Edit my face to look healthy and rested" setting in Zoom in a hot second, if for no other reason than to have less inquiries about health status.

max said:

> Why don't you turn your webcam off instead?

patio11 said:

> Tough for that Schelling point (everybody on camera or nobody on camera) and important, for interpersonal and organizational reasons, that I be seen (literally and figuratively) as a full participant in meetings affecting me/my projects/etc.

I don't buy the Schelling point thing but mb that's just a culture/perspective difference.

max said:

> Makes sense. Particularly for interpersonal/org stuff and senior/leadership/etc roles.

> I think *most* ppl should just turn off their webcams, though (and that the cultural *expectations* around keeping webcams on are bad).

(thread ends there)


Max at 6:26 AM on November 18, 2020 | #18713 | reply | quote

re: collaborative writing project

#18709

@Anne, I thought we might be getting stuck and listing current potential problems seems like a good way to avoid that (or start at least).

- A place for a dedicated discussion. I sort of expect it to be high volume. Do you also expect that?

- Topic choice -- there's plenty of stuff that comes to mind about what *I* personally want to write about, but I get stuck suggesting something. Maybe we could each brainstorm some topics that are simple that we don't know too much about. Then we could swap lists/trees and pick ones that sounded good from each other's list? That should work as long as we end up with 1 or more topics.

- We might have an issue with latency and work volume? Like if we have trouble being synchronous (there aren't many good overlapping times I suspect) then slow / low volume messages means writing will take longer. Smaller and easier topics will help get to a result faster and move on to a new topic (repeating the cycle as many times as we like).

- Maybe the potential size/difficulty is a bit intimidating? I want to do it because I think I can learn something about learning and/or discussing. I also want to make progress in both those things regardless, so I'm slightly worried that maybe it's inefficient. An easier topic will help there. Also, nothing bad will happen if we pick something too easy.


Max at 7:45 AM on November 18, 2020 | #18714 | reply | quote

Max at 8:04 AM on November 18, 2020 | #18715 | reply | quote

In highlights 4, curi talks a bit about learning philosophy and helping the world (particularly w/ philosophy). At 11:15 he says (my words) that mistakes in philosophy impact your philosophy progress a lot and mean your efforts aren't very productive -- I think there's an implication that this is not the case for most/all other things (which makes sense to me).

I think these 2 reasons contribute:

- philosophy is not very mechanical. it's not like getting better at replicating something straightforward like practicing speedrunning tech.

- philosophy has like high inertia or poor/slow feedback. stuff like speedrunning / maths / coding all have very fast feedback cycles: once you 'finish' you learn whether you were successful or not fairly quickly. philosophy is like the other end of the spectrum, though, and errors can go a really long time without being noticed.

#learning


Max at 8:36 AM on November 18, 2020 | #18716 | reply | quote

Max at 8:36 AM on November 18, 2020 | #18717 | reply | quote

postmortem on onerednail.com

I've started writing a postmortem on my onerednail.com thing.

The problem seems fairly simple now:

on the site I say something like: ideas matter, and that wearing red nail polish on a particular finger is the symbol I'd chosen for that. particularly I say:

> To be a *responsible* thinker requires accepting this, because without doing so you would deny yourself the most powerful method of *error correction* we have. We must *live the consequences* of our ideas and morality, strive for their betterment, and understand the consequences of the alternative.

the problem is that this does not line up with my behaviour. if I value philosophy, why have I not been writing and learning and doing any of it? I *thought* I was, but it was superficial. what I *should* be doing (living the consequences of my ideas) is dedicating time and effort to actually doing philosophy.

> The red fingernail is -- for me -- a dedication to those ideas and values.

Not anymore. Now it's more of a reminder about *faking* that dedication.

> It is a declaration of responsibility, and a desire to accept it.

It was an *abdication* of responsibility, and a desire to *believe* I accepted it, even if that was because I was *fooling myself*.

(Minor grammar mistake: I say the red nail is a desire, where I should have said "a symbol of my desire" or something like that.)

*Was it bad to do it though?*

First, I don't think it was *wrong* to do. Like I didn't think it went against any of my principles, it was at least the *claim* that philosophy was important - which is good - and it didn't hurt anyone.

The idea has serious problems, so it's bad in that sense.

However, it did succeed at some things. This one particularly:

> It is a reminder of the importance of philosophy and epistemology in daily life, [...]

It did do this. I paused for thought and considered things more deeply than I would have otherwise. It was a direct part of some significant events b/c *I was bothered when I thought my actions didn't line up with the symbol.* There are significant thoughts I wouldn't have had, and actions I wouldn't have taken, if I had decided not to do it in the first place.

For that reason *I don't regret the mistake, and I would be happy to make the same mistake if I didn't know better.*

----

This was meant to be a gist-icle (short/summary-ish) but it's longer than I anticipated. I'm going to post to FI about it b/c I think it's important enough. I'm not sure if I'll think of more to add, though.


Max at 8:55 AM on November 18, 2020 | #18718 | reply | quote

Rand and the gold std?

In *Philosophy* Rand says:

> Money is the tool of men who have reached a high level of productivity and a long-range control over their lives. Money is not merely a tool of exchange: much more importantly, it is a tool of saving, which permits delayed consumption and buys time for future production. To fulfill this requirement, money has to be some material commodity which is imperishable, rare, homogeneous, easily stored, not subject to wide fluctuations of value, and always in demand among those you trade with. This leads you to the decision to use gold as money. Gold money is a tangible value in itself and a token of wealth actually produced. When you accept a gold coin in payment for your goods, you actually deliver the goods to the buyer; the transaction is as safe as simple barter. When you store your savings in the form of gold coins, they represent the goods which you have actually produced and which have gone to buy time for other producers, who will keep the productive process going, so that you’ll be able to trade your coins for goods any time you wish.

Philosophy (pp. 153-154). Penguin Publishing Group. Kindle Edition.

I think she's mistaken about using gold as money in some way. Particularly:

> Gold money is [...] a token of wealth actually produced.

I disagree. Its production and value aren't connected like I think she's implying they are / should be. Gold could do as good a job at being money regardless of whether it's dug out of the ground or given to us by aliens (provided it gets distributed at the same rate). Maybe I misunderstand her?

Anyway I stopped at this point b/c I wanted to write more thoughts, but I think it's better to have lower standards for this sort of 'getting stuck' and just post something basic. Noting it publicly is enough to get me unstuck (keep reading) and I can figure out this misunderstanding in parallel.


Max at 10:40 AM on November 18, 2020 | #18719 | reply | quote

She also says

> To fulfill this requirement, money has to be some material commodity which is ..., rare, ...

I think "rare" is the wrong word. "Scarce" is better. I don't think this represents a major error on her (or my) part tho.


Max at 10:43 AM on November 18, 2020 | #18720 | reply | quote

idea for learning post: Gem of Seeing

Gem of Seeing is a D&D item that gives the user Truesight. That lets the character see things like secret doors, spirits, magical items, etc. The point is it lets you *see things that are really there, but you otherwise can't see*.

hypothetical:

* say you're making major mistakes about learning

* particularly that you don't see lots of errors you're making -- they're really happening but you can't see them.

* what would life be like with a gem of seeing (for learning) vs without?

* if you *didn't* have a gem of seeing and didn't know you were making mistakes, how could you tell?

* you'd *believe* you *weren't* making the mistakes. reality tells you that, right?

* but you would be, and they'd still be a constraint.

* this is the case for ~everyone.

* now imagine you find a gem of seeing, and suddenly you can see all these mistakes you're making.

* you can see other ppl's mistakes too, but they don't believe you if you tell them.

* how much of a difference would that make to your life?

* could you afford to ignore the possibility?

Being bad at learning (and evasive/dishonest about it, too) is like never having a gem of seeing.

Being really good at learning, and thus philosophy, is like having the gem of seeing. In reality your 'Truesight' skill builds up incrementally though; it's not all in one go like the gem.

#learning (i'm tagging stuff to make it easier to find later)


Max at 11:07 AM on November 18, 2020 | #18721 | reply | quote

re: collaborative writing project

#18714

> @Anne, I thought we might be getting stuck and listing current potential problems seems like a good way to avoid that (or start at least).

> - A place for a dedicated discussion. I sort of expect it to be high volume. Do you also expect that?

High volume is okay. Maybe somehow that will push me to think and write faster.

You're working on a place for a dedicated discussion, right?

> - Topic choice -- there's plenty of stuff that comes to mind about what *I* personally want to write about, but I get stuck suggesting something. Maybe we could each brainstorm some topics that are simple that we don't know too much about. Then we could swap lists/trees and pick ones that sounded good from each other's list? That should work as long as we end up with 1 or more topics.

Okay, I'm willing to do this. But see below about topics.

> - We might have an issue with latency and work volume? Like if we have trouble being synchronous (there aren't many good overlapping times I suspect) then slow / low volume messages means writing will take longer. Smaller and easier topics will help get to a result faster and move on to a new topic (repeating the cycle as many times as we like).

Actually, we seem to have a good chunk of overlapping time. You're still posting things now as I write this and I've been awake for nine hours. Should we set up times to work together? Should we do some voice chat as well as text?

> - Maybe the potential size/difficulty is a bit intimidating? I want to do it because I think I can learn something about learning and/or discussing. I also want to make progress in both those things regardless, so I'm slightly worried that maybe it's inefficient. An easier topic will help there. Also, nothing bad will happen if we pick something too easy.

I too am hoping to learn something about learning and/or discussing, and also about writing.

I think we should aim first for something too easy. Maybe we should first look for easy topics that we already have something to say about, not topics we'd have to research.


Anne B at 11:31 AM on November 18, 2020 | #18722 | reply | quote

re: collaborative writing project

> Actually, we seem to have a good chunk of overlapping time. You're still posting things now as I write this and I've been awake for nine hours.

This isn't necessarily typical. It's 7.42 am for me atm. I'm usually waking up about now.

> Should we set up times to work together? Should we do some voice chat as well as text?

Yeah, that sounds like a decent way to start (can do set up stuff over discord).

> I think we should aim first for something too easy. Maybe we should first look for easy topics that we already have something to say about, not topics we'd have to research.

This sounds like a good plan.


Max at 12:45 PM on November 18, 2020 | #18723 | reply | quote

I've brainstormed some topics. Some of them require research and some don't.

Maybe we should hold off on doing more until we have our dedicated place for the project.


Anne B at 1:07 PM on November 18, 2020 | #18724 | reply | quote

#18724 Yup. I need a bit of time beforehand anyway. I'll brainstorm some stuff too.


Max at 1:10 PM on November 18, 2020 | #18725 | reply | quote

your life depends on getting unstuck

https://youtu.be/2xUcSFh4IkU?t=1194 - curi Philosophy Stream Highlights #4

> And a lot of times the first time someone gets majorly stuck, they stay stuck forever and it starts changing them.

There are a few ~common sense type analogies that come to mind. Like with jogging there's the idea of 'don't slow down because you won't be able to start up again'. Both jogging and learning are endurance activities. You have to maintain like a consistent minimum to keep going, or if you slow down too much you have to try way harder to get going again. A good attitude can make that easier but it's never free.

The foundation of a good learning strategy must include staying unstuck.

Your reaction time to getting stuck is like an overhead on learning. It's ~randomly distributed so on average it's like a *constant* overhead. A constant overhead *you can remove*.

You can't predict where you'll get stuck (or when or how often), but you *can* get better at staying unstuck. If you don't get better, then you get ~random periods of being stuck with ~random durations. If you just wait for them to resolve themselves your rate of progress will tend to 0.
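To make the "constant overhead" point concrete, here's a toy simulation (my sketch only; the stuck probability, reaction times, and progress model are all made-up assumptions, not anything measured):

```python
import random

def progress_after(days, stuck_prob, reaction_days, seed=0):
    """Toy model: each free day you either make 1 unit of progress or get
    stuck; being stuck costs `reaction_days` extra days (your reaction time)."""
    rng = random.Random(seed)
    progress = 0
    stuck_left = 0
    for _ in range(days):
        if stuck_left > 0:
            stuck_left -= 1          # still stuck: a wasted day
        elif rng.random() < stuck_prob:
            stuck_left = reaction_days  # newly stuck; reaction time sets the cost
        else:
            progress += 1
    return progress

# Same stuck frequency for both; only the reaction time differs.
slow = progress_after(1000, stuck_prob=0.05, reaction_days=20)
fast = progress_after(1000, stuck_prob=0.05, reaction_days=2)
```

In this model you get stuck just as often either way, but reacting 10x faster substantially increases long-run progress: the overhead scales with reaction time, and reaction time is the part you can improve.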

I once summarised part of BoI as "the alternative to problem solving is death". The alternative to staying unstuck is a wasted life.

Everyone can get better at being unstuck. It's not like a super abstract idea that's out of reach. It's easy to start. Easier than you think, and there's always a new option to try. The hard part is being honest enough with yourself to start; *believing* you can take the next step, rather than actually taking it. (Don't know where to start? Don't know any techniques? Don't know if you're stuck? That's your starting point. Ask questions. Seek answers. That's part of why FI exists, so you have a place to do that.)

If you're stuck, give yourself the chance to get unstuck. You can't rely on other people looking out for you. Sometimes you can't even rely on *you* to look out for you. Take matters into your own hands. Your life depends on it. You depend on it. Get unstuck.


Max at 2:24 PM on November 18, 2020 | #18726 | reply | quote

getting unstuck

You also have to get good at knowing when you're stuck.


Anne B at 3:08 PM on November 18, 2020 | #18727 | reply | quote

happy

#18727 Yes.

I am having a moment of excitement b/c I wanted to note that

gem of seeing and self-judgement are relevant things I want to link -- **things I wrote**. I feel like I have a very tiny library. But I didn't know what it felt like to have a library before. I'm very happy right now.


Max at 3:22 PM on November 18, 2020 | #18728 | reply | quote

#18728 I looked up the definition and happy isn't a strong enough word. Joy is better.


Max at 3:24 PM on November 18, 2020 | #18729 | reply | quote

#18728 Great!

BTW i've been using it but idk if *library* is the best word. I don't know many good options but there's one good word which may be better: *archive*.


curi at 3:39 PM on November 18, 2020 | #18730 | reply | quote

#18718

Great reflection.

I’d argue the benefit you got in the form of a new and good habit, i.e. frequently reflecting on your own values during times of choice, was worth it, even if you did know better.


Amaroni at 10:30 PM on November 18, 2020 | #18732 | reply | quote

errors in *my* thinking vs errors in *their* thinking

I think most ppl have a focus issue when it comes to thinking about errors in discussions/debates.

Errors in what other ppl say are not very interesting. But errors in what *you* say should be v. interesting *to you*. (And similarly, errors in what *I* say are very interesting to *me*.)

If you're debating someone, and they make 10 errors and you make 2 errors: how should you feel about that?

I think you should feel the same regardless of whether they made 0 errors, 10 errors, or 100 errors. You made 2 errors, and that's what you should care about. There's no such thing as "winning" if you're making mistakes; for win-win you want to have had those mistakes corrected before the end of the discussion. Those 2 errors are the only things that *you* are able to improve *without relying on others*. They're the only errors you can guarantee that you can fix. You have to become aware of them first, but after that you can fix them.

Say you have a conversation transcript of you debating someone. You show that to someone and ask for feedback. When you're being given feedback, what should you want the focus of the feedback to be on?

The bad (but common) way of thinking about this is to want the reviewer to *agree* with you, and say things like 'your opponent was bad' and 'your arguments were good'. A little bit of that is okay, and some discussion of the opponent is okay when it's relevant.

**But!** *The **only** part of the reviewers feedback that will **actually** help you is the reviewer's **criticisms of your arguments**.*

Praise will never help you.

Criticism of the opponent will rarely help you.

The only thing that will consistently and reliably help you become a better thinker, discussion partner, debater, and philosopher is *criticism of your own errors*.


Max at 4:31 PM on November 19, 2020 | #18736 | reply | quote

#18730

> BTW i've been using it but idk if *library* is the best word. I don't know many good options but there's one good word which may be better: *archive*.

Yeah. I think I was originally a bit mistaken on what you meant by *library of criticism* (that might have been early in tutoring, maybe the yes/no videos?). I thought you were talking more about your blog -- something that *other* people could interact with. Now I think it's more like a thing in your head for checking new ideas against, and the blog is auxiliary.

*Archive* is a decent word.

*Garden* comes to mind as a metaphorical option.


Max at 5:40 PM on November 19, 2020 | #18740 | reply | quote

#18740 Computer programming has something kinda like a library of criticism: a *test suite*. Maybe in philosophy it could be called a *test suite for ideas*.
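As a rough illustration (all names and checks here are made up by me, nothing standard): criticisms could work like unit tests, where each test either passes an idea or returns an explanation of the flaw.

```python
# Sketch: criticisms as tests. Each check returns None (pass) or a
# string explaining what's wrong with the idea. Hypothetical checks.

def crit_claims_certainty(idea):
    if "definitely" in idea or "certainly" in idea:
        return "claims certainty; knowledge is conjectural"
    return None

def crit_hand_waves(idea):
    if "somehow" in idea:
        return "hand-waves a step with 'somehow'"
    return None

CRITICISM_SUITE = [crit_claims_certainty, crit_hand_waves]

def run_suite(idea):
    """Return every criticism that applies (empty list = no known flaws)."""
    return [msg for crit in CRITICISM_SUITE
            if (msg := crit(idea)) is not None]
```

E.g. `run_suite("this will definitely work somehow")` returns both criticisms, while an idea that trips no checks returns `[]` -- it survived the suite, for now. New criticisms get added to the suite over time, like regression tests.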


Alisa at 5:44 PM on November 19, 2020 | #18741 | reply | quote

#18741 Mm, thinking down that track: maybe like *criticism suite*, *critical test suite*, *suite of criticism*, *critical unit tests*, *unit crits*

Wrt the *archive* side, maybe like a *cache* would be a fitting description. When you change your mind on something it's akin to cache invalidation.

The idea of an art *exhibition* came to mind too; like it's the stuff that's good enough to be on display.


Max at 6:12 PM on November 19, 2020 | #18742 | reply | quote

What's the alternative to making a mistake only once?

Nearly all mistakes we make aren't a big deal as one-off mistakes. There are exceptions, like if you crash a car (you might kill someone). Maybe doing/learning philosophy, too. Mistakes are inevitable, so for most things making ~zero mistakes long-term isn't practical, and aiming for it might make the thing harder.

If mistakes are going to happen, they have to happen at least once.

I think there's a common intuition that the alternative to making a mistake only once is making the mistake 2 or 3 or 4 times, etc. *This is wrong.*

The real alternative to making a mistake once is *making a mistake infinitely many times*. That's what most people do, and that's what getting stuck is. If you make a mistake a handful of times you might not even notice it, esp. if it's not a bottleneck.

If you haven't made the choice to prioritise a learning attitude, then why would making a mistake the 2nd or 3rd or 4th time be any different to the first? If you have a learning attitude, then each mistake is an opportunity. Each mistake is *potentially* the last time you make it. If you don't have a learning attitude then you will only stop making mistakes by *~luck*.

If you don't have a learning attitude, then a new mistake is potentially the beginning of an *unbounded* series of mistakes. That will mean you have a worse life.

You should want to *avoid this situation at all costs*. Luckily, if you are stuck, you're always at the *beginning* of that unbounded series of mistakes. **You can choose to apply bounds by adopting a learning attitude!**

Everyone is at the beginning of *an* infinity. It's your *choice* whether that is *your* infinity, or your *mistakes'* infinity.


Max at 7:38 PM on November 19, 2020 | #18744 | reply | quote

1st order stuck vs 2nd order stuck and structural epistemology

Note: I think this is an *advanced* topic on learning. I mention this for two reasons: 1. it's not something ppl should be worried about early on, and learning the basics is more important; and 2. I'm less sure about this than my other posts on learning so far. I would still be surprised if there were *significant* errors, though. I'm like 8/10 confident on 1st and 2nd order stuck. I'm 9+/10 confident on the structural epistemology parts.

What happens when two people learn the same thing? It's common sense that there will be some differences in what they learn, how well they can apply it, etc. Some of that is due to pre-existing knowledge, but is there more to it?

## can two people ever learn the same thing?

Consider: Alice and Bob want to learn about something particular, and have similar background knowledge. The first two concepts they need to learn are *X* and *Y*. After that, there's a third concept, *Z*, that builds on both of those. There's also concept *W* that's sort of similar to *(X,Y,Z)* but a bit different too.

Let's consider a concrete example: building a computer. Alice and Bob have finished high school and want to build a computer over the Christmas break. They will need to learn about the components of a computer so they know what to buy (cpu, motherboard, ram, ssds and/or hdds, the case, etc). They also need to know some super-basics about electronics: how to plug all their components together, what cables they'll need to use or buy, calculating power consumption, etc. Those are concepts *X* and *Y*. Concept *Z* is how all the components work together in a complete computer and which configurations work for particular purposes (e.g. gaming, office work, video editing, streaming, music creation, digital art, etc). They need to know some other things too (like how to install an operating system, and background knowledge) -- we don't need to consider those things for this example.

Once they know *Z* they can choose and buy their components and then put it all together. They're both successful.

As it happens, both of them applied for electrical engineering courses at university, and both are accepted. At the end of their break, they take their computers and their knowledge about computers -- *(X,Y,Z)* -- with them to university.

Before we continue let's think about what Alice and Bob learned while studying *(X,Y,Z)*. Can we find out if they learnt *the same thing* or not? What does *the same thing* mean in the context of learning?

I think there are two important ways to look at whether they learnt *equivalent* things or not. One of them is about *the tasks they can perform*; and the other is about *the ideas themselves*.

If we're only concerned with the *results* that certain knowledge gives, we are talking about the knowledge's *denotation*. If Alice and Bob can *perform the same tasks* (they get equivalent results with only negligible differences) then we say they have the same *denotational knowledge*.

We can say that Alice and Bob learned the same thing because they both built a computer, and they can both answer the same questions about the configurations that make sense for certain use cases. This is like the *standardized tests* that are common in schools. A test defines a checklist of inputs (questions) and outputs (answers) that students should reproduce. For some types of tests, like text analysis in English, the answers aren't explicitly listed; rather, qualities of good answers are listed (like 'identifies techniques' or 'discusses the interaction of themes', etc). For other tests the answers are explicit (e.g. multiple choice tests); and finally some tests have a mix of both (like maths tests, where the final answer is explicit but the algebra to get there is not).

What if we're concerned with the other option: *the ideas themselves*? How can we compare those?

We can't directly observe ideas. Even if we could see inside Alice and Bob's brains (something they might not like), how would we know what to look for? We can't just ask them either: they can't tell us exactly what their ideas are, and we can't ask them questions on the topic either -- that would basically be like a standardised test. So how do we know if they learnt equivalent things?

Even though we can't *directly* observe ideas, there is a way ideas are used other than to produce results -- *ideas are building blocks for other ideas*. This means that if Bob and Alice learnt *the same thing*, then they should be able to build similar *new* ideas with their *(X,Y,Z)* building blocks.

Alice and Bob will learn a new idea similarly if they have similar building blocks -- if their knowledge has the same *structure*. It's like they have the same lego set of ideas. If their knowledge has *different* structures, then they can't build the same things, like if they had lego sets with different pieces. *Sometimes* they can build the same things, but *not always*. We can say Alice and Bob have the same *structural knowledge* if they can build the same ideas.

Let's consider Alice and Bob learning a basic concept, *W*, about electronics in first yr uni and how it might interact with *(X,Y,Z)*. *W* is similar to *(X,Y,Z)* but also different. Alice and Bob are told that to make electronics you need components and one of: a circuit board, a breadboard, or maybe just a mess of wires. They're told about attaching components to each other, and about power, data, ground, and things like that. This is concept *W*. Alice and Bob each have a different question for the tutor:

Bob asks:

> How do you connect components if they're the wrong shape, or have different wires?

His full idea was something like: computer components connect together using cables or directly using sockets. The wires in particular cables go with particular connectors only -- they don't go with other connectors. The connectors you need on a cable are the male/female versions that correspond to the connectors on the devices. If a device connects directly, then you can use a cable with one male and one female end to connect the device somewhere else. If you plug everything together with matching sockets, then it'll all work out.

Bob asked his question b/c his understanding was at the level of emergence of cables and connectors and things you could plug together.

Alice asks:

> How can we replace components if some components are out of stock or too expensive?

Alice had a different idea, something like: sockets and connectors are chosen to make sure ppl plug the right things together. Manufacturers choose particular wires and shrouding based on: availability and price, the requirements of the components being connected (i.e. standards like HDMI), what the customer expects, and how the cable will be used (are there lots in a bunch, does it need to go round corners, etc). You can cut up multiple cables and join them together (splicing) to make cables with different combinations of male/female connectors, to change between compatible connectors (an adaptor), or to replace faulty wires -- provided you are combining wiring of high enough quality (excess capacity). Cables are only there to deliver power to components or transmit signals between them.

Alice asked her question b/c her understanding was at the level of emergence of wires and semi-conductors with some economics thrown in.

I hope you can see how their knowledge differs in *structure* even though they're both able to use it to put together the same computers, diagnose the same problems, know which replacement parts or upgrades to buy, etc.

*Structural knowledge* matters when we want to *build on*, or *change* knowledge. When we want to use it for different things, or apply it to new situations. If the *structure* of *(X,Y,Z)* is different in different people, then they can still have the same *denotational* knowledge, but they will diverge when they learn new things.

Just because some knowledge has the same *denotation* does not mean that it has the same *structure*.
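To make the distinction concrete in code (my own toy example, in the spirit of curi's structural epistemology articles linked below -- the function names and tasks are made up for illustration): two pieces of knowledge can have identical *denotation* -- every black-box test passes for both -- while differing in *structure*, which changes what's easy to build next.

```python
# Two functions with the same denotation (identical input -> output)
# but different structure.

def sum_to_n_loop(n):
    # structure: repeated addition
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_to_n_formula(n):
    # structure: closed-form arithmetic (Gauss's formula)
    return n * (n + 1) // 2

# Denotationally equivalent: any black-box test passes for both.
assert all(sum_to_n_loop(n) == sum_to_n_formula(n) for n in range(100))

# Structurally different: building something *new* on each diverges.
# Turning the loop version into a sum-of-squares is a one-line edit;
# there's no one-line edit that does the same to the formula version --
# you'd have to derive a new closed form.
def sum_of_squares_loop(n):
    total = 0
    for i in range(1, n + 1):
        total += i * i
    return total
```

The two versions are interchangeable for every task in scope today, but they diverge the moment you try to build something new on top of them -- which is the structural point.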

We do know how to record some elements of *structural* knowledge. curi gives some examples in Structural Epistemology Introduction Part 1 and Structural Epistemology Introduction Part 2.

But ideas in the mind are different from ideas that are written down. We don't know how to compare ideas directly. However, we do know some things about people and how ideas are created. Ideas aren't written into your brain like things are written on paper. Ideas are created through an evolutionary process -- exactly *how* we don't know. You learn something when your brain creates an idea that explains the things you're trying to understand -- or, for simple things, when you can repeat an action or achieve a result. That means your brain needs to combine pre-existing ideas repeatedly (thinking) until it finds an idea that satisfies some success criteria. Your brain can do a lot of auto-criticising; that's when you're *thinking* but it's like *work*, like you're waiting for your unconscious mind to tell you the answer. Sometimes you have an idea that's *nearly* right, only to learn of a criticism later (maybe you came up with it or someone else did). Our brain makes *guesses* and nothing will guarantee any of those guesses are correct, but we *can* know when something is *not* correct if we know a criticism of it.
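The guess-and-criticise loop described above can be sketched as a toy program. This is purely illustrative -- the "ideas" here are just numbers, and the success criterion is invented; real idea-creation is an unknown evolutionary process, as the paragraph says.

```python
import random

def create_idea(building_blocks, criticism, rng=random.Random(0)):
    # Guess-and-criticise loop: combine pre-existing "ideas"
    # until a guess survives criticism.
    while True:
        # guess: combine two pre-existing ideas
        guess = rng.choice(building_blocks) + rng.choice(building_blocks)
        # criticise: reject any guess we have a criticism of
        if not criticism(guess):
            return guess

# success criterion: we want an even number bigger than 10;
# criticism returns True (a criticism exists) for anything else
idea = create_idea([3, 5, 8], criticism=lambda g: g % 2 != 0 or g <= 10)
```

Nothing guarantees any particular guess is right, but each guess can be rejected the moment we know a criticism of it -- that's all the loop does.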

When two people learn the same thing, they might have the same denotational knowledge (within some scope), but they'll ~never have exactly the same structural knowledge. There will always be some differences.

## 1st order stuck and 2nd order stuck

When you get stuck, there are two ways that can happen.

First, you can *stop learning altogether* (this can be specific to a single topic or it can be bigger, too). This is *1st order stuck*. It's the normal kind. It's fixed in the normal ways and normal techniques work. In the example above, Bob was 1st order stuck; he just needed to learn some new things about circuitry to move on.

You *can* be 1st order stuck because of problems with your *learning method*. You could be doing the wrong type of practice, or not learning from certain media types (e.g. if you hated videos), or overreaching in general.

Second, you can be making a *mistake in the act of knowledge creation*; you can have a *structural* problem with *the results of your method*. This is like whether you're creating highly general knowledge or not. This is different to 1st order stuck b/c *you can still make progress*, and technically *that progress can still be unbounded*. But you will need to do more maintenance of your existing knowledge and it will be more fragile.

This is being *2nd order stuck* and means **the rate of your learning will suffer.** You won't be able to come up with ideas as well as exceptional people, you won't be able to use ideas as well as exceptional people, and most importantly *you won't be able to learn as fast as exceptional people*. It's overhead on your *velocity and acceleration*, not on the distance you can travel.

2nd order stuck isn't as clear cut or pervasive as 1st order stuck typically is. It can vary by topic.

Author's note: I think this is currently the limit of my understanding.

How do you tell if you're 2nd order stuck and on what? How do you learn/practice/fix such a mistake? What are the techniques to get 2nd order unstuck?

Can you turn 2nd order stuck into 1st order stuck? ... maybe? The only way I can think of is to learn about learning, but I'm not sure that's enough.

## Further reading

- <https://curi.us/988-structural-epistemology-introduction-part-1>

- <https://curi.us/991-structural-epistemology-introduction-part-2>

- <https://fallibleideas.com/knowledge-structure>

- <https://curi.us/1497-programming-and-epistemology>

- <https://curi.us/1370-fragile-knowledge>


Max at 12:25 AM on November 20, 2020 | #18748 | reply | quote

The above post/essay/thing is the longest (good) FI thing I've written, I think. It took me 2.5hrs to write. I don't think I was distracted for much of it, if any. I only paused a few times to think of what to say next and those times were for less than 60s (probs less than 20s but I didn't measure them). It's nearly 2000 words long, and I'm pretty happy with how I did. I think it might have more original thinking than usual, which I'm excited about (the 1st/2nd order stuck stuff, not the structural epistemology).

I was considering not putting in the example when I started. I think it was probably good that I did. That said, it took longer to write because I included the example, and I'm not sure it was necessary or efficient. Either way, I do think it makes things clearer.


Max at 12:33 AM on November 20, 2020 | #18749 | reply | quote

project: book on learning - brainstorm of content

I had a go at a first draft for an outline/structure for my project on a series of posts on learning (and maybe a book). If every entry was a post (excluding further reading) then there'd be ~70 posts/chapters/things w/ this structure (I added some while writing this, so more than 70). I imagine many of them would be like 500 words on average, so it'd be 35k words total. That seems do-able, but is much bigger than anything I've taken on before. On the other hand, I have written like 3k+ words on this already (maybe more like 5k) so that's like 15% of the way there without really trying.

I think I could write at least a few hundred words on each of these topics (more on some, less on others).

I think I'd write the book using examples and things from speedrunning. There are a few reasons for this:

- speedrunning is an easy topic to analyse and apply techniques to

- there's already a good amount of speedrunning related knowledge/analysis on FI

- speedrunning is a highly competitive sport where knowledge and practice are both highly valued; people care about techniques and efficiency -- overlapping values

- it's popular, so a book like this could help lots of people -- large audience (even a tiny success in the speedrunning community is big relative to FI)

- seems like those people are starved for content to some extent, and keen to find an edge -- demand

- easy to measure results -- evidence

Popularity, money, fame, etc are not goals for me w/ this project.

My goal is to learn philosophy (and particularly about learning) and establish a track record. There's a minor goal to produce work that is useful for FI ppl.

I'm also considering demonstrating some of the techniques via speedrunning myself. If I can write a book on learning, why can't I use the techniques to achieve something exceptional? Well, my experience speedrunning is minimal but non-zero (that's a risk), and I didn't excel at games growing up, so there might be some ground to make up. It also might be a big investment of time, which is not something I think I can necessarily afford; but I will need downtime, so doing it during that is an option. I can always do speedrunning as a hobby and collect data as I go along. There's a big risk in focus, too: I don't want to compromise learning philosophy for the sake of speedrunning.

Outline:

* Intro

* Gem of Seeing

* Goals and Achievement

* The Alternative

* Learning First

* Why

* Attitude - I - Life

* Stuck - I

* Choices

* Attitude - II - Step by Step

* Life Isn't a Speed Run

* Main Quest

* Side Quests (Indirection)

* Attitude - III - Problems

* Criticism & You

* Perspective

* On Yourself

* On Life

* On Problems

* On Others

* Opportunities

* Creation

* Exploitation

* Attitude - IV

* Alignment

* Honesty - I

* Conflicts

* Priorities - I

* Your Future

* Direction

* Freedom & Autonomy

* Bounded vs Unbounded - I

* You Can be a Beginning of Infinity

* What / When / Where

* Learning - What it means

* Learning cycle

* Progress

* Bounded vs Unbounded - II

* Bottlenecks & Capacity

* Focus - I

* Priorities - II

* Mistakes

* Stuck - II

* Honesty - II

* Overreaching - I

* How

* Humility & Doubt

* Questions - I

* Overreaching - II

* Honesty - III

* Priorities - III

* Focus - II

* 1-3 things

* Questions - II

* Hooking

* Excess Capacity

* Self Judgement

* Overreaching - III

* Questions - III

* Sense of Life (?)

* Creation

* Bounded vs Unbounded - III

* Ideas

* Judgements

* Stuck - III

* Getting Unstuck

* Questions - IV

* Advanced

* Structural Epistemology

* Learning, the Mind, and AGI

* 1st / 2nd Order Stuck

* Questions - V

* Overreaching - IV

* Further Reading & Acknowledgements

* Fallible Ideas & Elliot Temple

* Goldratt

* Rand

* Deutsch

I had a few thoughts on a title (I have not focused on it), and I've just been thinking of it as called "Learning". In parallel w/ reading Rand's *Philosophy* I started thinking of titular variations like "Learning: Who Needs It?" or "Learning: You Need It". TBH I don't think that's a great idea, and I think I like just "Learning" more anyway.

I'm not sure about breaking it up into Why, What/Where/When, and How. I think it makes some sense in that it's like: convince the reader this matters, explain the concepts and building blocks, show how to put it all together. The sections/parts can be loose too, like just broad strokes, not hard and fast categories. I think leaving advanced stuff to the end is good, though. It marks it as more optional, too.

I don't think I'd write it in the order that it is here. Some latter bits will be easier, etc. I think I'll need the most time on the "Why" section.

I posted the outline to my site; this is where I'll update it in future. I'll post here if there are major updates. https://xertrov.github.io/fi/posts/2020-11-20-learning-book-content-brainstorming/


Max at 2:53 AM on November 20, 2020 | #18750 | reply | quote

I forgot about the lack of list support on curi.us -- check my site for the correct indentations.


Max at 2:54 AM on November 20, 2020 | #18751 | reply | quote

how to do FI

This is a bit of a guess at a general method for doing FI. I think it's roughly what I'm doing. there are probably things I've missed. feedback welcome ofc.

----

First, you have to want to learn and read and write down your thoughts. If that isn't the case then you have a problem with mindset/attitude. You might need to find a decisive reason to want to start writing (I needed this). My reason is establishing a track record. I don't think I can do the things I want to do without getting better at philosophy, and I need a track record for that and to be taken seriously (and to expect to be taken seriously). Writing is now a direct part of my plan (my *method*, not just my *goal*), so I want to do it. There are other pitfalls you can run into at this stage, like if you don't want to learn or read.

Then:

- first, write down all your new thoughts with priority. don't expect ppl to read them, you're writing them down for you (to practice writing, to get them clear in your mind, to have a track record, to expose them for criticism, etc). one exception to this is if you have too much to write and can like go and go and go. I don't think that's v common but that's a different situation if you fall into that bucket.

- if you run out of things you want to write down, or don't feel like it at that time, then you should learn by reading new posts or watching curi videos/podcasts or consuming good stuff (mb high concentration like Rand or Goldratt or low concentration like okay stuff on YT). take notes, esp questions. you don't need to take like lots of notes like ppl do at uni. note down important things.

- you should keep up with your discussions as much as you can, and with priority over new discussions and new materials. it's okay if you want to end/postpone a discussion because you think there's something you should learn first, but if that's the case you should say so. if you need to abandon a discussion, then it's better to say that's what you're doing as early as possible rather than to not say that.

- you should try to make as effective use of criticism as possible. as you get better at that you can have much shorter conversations before realising that an error happened and what/how/why/etc. **do postmortems**, you don't have to for everything, but it's good practice to do it for simple stuff and it's really important to do them for large things when you understand the mistake. you want to expose your EC to criticism, too.

- get used to having a content backlog and being decisive about what you want to focus on. get used to organising your time so you can keep commitments around discussions and things. practice being consistent and not evading or abandoning things you start.


Max at 1:06 AM on November 22, 2020 | #18803 | reply | quote

re Max's

https://xertrov.github.io/fi/posts/2020-11-20-1st-order-getting-stuck-vs-2nd-order-getting-stuck-and-structural-epistemology/

which asks:

> can two people ever learn the same thing?

see also

http://fallibleideas.com/originality

in particular the part talking about:

> here's a metaphor to help understand the issue: **everyone's mind has its own programming language.**


curi at 4:54 PM on November 22, 2020 | #18823 | reply | quote

I read *Anthem* yesterday. It's quite short (like ~1hr to read casually) and I recommend it.

It reminded me a lot of *1984*, esp at the beginning. *Anthem* deals with the philosophy a bit more directly, tho, and is written more simply. (At least from what I can remember of 1984.)

In Australia, 1984 is a text that's sometimes (always?) studied in advanced high school English, and I was a bit surprised to realise yesterday that Anthem isn't included in that module. 1984 spends a lot more time on thoughtcrime / doublethink / controlling thought via language, but the essence of all of that is there in Anthem. IDK why Anthem isn't included in that module, but I hope it's not the common anti-Rand attitude (though I suspect it is).

I thought the setting of Anthem was a lot more believable than 1984; like if you're going to destroy ppl's ability to think then it doesn't make much sense to have a highly advanced and somewhat productive society. Though 1984 is set in the near future and Anthem is set like ~hundreds of years in the future. It always annoyed me a bit that 1984 was set like 40 years after Orwell wrote it (1948) but nobody can remember anything from 3 decades ago.


Max at 5:35 PM on November 24, 2020 | #18837 | reply | quote

Note that Anthem was published in 1937, well before 1984's publication, and before seeing WWII or its aftermath with the USSR.

Yes Rand's is more realistic. Rand understands how socialism is incompatible with science and wealth.

There are other kinda similar books. Everyone mentions *Animal Farm* (good IMO) and *Brave New World* (read long ago, liked it fine at the time) but not *This Perfect Day* (I liked it) or *We* (disliked first few pages, plan to try reading it again but haven't yet).

Different but kinda related is *One Day in the Life of Ivan Denisovich* which is kinda like a really short version of some of *Gulag Archipelago*. It's about the actual USSR instead of a sci-fi dystopia with some inspiration from Russian communism. Rand's *We The Living* is also about the actual USSR and it's very good and gets less attention than it deserves compared to AS and FH (it's not as good as them but it's still a great book, and for Rand's fiction AS/FH get ~all the attention).


curi at 6:34 PM on November 24, 2020 | #18840 | reply | quote

> *We*

I was introduced to this book some years ago, essentially as '1984 but before 1984 was written'. I'm not sure if there are multiple translations, but I'm pretty sure the original was in Russian. (googled it: yup, also it was written in 1920-21!)

> Note that Anthem was published in 1937

Yeah, I noticed that and wondered if Orwell had read *Anthem*

> well before 1984's publication, and before seeing WWII or its aftermath with the USSR.

Ppl seem to have this idea of science that glorifies prediction (though they apply it inconsistently, e.g. the explanation of quantum computation from MWI is often ignored). This glorification of prediction seems to be inconsistently applied to literature too.

---

> Rand understands how socialism is incompatible with science and wealth.

This is a big one that ppl don't seem to get. I think most ppl think that progress and wealth are like independent of systems like communism, and they use excuses like 'those ppl just didn't do communism right'.

On the communism-excuse note: everyone in favour of capitalism could argue the same thing -- nobody has done capitalism right either! Like, in the last 100 yrs, when was capitalism done right? It wasn't. So by the socialists' logic, they can't argue against capitalism on the basis of how things are *now, in contemporary 'capitalist' systems,* for the same reason they *do* argue that crits of communism don't apply: because ppl didn't do it right. Their logic is flawed, ofc, but I only noticed that contradiction during/after reading *Anthem*.

---

I liked *Animal Farm* too. Haven't read *Brave New World* or the others you mentioned, though.

---

PS. I added http://fallibleideas.com/originality to further reading on https://xertrov.github.io/fi/posts/2020-11-20-1st-order-getting-stuck-vs-2nd-order-getting-stuck-and-structural-epistemology/. thanks for that


Max at 6:55 PM on November 24, 2020 | #18841 | reply | quote

#18841 Did you read *We*? If so, do you think it's good?


curi at 7:00 PM on November 24, 2020 | #18843 | reply | quote

#18843 No. It's only like 2x the length of Anthem, though, so might do that tonight + tomorrow. Will post here if I do.


Max at 7:11 PM on November 24, 2020 | #18844 | reply | quote

#18844 I think I was way off on the length thing here, *We* is more like 4x.

I googled for the length and this link mentions ~1hr at 500 WPM. After I posted #18844 I started to wonder how that could be accurate b/c I didn't push my speed while reading it, so the numbers started to feel wrong. This site has it at about 1hr to read, too, but 250 WPM (which sounds better) and has the length at ~15k. *We* is like 62k words.
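The arithmetic behind those estimates, as a quick sketch (the word counts are the rough figures from above):

```python
# Reading time in minutes = words / words-per-minute.
def reading_minutes(words, wpm):
    return words / wpm

# ~15k words at 250 WPM is about an hour -- consistent with the ~1hr estimate:
print(reading_minutes(15_000, 250))  # 60.0

# but *We* at ~62k words is closer to 4hrs at the same pace:
print(reading_minutes(62_000, 250))  # 248.0
```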


Max at 7:15 PM on November 24, 2020 | #18845 | reply | quote

#18845 just check book lengths yourself. i pasted it into a text editor and it's 19k after deleting the text before and after the actual novel part.


curi at 7:31 PM on November 24, 2020 | #18846 | reply | quote

#18841 I haven’t read *Animal Farm*, *1984*, or *Brave New World*. I did look at the below linked video summaries of them just some weeks ago. I mostly liked them. I don’t know if the summaries presented the books well though. Maybe someone who has read the books can comment on the video summaries.

*Animal Farm*

https://www.youtube.com/watch?v=BFP1IMyKyy4

*1984*

https://www.youtube.com/watch?v=h9JIKngJnCU

*Brave New World*

https://www.youtube.com/watch?v=raqVySPrDUE


deroj at 6:26 AM on November 25, 2020 | #18848 | reply | quote

#18848 I think videos like that help teach you *about* a book but they don't really help you *learn* the book. Like you don't learn the things the book has to teach you via those videos. They did remind me of plot points I forgot though. I am not sure the 1984 video mentioned "doublethink" for example, but that's a major part of the book.

To get an idea of how much you miss: here's another 1984 video. How much was different between the two? Which ideas were only in one of them? If those two videos are that different, how much was in the book that wasn't in either video?

As far as like reminders / summaries go, the first two are okay, but I don't think they're a replacement for the books.


Max at 7:05 AM on November 25, 2020 | #18849 | reply | quote

thoughts on Anthem and 1984

Watching the video deroj linked on 1984 made obvious some of the major similarities it has with Anthem. like the protagonist journaling, language/thought control, a love affair, and a spark of rebellion.

I first thought that one big difference was a happy vs sad ending -- Anthem is hopeful, 1984 is not. But the reason for that, and a more significant factor, is that Rand writes about Heroes, but 1984 and similar books are about normal ppl. That's why her books end hopefully, and books like 1984 don't. I think that's also why ppl think Rand's books are unrealistic: *ppl don't believe heroes exist*. (curi's talked about ppl's complaint that Roark isn't realistic, which is where I'm drawing some of these ideas from.)

A world without heroes is sad. It's a world where you can't be exceptional, you can just be better than average. A world in which you can't hope to make breakthroughs and progress, you can just push the needle a bit. It's a world for Sisyphus and sisyphites.


Max at 7:14 AM on November 25, 2020 | #18850 | reply | quote

> *ppl don't believe heroes exist*

Hmm, not sure about that. e.g. look at how ppl treat Musk. I think I'm missing something.


Max at 7:18 AM on November 25, 2020 | #18851 | reply | quote

#18849

> I think videos like that help teach you *about* a book but they don't really help you *learn* the book. Like you don't learn the things the book has to teach you via those videos.

I agree. I think it depends on what kind of book it is and who made the video summary, but generally I think one can pick up some major themes of a book in these kinds of videos.

My goal was to learn a little bit more *about* the books, but I didn’t want to read them. They seem very depressing to me. I do not like to engage with depressing stuff.


deroj at 10:40 AM on November 25, 2020 | #18852 | reply | quote

#18851 People commonly don't think they or a regular person can be a hero. Partly this seems like an excuse and partly like they see a huge gap between themselves and *high social status persons*. In various ways I think people confuse heroism, or virtue or merit in general, with social status, and expect the two to match.

People don't explicitly deny the possibility of an undiscovered hero in general, but most people sure aren't going to be the one to recognize that. (The two passages about pretzels and middlemen in The Fountainhead are relevant.) And they will get offended by a bunch of traits that violate social rules, even if they'd ignore or forgive those traits in a person who already had social status. People's judgments re conformity tend to be very biased. Social rules aren't very objective. People mostly try to make their own conclusions about social status fit whatever they think that a lot of other people already concluded.


curi at 11:13 AM on November 25, 2020 | #18853 | reply | quote

#18852

> I agree. I think it depends on what kind of book it is and who made the video summary, but generally I think one can pick up some major themes of a book in these kinds of videos.

Yeah. Videos like that can be useful, but they're not a replacement. I watched a few more 1984 videos after those 2 last night, and if you *are* going to watch summary videos like that I think it's well worth watching 3 or 4 or more. Sort of like a second opinion on medical stuff. You don't want to get stuck b/c of one person's bad ideas.

> My goal was to learn a little bit more *about* the books, but I didn’t want to read them. They seem very depressing to me. I do not like to engage with depressing stuff.

Yeah, that seems a fair goal.

Depressing? IDK. In some ways they're the opposite b/c those things *aren't* happening, but I think I see what you mean. They're about people being broken and suffering and having things they love or value being taken away, etc.

> I do not like to engage with depressing stuff.

If that's because you don't like the impact that stuff has on you, I think that can be solved with mindset.


Max at 3:53 PM on November 25, 2020 | #18856 | reply | quote

#18853

> partly like they see a huge gap between themselves and *high social status persons*.

Yeah, this makes sense.

> In various ways I think people confuse heroism, or virtue or merit in general, with social status, and expect the two to match.

Now, I think that ppl say Rand's heroes are unrealistic b/c in her novels heroism and social status don't match (often the opposite). It's almost like ppl judge e.g. Roark as unrealistic b/c they think that a real-life version would have high social status. (They probably also think that real-life Roark would compromise to achieve things of magnitude -- and they don't know how to tell the difference between that and Great achievements.)

> People mostly try to make their own conclusions about social status fit whatever they think that a lot of other people already concluded.

!!!!

Quotes from *The Fountainhead*

part 2 ch 10:

> The battle lasted for weeks. Everybody had his say, except Roark. Lansing told him: “It's all right. Lay off. Don't do anything. Let me do the talking. There's nothing you can do. When facing society, the man most concerned, the man who is to do the most and contribute the most, has the least say. It's taken for granted that he has no voice and the reasons he could offer are rejected in advance as prejudiced--since no speech is ever considered, but only the speaker. It's so much easier to pass judgment on a man than on an idea. Though how in hell one passes judgment on a man without considering the content of his brain is more than I'll ever understand. However, that's how it's done. You see, reasons require scales to weigh them. And scales are not made of cotton. And cotton is what the human spirit is made of--you know, the stuff that keeps no shape and offers no resistance and can be twisted forward and backward and into a pretzel. You could tell them why they should hire you so very much better than I could. But they won't listen to you and they'll listen to me. Because I'm the middleman. The shortest distance between two points is not a straight line--it's a middleman. And the more middlemen, the shorter. Such is the psychology of a pretzel.”

and part 4 ch 1:

> Kent Lansing said, one evening: “Heller did a grand job. Do you remember, Howard, what I told you once about the psychology of a pretzel? Don't despise the middleman. He's necessary. Someone had to tell them. It takes two to make a very great career: the man who is great, and the man--almost rarer--who is great enough to see greatness and say so.”


Max at 6:34 PM on November 25, 2020 | #18860 | reply | quote

> > People mostly try to make their own conclusions about social status fit whatever they think that a lot of other people already concluded.

> !!!!

FH with my emphasis:

> “Peter, you’ve heard all this. You’ve seen me practicing it for ten years. You see it being practiced all over the world. Why are you disgusted? You have no right to sit there and stare at me with the virtuous superiority of being shocked. You’re in on it. You’ve taken your share and you’ve got to go along. You’re afraid to see where it’s leading. I’m not. I’ll tell you. The world of the future. The world I want. A world of obedience and of unity. *A world where the thought of each man will not be his own, but an attempt to guess the thought in the brain of his neighbor who’ll have no thought of his own but an attempt to guess the thought of the next neighbor who’ll have no thought—and so on, Peter, around the globe.* Since all must agree with all. A world where no man will hold a desire for himself, but will direct all his efforts to satisfy the desires of his neighbor who’ll have no desires except to satisfy the desires of the next neighbor who’ll have no desires—around the globe, Peter. Since all must serve all. A world in which man will not work for so innocent an incentive as money, but for that headless monster—prestige. The approval of his fellows—their good opinion—the opinion of men who’ll be allowed to hold no opinion. An octopus, all tentacles and no brain. Judgment, Peter? Not judgment, but public polls. An average drawn upon zeroes—since no individuality will be permitted. A world with its motor cut off and a single heart, pumped by hand. My hand—and the hands of a few, a very few other men like me. Those who know what makes you tick—you great, wonderful average, you who have not risen in fury when we called you the average, the little, the common, you who’ve liked and accepted those names. You’ll sit enthroned and enshrined, you, the little people, the absolute ruler to make all past rulers squirm with envy, the absolute, the unlimited, God and Prophet and King combined. Vox populi. The average, the common, the general. 
> Do you know the proper antonym for Ego? Bromide, Peter. The rule of the bromide. But even the trite has to be originated by someone at some time. We’ll do the originating. Vox dei. We’ll enjoy unlimited submission—from men who’ve learned nothing except to submit. We’ll call it ‘to serve.’ We’ll give out medals for service. You’ll fall over one another in a scramble to see who can submit better and more. There will be no other distinction to seek. No other form of personal achievement. Can you see Howard Roark in the picture? No? Then don’t waste time on foolish questions. Everything that can’t be ruled, must go. And if freaks persist in being born occasionally, they will not survive beyond their twelfth year. When their brain begins to function, it will feel the pressure and it will explode. The pressure gauged to a vacuum. Do you know the fate of deep-sea creatures brought out to sunlight? So much for future Roarks. The rest of you will smile and obey. Have you noticed that the imbecile always smiles? Man’s first frown is the first touch of God on his forehead. The touch of thought. But we’ll have neither God nor thought. Only voting by smiles. Automatic levers—all saying yes ... Now if you were a little more intelligent—like your ex-wife, for instance—you’d ask: What of us, the rulers? What of me, Ellsworth Monkton Toohey? And I’d say, Yes, you’re right. I’ll achieve no more than you will. I’ll have no purpose save to keep you contented. To lie, to flatter you, to praise you, to inflate your vanity. To make speeches about the people and the common good. Peter, my poor old friend, I’m the most selfless man you’ve ever known. I have less independence than you, whom I just forced to sell your soul. You’ve used people at least for the sake of what you could get from them for yourself, I want nothing for myself. I use people for the sake of what I can do to them. It’s my only function and satisfaction. I have no private purpose. I want power. 
> I want my world of the future. Let all live for all. Let all sacrifice and none profit. Let all suffer and none enjoy. Let progress stop. Let all stagnate. There’s equality in stagnation. All subjugated to the will of all. Universal slavery—without even the dignity of a master. Slavery to slavery. A great circle—and a total equality. The world of the future.”


curi at 11:17 PM on November 25, 2020 | #18862 | reply | quote

> #18852

> Yeah. Videos like that can be useful, but they're not a replacement. I watched a few more 1984 videos after those 2 last night, and if you *are* going to watch summary videos like that I think it's well worth watching 3 or 4 or more. Sort of like a second opinion on medical stuff. You don't want to get stuck b/c of one person's bad ideas.

Good point re watching multiple videos.

> Depressing? IDK. In some ways they're the opposite b/c those things *aren't* happening ...

IDK. To me that sounds like lowering the standard in some way: a "~things could be worse" rather than a "~things could and should be better" kind of way.

> but I think I see what you mean. They're about people being broken and suffering and having things they love or value being taken away, etc.

You are correct. It's this part that I find depressing. I think it influences me negatively (makes me less happy) when I read / watch it.

> If that's because you don't like the impact that stuff has on you, I think that can be solved with mindset.

That's a fair point. I don't think it's a big enough issue for me that I'd need to put dedicated work into changing it. I can read / watch some stuff of this sort ("dystopian realism", see below) without it having a big, long-lasting negative effect on me. Like I wouldn't enjoy it, but I wouldn't be depressed for long after I quit reading / watching it.

I haven't dedicated much thought to this, but I think it's possible to split dystopian stories into at least two broader genres (or maybe "styles" is a better word for it?): "dystopian realism" and "dystopian romanticism".

The former would be something like "capturing the moment of everyday life in a dystopian setting" (mainly misery of some sort), and the latter would be more like a "success story in a dystopian setting". I don't think I find the latter depressing.

I do not think that a "success story" necessarily needs a happy ending, but it needs some kind of greatness. I'm not sure. I remember liking Cyrano de Bergerac, for example, despite it not being a happy-ending kind of story.


deroj at 7:46 AM on November 26, 2020 | #18863 | reply | quote
