Elliot Temple on September 13, 2020

Messages (74)

Overreaching, greatness, and ~meta-knowledge

Consider people who are *great* (like exceptional) at something in particular.

One of the things that makes them great is ~*meta-knowledge*, like knowledge about context regarding their *actions*.

I watched a bit of a recent Sea of Thieves WR speedrun - particularly the events during 7:25:00 -> 9:00:00 (it's like a 21hr run).

They lost like 1:20:00 from a choice to steal another crew's loot b/c that crew chased them for a decent while.

A third ship joined in a bit, too.

Near the end of this chase (8:49:00) they spot another sloop (ship of 2 crew) and one guy jokes about taking this new ship's loot.

The two speedrunners have been talking about what to do at this point, and particularly risk/reward tradeoffs for how to sell the loot.

The two guys are good enough to - ordinarily - take on another sloop no problem.

after all, they just fought off 2 other crews of sizes 4 and 3.

their choice not to go after the sloop (and the humor of the joke) is based in this like ~*meta-knowledge* type stuff.

it doesn't matter how great you are at something, even the best ppl in the world know there are some challenges they won't win (or it's too much a risk), and they choose to back off. they're not OP just because they're the best in the world.

Generalising this means something like: the ~meta-knowledge is *at least* as important as the knowledge about how to do the skill well (which is more like technical knowledge). Or, at least it's that important at high levels.

Basically, this is like "don't overreach", or rather, if you do overreach, don't expect to *still* be great. the ability to pick challenges is part of the reason great people are great. sorta like flying *close*, but not *too close*, to the sun.

It also relates to knowing your limits, either when something is too big a task or when (and what) to learn before doing it.

This offers a bit more clarity for an ongoing conflict of mine - something to do with learning styles and methods. I intuitively think that 'exploratory' style learning (with a high(er) error rate) has benefits. and I mean it's not as bad as doing nothing at all (I guess it could be sometimes), but it's not as efficient as directed and non-overreaching learning.

I think part of the reason I have this conflict is in essence thinking too much of my own skills. That's true even tho I went through a few ~breakpoints early on in the *Tutoring Max* series. (Breakpoints might not be the right word, but I think there are like significant points of increased ~reach when we adopt new and better ideas about ourselves.)


Max at 1:30 AM on September 15, 2020 | #18021

Mini postmortem on post formatting

curi.us doesn't format exactly like markdown does. with markdown a newline between consecutive sentences doesn't make a new paragraph, but it does here.

I wrote the above in vscode (posted also on my site in a new category), so wrote it like normal markdown - using linebreaks to make sentences clearer and easier to read/write while editing, etc. (That's how the paragraphs are meant to look.)

The solution is I'll need to check for that beforehand. I could write a short script to strip linebreaks between consecutive sentences, but not sure that's worth it.
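
A minimal sketch of such a script (Haskell, untested; `joinlines.hs` is a made-up name, and it assumes blank lines are the only paragraph separators):

```haskell
-- joinlines.hs: collapse single linebreaks inside paragraphs into spaces,
-- keeping blank lines as paragraph separators.
import Data.List (intercalate)

main :: IO ()
main = interact (intercalate "\n\n" . map unwords . paragraphs . lines)

-- split the list of lines into paragraphs at blank lines
paragraphs :: [String] -> [[String]]
paragraphs ls = case break null ls of
  (p, [])       -> [p]
  (p, _ : rest) -> p : paragraphs (dropWhile null rest)
```

It'd run as a filter before posting, e.g. `runghc joinlines.hs < draft.md`.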


Max at 1:34 AM on September 15, 2020 | #18022

Inefficient learning is like eating the seed corn

> an ongoing conflict of mine - something to do with learning styles and methods.

Sometimes I prioritize the wrong thing. I'll spend time on fun (and maybe even slightly useful) 'intellectual' activities like coding, instead of doing more structured, efficient, and goal directed learning. That's like eating the seed corn.

It's like: I end up fed, and I still have some seed corn left over, but the harvest isn't going to be as good. What's the point of learning and thinking if not for the harvest?

indirectly related: #18025 and https://curi.us/2378-eliezer-yudkowsky-is-a-fraud


Max at 2:22 PM on September 15, 2020 | #18032

#18032 Metaphorically seed corn = capital = e.g. machines. it's stuff you can save for later. time is somewhat different in that you can't save it for later. but, like money, you can spend time on things with later benefits (investment rather than consumption).


curi at 2:29 PM on September 15, 2020 | #18034

#18034

> time is somewhat different in that you can't save it for later. but, like money, you can spend time on things with later benefits (investment rather than consumption).

I'm not sure if we disagree on something or not. I think we roughly agree but I'm thinking of time spent in a specific way (just a subset of the time we get). For context, I read curi.us/2378 a few minutes before having that idea. I liked these bits particularly (and liked being reminded of them):

> capital goods, not consumption goods

> accumulation of capital increasing the productivity of labor

I think time can be sometimes seen like money and other capital goods. How do people save money? One option is a bank account, but that performs poorly, and is sort of like investing in loans/debt, anyway. Better investors save money by spending it *on capital goods* (and they choose the goods). After they spend money, they don't have it any more, but they have something else they can exchange for money later.

I think *time spent on learning* is similar, but not all time spent is similar. Granted, there's no bank account for time, but you can spend time now so you get more of it later -- that's one of the reasons to learn and think, you - in essence - get more time in the future because you avoid making mistakes or being slower than you could be. In that sense it's like investing in productive capacity. There's a higher upfront cost, but you get a higher capacity and larger RoI than the alternatives. The choice to spend time learning ineffectively seems to me like spending some chunk of your factory budget on hookers and cocaine; fun at the time, but it's in opposition to the main goal.

Similarly, by analogy, learning skills that don't end up helping you, but learning them effectively, is like market risk. Not every investment makes a profit, but diversification helps, and the better you are the less you waste.

Time spent on things like downtime is different from normal money; that's more like $100 of food stamps that you get once a week and have to spend that same week. You might only be able to spend it at low-quality grocers, but not spending it only hurts you.

A related mistake is trying to spend downtime doing pseudo-learning stuff. That's more like trying to invest your $100 food stamp (not going to go well). I find trying to do ~learning stuff when I'm tired etc. often means I stay up later, sleep worse, and have less high-capacity time for important things.


Max at 2:53 PM on September 15, 2020 | #18036

Eating seed corn is like disassembling machinery for scrap metal, which is different (more destructive) than leaving it idle for a day (which sounds reasonably similar to spending a day of your time unproductively).


curi at 2:56 PM on September 15, 2020 | #18037

#18037 yeah okay, I see what you mean. I've changed my mind on the quality of my analogy. (I don't think it's super bad or anything, just not as good as I originally thought.)


Max at 3:23 PM on September 15, 2020 | #18038

Perimortem on intuitive response to #18037

My intuitive response (which would be put a bit defensively) is something like: disassembling machinery is like eating *all* the seed corn, and leaving it idle is like skimming a bit of corn off the top. Things keep working; there's still productivity and returns, but less than otherwise.

(note: I think this is valid, and it's why I don't think my analogy was all bad)

I think that intuitive response is wrong though. It's subtly moving the goal posts (similar to e.g. a "strategic" clarification), and would be expressing an idea like: "we're both right, we should blame miscommunication". That'd be dishonest though, because:

a) I didn't see some limits of the analogy that I do now - this contradicts the idea of miscommunication being a primary issue (it's not important if curi and I understood each other fully in every way; we understood each other sufficiently), and

b) the reasonable next steps from a miscommunication would be to figure out how to avoid it. Some miscommunications are due to like ~inferential distance but that doesn't make sense here. The easiest solution (if it really was miscommunication) would have been for me to be clearer originally. If I advocated that (and claimed I could have done it) I'd be pretending like there wasn't ever an issue; at the very least my lack of clarity would be an issue. Maybe I couldn't have been clearer for lack of knowledge, in which case it'd still be dishonest--and evasive--to claim a miscommunication b/c that wasn't the problem.

I don't know any way that my intuitive response would have been good, which is the reason I wrote this perimortem.

I'm not sure if putting the response in this perimortem is like a roundabout (and/or cowardly) way of trying to say the idea anyway. However, I think writing the perimortem is a better alternative than making the titular reply, so I'm satisfied for now.


Max at 3:23 PM on September 15, 2020 | #18039

> I intuitively think that 'exploratory' style learning (with a high(er) error rate) has benefits.

Whether something is an error depends on your goal. If your goal is to get it correct, exploring works badly. If your goal is e.g. to get a rough overview, exploring works well.


curi at 9:42 PM on September 15, 2020 | #18044

Max's postmortem on #18030 #18043 #18050

IR wrote (addressing curi):

> i feel very much like i have gotten some of these ideas from you, but i dont know which things youve wrote that i got these ideas from. and i dont know how much ive changed them.

I asked IR:

> Otherwise, does it matter how much you've changed your mind?

which didn't make much sense. Context: #18030 #18043 #18050

I think 2 main things happened:

1. I wasn't careful when reading IR's comment, so missed important details / relations. (i.e. he was talking about changes to curi's ideas in his head, not changes to his own pre-existing ideas in his head)

2. I've been thinking recently about how my own ideas have changed over the last ~3 months.

(1) allowed me to ~*skip between trains of thought* without noticing. I ended up thinking about IR's comment in terms of (2). My question to IR makes more sense in this light.

Beyond the issue of miscommunication in general, there's a bigger problem I should care about and deal with. That is: responding earnestly to someone (usually) takes longer than reading what came immediately before. If I spend time responding to what I *thought* they wrote (but I'm wrong about that) then it's, in essence, wasted time. Maybe there are some benefits, but they're less than they would be otherwise.

To avoid this sort of thing the obvious answer is reading stuff better. That doesn't feel super actionable tho b/c just concentrating more on ~*everything* I read is not v efficient, esp if this sort of issue isn't super common.

I could try re-interpreting what the person says, like re-writing out what I thought they meant before replying, but how would I know if that were right/wrong? It might make it clearer to me if I was *unclear* about what they thought. It doesn't help if I think I know what they meant and that idea is clear and consistent in my mind (as it was in this case).

This issue was - I think - that the reference "these ideas" is somewhat ambiguous (or maybe just tricky). I think IR's full sentence (expanding "them") is something like:

> and i dont know how much ive changed [my version of ideas I got from your ideas relative to the original ideas you wrote]

So, this might be a better sketch of what to do:

- recognise tricky references (ideally automatically)

- when tricky references occur, expand them out (there could be more than one possibility)

- criticise the possibilities so I get just one

- if I can't and it's ambiguous still, ask a clarifying question (listing the possibilities too)

- optionally respond to each possibility if short enough or easy enough

- if I get one and it's reasonable I can just respond

- if I get one and I'm not sure it's reasonable, ask a clarifying question and respond at the same time

the next step in this action-plan-sketch is "recognise tricky references (ideally automatically)". **The first part of that is introducing a breakpoint (in the coding sense) on tricky references.** I can do this a bit by paying more attention to references in general, trying to quickly figure out what they mean (and eval-ing if I know what they mean), and taking action if I don't. If I'm not 10/10 confident on the reference I should stop and investigate.

Okay, this feels like a decent PM and plan. Feedback welcome/appreciated. It was a bit trickier than normal to figure out what to do because a plan like 'learn2read' didn't feel good enough.


Max at 7:49 PM on September 16, 2020 | #18051

>> I intuitively think that 'exploratory' style learning (with a high(er) error rate) has benefits.

> Whether something is an error depends on your goal. If your goal is to get it correct, exploring works badly. If your goal is e.g. to get a rough overview, exploring works well.

I hadn't considered this. It makes sense. That said, I don't think it's what I had in mind.

The italicised bits of this example are a bit of an outline.

An example is the route-finding-app I made for my SSOL speedrun: *I spent way too long* trying to get the PNG of the map as a background image behind the lines and points that get drawn. *Eventually I managed it* (after lots of different attempts and integrating bits of code I found online). *The main difficulty* was that the original author of the (simple) travelling salesman program used Haskell's GLUT library which is basically a *lowish-level* OpenGL lib (and *I'm not familiar* with low level opengl stuff). There are higher-level ones that make this stuff easy. *I only really cared about the outcome but it took way longer than I wanted it to.*

I didn't read a manual or in-depth tutorial, instead tried to fumble my way through. That is sometimes faster. But you can't answer stuff like 'how long is left till I finish?' and other basic questions.

In some ways my process involved exploring as you describe. I toyed with the idea of switching to a higher level library, looked for higher level stuff that exposed/integrated with the lower level stuff (no luck), and read bits from the middles of some advanced/in-depth tutorials.

But, crucially, the exploring was a side-effect of a particular problem with the other bits. I'd say my choice of method when trying to get the PNG to draw on-screen was exploratory learning, so it's different to exploration as you describe (though somewhat related).

Eventually I found some code someone had written that was close enough to what I needed to make it work. There was a weird interaction with other code I'd written tho (involving drawing text) that meant the first line of text was the right size but all the other lines didn't appear on screen. I managed to fix that but it took another like 30 min of experimentation.

A better method - in hindsight - would have been to just do a tutorial for Gloss (an alternative opengl-based library, but much higher level) and recode what I'd already written, and the opengl bits that came with the app originally. I could have gone through enough of a tutorial on Gloss given the amount of time I spent (like 5hrs+).
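
For a sense of scale, here's roughly what 'PNG behind lines and points' looks like in Gloss (a rough sketch, untested, and not the code I actually wrote; it assumes the gloss-juicy package for PNG loading, and `map.png` is a stand-in filename):

```haskell
import Graphics.Gloss
import Graphics.Gloss.Juicy (loadJuicyPNG) -- from the gloss-juicy package

main :: IO ()
main = do
  maybeBg <- loadJuicyPNG "map.png" -- stand-in for the real map image
  case maybeBg of
    Nothing -> putStrLn "couldn't load map.png"
    Just bg ->
      -- background first, then the route drawn over it
      display (InWindow "route" (800, 600) (10, 10)) white
              (pictures [bg, color red (line [(0, 0), (100, 100)])])
```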

I did learn other stuff during that time, but I didn't feel like the time was particularly well spent. I don't expect to use OpenGL + Haskell much in future, so it's not like this is particularly useful outside this one thing I wanted to do.

In some ways I do this stuff for the challenge, like thinking "I should be able to do this, so I will", but I don't think "I should be able to do this, eventually, but should I bother, or should I look for a different way to do the same outcome?"


Max at 8:10 PM on September 16, 2020 | #18052

TCS and passions

I was thinking about a TCS issue yesterday. I have half a soln. It's about a child's passions.

There's a possibly coercive idea I have that I think is the *common-er* version of the problem (maybe), then there's a more general version.

the possibly coercive version is like:

> I want my child to have a passion for maths (coercive), or

> I value passion about maths in general, and I want my child to be able to develop that if they want -- I don't want to *hinder* them (coercive?)

The second formulation feels like it could be done okay--without coercion--but I don't know enough to tell for sure.

I was thinking about this in the context of **a parent who's bad at maths**.

This made me think of a possible common issue *most* ppl would run into if they tried TCS: *their skills/passions are inadequate (not broad enough and general enough) to avoid hindering the child.*

I think not being perfect is okay, but if we can avoid significant hindrance that's good.

One situation: if the child develops a passion for X but the parent isn't good at / passionate about it, they can still buy equipment/supplies, hire a tutor, or find a friend who's passionate, etc. This is the 1/2 solution I mentioned.

But more broadly, how do you facilitate the *development* of a passion before it's manifested?

One thing I was thinking about is when ppl have been passionate about something and sparked something in me. A good example is Haskell and type-safe programming; a guy at a technical meetup sold me on Haskell over a beer. It took me *years* before I actually used it in production, but I was sold in 20 min.

So exposing a child to a wide range of *passionate* people--who are probs the higher-value ppl to expose children to, anyway--is maybe one way, though that could be done corrosively. If you happen to be friends with passionate ppl and they visit and talk to your child, that feels different than like *engineering situations* to trick your child or something.

I haven't looked through the archive to see what other ppl have said on the topic, yet.


Max at 12:57 AM on September 19, 2020 | #18066

correction s/corrosively/coercively

s/corrosively/coercively


Max at 12:59 AM on September 19, 2020 | #18067

Quick thought on a secondary goal of life.

I think a good secondary goal for one's life--or maybe another primary goal as yesno supports--is to live without control. By that I mean: live so that you are both happy with your decisions consistently, and also make those decisions without willpower or self-control. All choices would still be somewhat like applying self-control and some moral code, except that you have no animosity toward those choices; they are choices you'd always make anyway. It's sort of like having no friction.

Ofc there will always be conflicts and problems to solve, but this state is like the closest you can get to that *and sustain*.


Max at 9:00 PM on September 19, 2020 | #18080

Debate Topic (via Tutoring Max 44) -- Genes and direct influence over mind

> Genes (or other biology) don’t have any direct influence over our intelligence or personality.

I'm not sure about this. I don't think humans being universal understanders/explainers means genes *don't* have a direct influence over our mind/personality (esp. starting conditions). It seems reasonable that physical effects on the brain can have an effect on our mind/thinking (e.g. brain tumors, head trauma), and genes affect things in ways we don't fully understand, so there's room for them to have a direct effect.


Max at 7:14 PM on September 27, 2020 | #18153

#18153 What sort of effect or influence do you have in mind, via what causal mechanisms?

For example, genes could make it so we're better at integer math than floating point math. I don't think this would cause someone to be more inclined to solipsism than an alien that excels at floating point math. And there could be variance among humans, but I don't think that would cause some people to be atheists.


curi at 7:15 PM on September 27, 2020 | #18154

#18154

> What sort of effect or influence do you have in mind, via what causal mechanisms?

I'm not sure about this possibility, but it's something I've heard and it seems to be a somewhat common idea:

- temperament: Say someone has a gene that means they produce lots of some hormone. That hormone makes them angry more often / more easily.

Does this sort of thing count as a direct influence over our personality? I can see a person like this 'learning to control' themselves or something, but I'm not sure exactly what you mean by directly influencing personality.

More broadly, I see room for unknown causal mechanisms, esp. relating to things that make sense to have evolutionary roles, like social stuff. I could see some genes play a role in how readily someone accepts static memes based around certain social signals (e.g. in group/out group stuff).

> For example, genes could ... but I don't think that would cause some people to be atheists.

I agree that there are ways genes could affect our brains at a lower level (like an instruction set affects CPU performance) and that this sort of effect isn't substantial.


Max at 7:28 PM on September 27, 2020 | #18155

> - temperament: Say someone has a gene that means they produce lots of some hormone. That hormone makes them angry more often / more easily.

Hormones are low level. Behaviors and emotions are high level. It's kinda like suggesting that heating a room with a CPU in it might result in video game bosses attacking more aggressively. Low level changes do not cause high level changes that have the appearance of complex design unless there's a specific causal mechanism set up to enable this (e.g. sleep or volume button on a computer).

> Does this sort of thing count as a direct influence over our personality?

You could get annoyed more when hot or cold. Does that mean heat and cold influence personality? I think how one responds to heat, cold or hormones is part of what one's personality is. But they aren't controlling your reactions. The reactions are your choice based on your ideas.


curi at 7:31 PM on September 27, 2020 | #18156

> Low level changes do not cause high level changes that have the appearance of complex design unless there's a specific causal mechanism set up to enable this

*A*: Don't we have a (rudimentary) explanation for hormones affecting thoughts, though? I know--personally--I think different things when in different moods (at least I think that's the case).

> I think how one responds to heat, cold or hormones is part of what one's personality is. But they aren't controlling your reactions. The reactions are your choice based on your ideas.

*B*: It feels like you're implying reactions are core to understanding personality, like the only way we can inspect personality is via its effect on our reactions.

I googled 'personality' and found a sensible-feeling definition about patterns of thoughts, feelings, and behaviours. Those are all based on ideas, so by that definition personality is just a collection of ideas.

----

I'm not sure if part A and B contradict each other. I'm not super happy with this reply but I think the result might be going back to another part of the conversation.

PS, I labeled the paragraphs to refer to them, hopefully that made sense when reading.


Max at 7:49 PM on September 27, 2020 | #18157

> Don't we have a (rudimentary) explanation for hormones affecting thoughts, though? I know--personally--I think different things when in different moods (at least I think that's the case).

Are you linking hormones to moods? You bring up something about hormones affecting thoughts but then the next sentence doesn't mention hormones.

> It feels like you're implying reactions are core to understanding personality, like the only way we can inspect personality is via its effect on our reactions.

I don't think that and I don't see how my text implied it.

> so by that definition personality is just a collection of ideas.

I agree with that.


curi at 7:51 PM on September 27, 2020 | #18158

#18158

> Are you linking hormones to moods? You bring up something about hormones affecting thoughts but then the next sentence doesn't mention hormones.

Yes. I think most ppl presume a super tight relationship between them. That doesn't seem right--thinking about it now.

*Some* effect might be there, but that's like a transition between levels of emergence, and probably means I don't have a point here.

Going to drop this angle for the moment.

> > It feels like you're implying reactions are core to understanding personality, like the only way we can inspect personality is via its effect on our reactions.

> I don't think that and I don't see how my text implied it.

Given you agreed with "personality is just a collection of ideas" I'm not sure this is important to discuss unless you think so. I can explain why I thought the implication was there if you want.

**concluding comment**: I think I agree with you that hormones don't influence personality/thoughts in a substantial way (I think you agree with that at least).

I think at this point it's up to me to come up with some other causal mechanism? Otherwise, the only other node on my conversation tree I have to look into atm is mine about unknown causal mechanisms.


Max at 8:27 PM on September 27, 2020 | #18160

> Going to drop this angle for the moment.

Do you think you made an error? If so how'd that happen?

> I can explain why I thought the implication was there if you want.

Yes I'm curious.

> I think at this point it's up to me to come up with some other causal mechanism?

That's an option. Another is I could play devil's advocate and take the other side of the matter. Another is you could ask questions or think about stuff like how reacting to a hormone differs from reacting to an event like a sick parent, winning a competition, getting a high or low grade, etc. Our emotions and moods are causally connected to all sorts of things but the basic point is the connection is governed by our ideas: we can decide how to react to a particular event and if we had different ideas we'd react differently. The hormone/genes/etc ppl are claiming roughly that something different/special is going on in their case. Having a clearer idea of what the claim is helps with evaluating it.


curi at 8:31 PM on September 27, 2020 | #18161

> Do you think you made an error? If so how'd that happen?

Yes, will do a postmortem in a different post.

> Yes I'm curious.

Cool, will also put this in a diff post because it feels off-topic.

> That's an option. Another is ...

I want to take a bit to think about where to go from here. I didn't really consider how many possibilities there were. Some of those options I might be able to follow myself (like a thought experiment) to see where they lead.


Max at 9:05 PM on September 27, 2020 | #18164

BTW, @curi, I think it was good we didn't do the Bitcoin option today. This feels (and felt) valuable even though I don't think of myself as anything close to an expert.


Max at 9:06 PM on September 27, 2020 | #18165

Thought on why FI is special

I've been thinking about why FI is special/different. It's related to the general topic of FI and new ppl, their reactions, etc.

curi said in Discord:

>> [12:14 AM] Laozi Haym: it isn't anything new i need to watch what I say, I just ...was watching what I was saying 2 days ago on here

>

> if you're mistaken about something it's better to say it and get criticism rather than hide the error. so i generally don't like people watching what they say. and feeling pressured about it sucks too.

>

> sometimes people try to say only their highest quality ideas but they don't go through life using only those. most of the time they're not at their best. what you do when you're tired or distracted is part of your life, and should be exposed to criticism too.

I think part of FI being different is to do with the culture related to things like "if you're mistaken about something it's better to say it and get criticism", and "what you do when you're tired or distracted is part of your life, and should be exposed to criticism too."

When people come to FI they don't expect other parts of their life (maybe implied by things they said) to be questioned. It doesn't adhere to normal social norms. That's--in part--b/c those ppl and normal social norms don't value stuff like: error correction, every person and discussion being a potential beginning of infinity, the capacity for ppl to make progress (esp rapidly), etc. There is some lip-service paid to these ideas, and they're taken somewhat seriously in dire situations, but they're not like culturally ubiquitous, common, or expected.

That lip-service is part of the reason pointing out those things individually doesn't work to differentiate FI; everyone says it, and everyone says they're honest. But the culture is different; what's tolerated, what's expected, what's prioritized, what things are seen as important.

Even that paragraph doesn't work outside this sort of context. I don't expect it would convince anyone who didn't already understand it (at least: understand it enough to know what I was trying to get at and whether I had mistakes, etc).


Max at 12:42 AM on September 28, 2020 | #18168

#18161

>>> I think how one responds to heat, cold or hormones is part of what one's personality is. But they aren't controlling your reactions. The reactions are your choice based on your ideas.

>> It feels like you're implying reactions are core to understanding personality, like the only way we can inspect personality is via its effect on our reactions.

>> I can explain why I thought the implication was there if you want.

> Yes I'm curious.

I think this was what I was thinking:

- response to stuff like heat is a part of one's personality

- the stimulus doesn't control your reactions

- reactions are choices based on one's ideas

- so there's a chain like: stimulus -> physiological signals -> interpretation (ideas) -> meaning (ideas) -> choice of behaviour (ideas) -> response/reactions

- personality is included in this chain only via the links that have `(ideas)`

- we can't see the ideas, but we can see the response/reactions (the outcome)

- (premise) to understand someone's personality we need things we can study / think about

- the reactions and stimulus are the only parts of that we can easily agree on without like inference/explanation

- stimulus doesn't tell us about personality

- reactions and response do, though

- so reactions are key to understanding personality


Max at 6:33 PM on October 1, 2020 | #18185

Postmortem on hormones-mood link

>>>> Don't we have a (rudimentary) explanation for hormones affecting thoughts, though? I know--personally--I think different things when in different moods (at least I think that's the case).

>>> Are you linking hormones to moods? You bring up something about hormones affecting thoughts but then the next sentence doesn't mention hormones.

>> Yes. I think most ppl presume a super tight relationship between them. That doesn't seem right--thinking about it now.

>> *Some* effect might be there, but that's like a transition between levels of emergence, and probably means I don't have a point here.

>> Going to drop this angle for the moment.

> Do you think you made an error? If so how'd that happen?

### postmortem

I implied mood and hormones were linked. I didn't explicitly mention it.

When curi pointed out I linked them I realised that I was presuming an intimate relationship and that I didn't have a good explanation for it.

There's a ~common idea that they're intimately linked. I think, in general, it's a good way for ppl to avoid taking responsibility for their reactions to stuff. e.g. women are more irritable on their period and so they shouldn't be held to as high standards / ppl should be more forgiving of them getting upset / etc. This is roughly called a 'mood cycle', which is explicitly linked to hormonal cycles of the same length (I've heard 28 days for women and 33 days for men).

When curi pointed out my linking hormones and moods I thought about the common idea and questioned it. I didn't question it when I first used it though. Why didn't I question it?

Intuition: In general when we're thinking about something particular there are ideas that are 'in the front' of our mind and other ideas 'in the back' of our mind. We are actively engaging with the 'front' ideas, but not the 'back' ideas. (Maybe the 'back' ideas could be called background knowledge but that term feels like it describes a slightly different thing.) To question an idea it needs to come to the 'front'. It's sort of like a module of code: we interact with the API but we don't interact with the internal logic. When ideas are at the 'front' we're looking at the internal logic and API, but at the 'back' we're only looking at the API. We use shortcuts to know how ideas at the 'back' interact with stuff.
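
In code, that intuition looks something like this (a toy Haskell sketch; the module and all the names are made up):

```haskell
-- Toy module: the rest of the program sees only the exported API,
-- never the internal logic -- like a 'back of mind' idea.
module MoodLink (moodOf) where

-- internal logic: hidden from callers, so it gets used without ever
-- being questioned
hormoneLevel :: String -> Int
hormoneLevel _ = 42

-- the API: the only part callers interact with
moodOf :: String -> String
moodOf person
  | hormoneLevel person > 40 = "irritable"
  | otherwise                = "calm"
```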

So by that intuition: I had the hormones->mood link in the back of my mind and didn't think about the internal logic until curi brought it to the 'front' by pointing it out.

I'm a bit worried that this is just a long winded way of saying something like 'lazy thinking', but it feels like there's probably more to it, so I'm okay with it for the moment.

One of the ways I could avoid this is by categorizing old 'background' ideas (like the hormones-mood link) as stuff I need to reconsider before using. In some ways it doesn't matter much if I get ~lots better at thinking WRT 'front' ideas, but keep using bad ideas as foundations without questioning them. So I need to make a habit about questioning ideas I use as a foundation if I haven't considered them since improving my thinking. There are practical limits on this, like lots of my preexisting ideas are fine (or at least fit-for-purpose at the time) and reconsidering them consistently would be significant overhead. If I'm using ideas as part of my reasoning, though, that's a good reason to reconsider them, at least briefly.


Max at 6:55 PM on October 1, 2020 | #18186

#18185 Be careful with complex interpretations of other people. Often you should check if they agree instead of assuming you got it right. And I don't think I said stuff that corresponds to your "core" or "only way".


curi at 7:48 PM on October 1, 2020 | #18187

#18187

> Often you should check if they agree instead of assuming you got it right.

I think I was trying to do that with:

>> It feels like you're implying reactions are core to understanding personality, like the only way we can inspect personality is via its effect on our reactions.

If that wasn't clear, is there a good way to do it better? I could explicitly say "to check I have this right, are you implying ... ?". That feels cumbersome though.


Max at 7:58 PM on October 1, 2020 | #18188

#18186 ok so how would you revise your original claim:

> Don't we have a (rudimentary) explanation for hormones affecting thoughts, though? I know--personally--I think different things when in different moods (at least I think that's the case).

(you may want to grab more text/context to also revise)


curi at 7:59 PM on October 1, 2020 | #18189

#18188 "feels like" is kinda vague but generally (when there aren't clear emotions involved) reads similar to "i think". i don't read it as a question or requesting confirmation.

A question version at around the same length is:

> Are you saying reactions are core to understanding personality, like the only way we can inspect personality is via its effect on our reactions?


curi at 8:01 PM on October 1, 2020 | #18190

#18161

>> I think at this point it's up to me to come up with some other causal mechanism?

> That's an option.

I have a few ideas for causal mechanisms:

* genes encode some ideas which are 'given' to us early in life

  * so there could be flow on effects

    * this isn't really a _direct_ influence on thoughts, though.

* or maybe: ideas have different classes of components, e.g. ideas about 'relationships between people' are one of those possible components. if there are optimisations the brain has that directly relate to some phenotype (like the volume of that brain-part) then the weighting between generation of idea-components could differ, thus ppl with certain genes are more likely to think of certain stuff.

  * note, after I wrote "ideas have different classes of components" I strongly questioned why I wrote that; I don't think I have a good reason. I think that is reflected in the following 2 points:

  * but we don't know anything about these idea-component things

  * so this 'causal mechanism' is just kicking the can down the road by introducing another unknown causal relationship as part of this explanation

So I don't think I have any good ideas for causal mechanisms.

I don't think I could convince myself that genes have a direct influence over our thoughts. But I can't convince myself they *don't*, either. I can convince myself that I shouldn't believe they do.

I'm open to other ways to move the conversation forward if you have ideas.


Max at 8:03 PM on October 1, 2020 | #18191

Formatting:

The dotpoints above are in this hierarchy:

- 1

- 1.1

- 1.1.1

- 2

- 2.1

- 2.2

- 2.3


Max at 8:04 PM on October 1, 2020 | #18192

> * genes encode some ideas which are 'given' to us early in life

Consider a gene pool of, say, wild dogs. Using nanobots, you tinker with it. You sterilize or kill some dogs, or manufacture others, or whatever. You don't make huge changes. You just change the initial conditions. Then you leave the dogs alone for 100 generations.

Do you expect the tinkering to change the end results much? In general I don't. The selection pressures of the environment will control the results. E.g. if you make the dogs have more fur on average, but it's a warm climate, then I think they'll end up with less fur anyway.

Similarly, I don't think the initial ideas in the brain matter a lot. Make sense?

Another pov is you can build ruby on C or java foundations and have the same language. Once you add a few layers of abstraction over the initial functions/APIs/whatever, then the details of them end up not mattering (unless e.g. they were really broken or manage to cause ongoing performance issues).


curi at 8:06 PM on October 1, 2020 | #18193

> I don't think the initial ideas in the brain matter a lot.

for clarity: so you think it is possible we have ideas encoded in genes that are given to ~everyone during prenatal development (or shortly after birth, w/e)?

the idea that the *initial ideas* in the brain don't have any long term significance on our thoughts (and genes can give us some initial ideas) is a stronger and different position than I thought you had.


Max at 8:12 PM on October 1, 2020 | #18194

#18193 I had in mind a dog geneticist who just sorta screwed around a bit.

If he specifically tries to cause a specific result, and puts a bunch of creativity and scientific study into figuring out what changes will cause it, then he might manage to cause it. If he can predict the environment and what'll happen evolutionarily, he might figure out what to do to the gene pool to get a specific feature to be present 100 generations later that wouldn't be present otherwise.

Does biological evolution put that kind of major design effort into controlling high level human ideas like whether someone is an inductivist? No. It doesn't even have knowledge of those things (like induction), let alone knowledge of the whole future memetic selection pressures and evolution of ideas and creation of layers of abstraction and so on that'll happen from ages 0-25. To cause being an inductivist at age 25 would require not only knowledge of inductivist (as expressed in an appropriate framework that makes sense in our present day culture), it'd also require knowledge about that whole childhood and education process and how to manipulate and control it.

How could genes do all that? And even if they theoretically could, there were no selection pressures to cause them to do it in general. You can pick tons of ideas – like that painting is better than sculpture, or that math tests should ban calculators, or that Uber should be allowed into cities immediately despite complaints by taxi drivers – and it makes no sense that genetic evolution would have set things up to control that. Maybe you could try to come up with a few special cases and an explanation of a causal mechanism, but the standard thing is no causation like this.


curi at 8:14 PM on October 1, 2020 | #18195

#18194 I think our genes set us up with adequately powerful and generic hardware + OS + maybe some initial default apps that are replaceable. I don't think these end up mattering that much cuz of choice, abstraction layers, and universality – no missing features/capabilities. As far as their ability to bias us in a particular direction (as in variance in these things between people could make some people more mathy and others more artsy, or some people more angry and some more calm), while it's not exactly zero, I think it's tiny compared to how much culture and childhood and thinking about stuff matters. It's just a drop in the ocean. (This is also DD's position btw.)


curi at 8:17 PM on October 1, 2020 | #18196

> stronger and different position than I thought you had.

What did you think I thought and what's the difference?


curi at 8:17 PM on October 1, 2020 | #18197

#18196 And I don't think the variance between people is anything like intel vs ARM chips or windows vs. linux OS. Even that isn't such a huge deal, but genes created a particular hardware and OS design and variance is limited to be more minor and not break things. Variance isn't gonna be so huge as to create a drastically different design.


curi at 8:19 PM on October 1, 2020 | #18198

#18154

> What sort of effect or influence do you have in mind, via what causal mechanisms?

I'm not sure about the causal mechanism, just that this is *an* effect and it's argued that it happened via evolution at the gene-level.

I think I might have some counterexample to the idea that genes don't play a significant role in thoughts. It's part of a bigger idea, though. I'll try and outline relevant parts of the video.

(I've bolded the key phrases)

- Lindybeige has a **theory on why women have breasts**

- He **explains why other theories aren't sufficient** (e.g. there's one idea that women have breasts to signal fertility, etc, and that theory compares humans to other animals like primates; this is refuted b/c other species have no *permanent* signs of fertility)

- There's a bit about the **EEA (Environment of Evolutionary Adaptedness) and evolutionary context** / selection pressures / social dynamics at the time (social dynamics here means like 'dynamics of hunter gatherer society')

- There's a (conjectured) **chain of reasoning and events** he goes through in early (modern) homo sapien development involving **secret menstruation and how sexes would 'react' for evolutionary advantage**

- part of that conjecture is **male reaction to sexual signals ~flipping** to avoid being unattracted to fertile women

- and this eventually ends with women having permanent breasts

It's that second to last part about male reaction ~flipping that I think might be a counter example.

The video: https://www.youtube.com/watch?v=oWkOvakd9Mo

The reason I think it's a counter example is that this would be a way genes significantly changed thoughts. (assuming ideas like 'she's attractive' and 'she's not attractive' fit the bill for what we're considering.)


Max at 8:27 PM on October 1, 2020 | #18199

> - part of that conjecture is **male reaction to sexual signals ~flipping** to avoid being unattracted to fertile women

The idea of ~flipping is roughly:

- animals are attracted to symbols like swollen breasts / butt, particular inflammations, temporary colouring, etc.

- animals (all but humans) don't have breasts when they don't need them. They only grow them when necessary, and they're not swollen at other times

- modern women have ~swollen breasts *all* the time (there's some difference between lactating/not lactating but it's minor compared to other animals)

-- maintaining breasts costs resources, there's an evolutionary reason not to do it

- the male reaction to swollen breasts is to *not* be attracted b/c it means the female isn't fertile (this is true in other animals)

- human males around the time women developed permanent breasts had this reaction too (along with other things like fatter -> good -> more resources / better chance of children surviving)

- one evolutionary reaction could have been to like fix the 'pattern' for what males found attractive (e.g. breasts -> good now, fatter -> still good)

- but the *simplest* change necessary is just a binary 'not' - i.e. things that weren't attractive now are, and things that were attractive aren't

-- admittedly (thinking about it now) why didn't humans die out because malnourished women were selected over non-malnourished?

- so males had this gene flipped by evolution and breasts were attractive now

This sounds like a way genes had (and have) a significant role in thoughts.

Possible criticism: this is just an idea we get when we're young and some people change it, some don't, but it doesn't mean genes have a *substantial* role in affecting thoughts, just that like this one inborn(?) idea is different.

I marked inborn with a (?) because I'm not sure I'm using it right.


Max at 8:37 PM on October 1, 2020 | #18200

> The reason I think it's a counter example is that this would be a way genes significantly changed thoughts.

It's useful to think through what sorts of genetic effects on thoughts are important and why.

E.g. being tall correlates with the thought "I like basketball" or "I want to be in the NBA" at age 25.

Genes did not evolve to have knowledge of basketball or the NBA. Height genes are just about height.

The causality here is cultural. Culture reacts to (partially) genetically controlled traits like height.

Similarly, culture has some reactions to e.g. hair and eye color, which genes have substantial control over (barring bleach, dye, colored contacts, etc).


curi at 8:38 PM on October 1, 2020 | #18201

#18200 So once upon a time humans were animals. Apes or something. Not yet intelligent. And they had behaviors controlled by genes just like cats do.

Did humans get permanent breasts then or later (after intelligence)? I'm not clear on the claim/story yet.

Anyway, later, humans become human/intelligent. Then they have memes. And memes start taking over control of lots of stuff including sexual preferences, courtship behaviors, etc. Memes evolve faster than genes and have access to better control over adult humans – ideas are in a better position to affect behavior than protein design at ~birth is.

If humans evolved permanent breasts before memes, there's no real issue, right?

If humans evolved permanent breasts after memes, that'd be more complicated. Does Lindybeige claim or address that?


curi at 8:43 PM on October 1, 2020 | #18202

#18200 Overall, you or we could go into more detail on this example, but maybe you'd be content to consider it enough of an unknown, with lots of uncertainty, that it's no reason to reject a model of how intelligence/minds/genes/etc work. I don't see that it's very important to look into this particular example more.


curi at 8:48 PM on October 1, 2020 | #18203

> If humans evolved permanent breasts before memes, there's no real issue, right?

Agreed

> If humans evolved permanent breasts after memes, that'd be more complicated. Does Lindybeige claim or address that?

I can't find a reference to dates more specific than ~last 2.5 million years (the Pleistocene). If he did mention a more specific date I don't recall it and can't find it via some quick searches.


Max at 8:48 PM on October 1, 2020 | #18204

> maybe you'd be content to consider it enough of an unknown, with lots of uncertainty, that it's no reason to reject a model of how intelligence/minds/genes/etc work. I don't see that it's very important to look into this particular example more.

Yeah, I'm content to do that. It's not clear it's a counter example (and even if it were there are lots of issues/unknowns still)


Max at 8:49 PM on October 1, 2020 | #18205

The discussion about genes and intelligence above is discussed (and was written) in:

https://youtu.be/BDwiP4lsC_4

and

https://youtu.be/1J6ECV9L11g


curi at 9:05 PM on October 1, 2020 | #18206

Conversation tree so far (recent entries less refined than older ones): https://maxkaye.s3.amazonaws.com/2020-09-28-curi-genes-int-tree-exported-2020-10-05.pdf


Max at 6:55 PM on October 4, 2020 | #18230

Tangent near the end of a patio11 thread:

https://twitter.com/patio11/status/1315157487633354753

> Lots of cryptocurrency projects think that there is a way for any part of their ecosystem to be done by non-professionals in the long-run and they are all fools.

> Miners, devs, promoters, capital, etc, will all be professionalized.


curi at 12:50 AM on October 11, 2020 | #18280

#18280 I agree. There's a lot of naivety around 'decentralisation'.

There are caveats, though. Increasing the accessibility of some previously professionalised thing (e.g. arbitrage) can result in more people doing it--and at lower volumes. But, in those cases, the professionalisation is just moving from the person doing the thing to the programmer(s) maintaining the feature.


Max at 2:48 PM on October 11, 2020 | #18288

(Tutoring Max #49) There are no conflicts of interest between rational men.

Talking with curi during Tutoring Max #49

Topic: There are no conflicts of interest between rational men.

----

## rough brainstorming

idea seems to be

- if people want to do good / make progress / improve something

- then that has to be compatible with objective reality

- reality is such that we can't choose the right path to make progress

- rational people will focus on a goal (which is not doing harm to someone particularly)

- and the method to get that goal has to be compatible with objective reality

... (idea feels unclear so I'm swapping brainstorming topic)

'possible' solutions

- violence

- compromise

- 'winner' pays 'loser'?

- auction -> one person no longer wants the job?

----

What is the scenario, what is the conflict, and why is it not fixable?

## scenario

Alice and Bob both want a particular job. They are both suitable applicants. There's only one job, so at most one of Alice and Bob can get the job.

## conflict

Alice/Bob are competing for a scarce resource. They might think that their life would be worse if they didn't get the job.

## fixableness

There are ways to fix it by introducing e.g. another position like the first, but is it fixable without introducing stuff?

Alice/Bob could talk and one could persuade the other it'd be better not to have it.

Fixableness has a time constraint -- knowing a solution might be available in the future doesn't help the problem now.

So for it to be 'fixable' we'd need a solution that generally applies to all situations like this, and we need to be able to apply the solution right away.


Max at 7:31 PM on October 15, 2020 | #18320

#18320 It's important to think about scenarios in reality. Say the business owner, Joe, wants to interview both Alice and Bob, and then wants to hire Alice not Bob.

In what scenario does Bob get the job? What series of events? What exactly does Bob want to be different (or in the recent past) and by what means would that change be achieved?


curi at 7:33 PM on October 15, 2020 | #18321

#18321

> It's important to think about scenarios in reality. Say the business owner, Joe, wants to interview both Alice and Bob, and then wants to hire Alice not Bob.

This sounds like a situation where, if Bob knew Joe's thoughts, he shouldn't want the job. If Joe's already made up his mind, wouldn't that be a reason for Bob to spend effort on other opportunities?

> In what scenario does Bob get the job? What series of events? What exactly does Bob want to be different (or in the recent past) and by what means would that change be achieved?

Bob gets the job if Joe changes his mind, or Alice finds another job (or otherwise withdraws).

Joe might change his mind if he finds out something bad about Alice, or if it turns out Joe's idea of Alice was wrong. There could be lots of ways that happens, but it's not something that can be relied upon. Joe might also learn something new about Bob.

Generally it seems like either Joe or Alice would need to change their mind or learn something new for things to end up with Bob getting the job.

Bob wants Joe's opinion to change (the opinion that Alice is the better one to hire). Bob could do a really good interview and persuade Joe -- or something like the above could happen.

I guess something unexpected could happen too (like Alice getting hit by a bus) but I don't think Bob wants that so it seems pointless to expand on.


Max at 7:41 PM on October 15, 2020 | #18322

It’s in both job seekers’ interests that jobs are given out according to the capitalist system where the business owner or his proxy decides who to hire. If he hires Alice, there’s no way Bob could have that job other than if a different system were in place. But that system would make everyone much worse off including Bob because it’d involve limitations on freedom, government meddling in the economy, pointing guns at people to get jobs from them, or something else bad.

People commonly have mutual interest that something is decided by a certain method which has good traits like being fair, free or rights-respecting. That a particular outcome goes against me doesn’t mean it’s in my interest to change the system itself. With capitalist hiring, I’m much better off applying for some other jobs than living in a society without a capitalist economy.


curi at 7:42 PM on October 15, 2020 | #18323

If Joe is bad at hiring, that may be bad for me. I may get a worse result. But it's bad for him too. This isn't a conflict between me and Joe. He's trying to deal with life and hiring well. If he's doing it poorly, that's due to ignorance, lack of skill, etc., not due to what benefits Joe and what benefits me being in conflict.


curi at 7:44 PM on October 15, 2020 | #18324

#18323

> It’s in both job seekers’ interests that jobs are given out according to the capitalist system where the business owner or his proxy decides who to hire. If he hires Alice, there’s no way Bob could have that job other than if a different system were in place. But that system would make everyone much worse off including Bob because it’d involve limitations on freedom, government meddling in the economy, pointing guns at people to get jobs from them, or something else bad.

Re particularly:

> If he hires Alice, there’s no way Bob could have that job other than if a different system were in place.

One way Bob could have the job is if Joe had better ideas -- in the case Joe has mistakes in his thinking. That seems like it'd be compatible with the same system. If we're presuming Joe is rational, isn't that a somewhat high bar? I'm not sure everyone could measure up to it.

> But that system would make everyone much worse off [...]

I agree for lots of these possibilities. Systems that use violence to enforce rules on this sort of thing would be bad.

> People commonly have mutual interest that something is decided by a certain method which has good traits like being fair, free or rights-respecting. That a particular outcome goes against me doesn’t mean it’s in my interest to change the system itself. With capitalist hiring, I’m much better off applying for some other jobs than living in a society without a capitalist economy.

* this sounds like approximately: principles trump circumstance

* * it's better to be working within a good system than profiting in the short term from a bad system, even if a circumstantial outcome is superficially less good for you.

I agree with: a world with short term good outcomes from a bad system is worse than a world with a good system.

Do you think there are any other methods by which jobs could be handed out? Does Joe having better ideas count as another method?


Max at 7:53 PM on October 15, 2020 | #18325 | reply | quote

#18324

> If Joe is bad at hiring, that may be bad for me. I may get a worse result. But it's bad for him too. This isn't a conflict between me and Joe. He's trying to deal with life and hiring well. If he's doing it poorly, that's due to ignorance, lack of skill, etc., not due to what benefits Joe and what benefits me being in conflict.

Okay, I see how this answers the idea that Joe's ideas have something to do with a conflict of interests. It'd be in both your interests for Joe to be better at hiring if he was bad at it. But Joe can't magically get better. So Joe just is what he is in that role. It's better he make a free choice than be coerced or something. So any alternative system that coerces him is worse, and in any system where he has a free choice he'd act roughly the same anyway.


Max at 7:58 PM on October 15, 2020 | #18326 | reply | quote

> One way Bob could have the job is if Joe had better ideas -- in the case Joe has mistakes in his thinking. That seems like it'd be compatible with the same system. If we're presuming Joe is rational, isn't that a somewhat high bar? I'm not sure everyone could measure up to it.

Yes but having better ideas is also in Joe’s interest. The problem here is that good ideas are hard to come by and people aren’t perfect, not that Joe prefers bad ideas. So it’s not a conflict of interest. I also commented on this in #18324 which I don’t think you saw yet.

>> But that system would make everyone much worse off [...]

> I agree for lots of these possibilities. Systems that use violence to enforce rules on this sort of thing would be bad.

>> People commonly have mutual interest that something is decided by a certain method which has good traits like being fair, free or rights-respecting. That a particular outcome goes against me doesn’t mean it’s in my interest to change the system itself. With capitalist hiring, I’m much better off applying for some other jobs than living in a society without a capitalist economy.

> * this sounds like approximately: principles trump circumstance

> * * it's better to be working within a good system than profiting in the short term from a bad system, even if a circumstantial outcome is superficially less good for you.

How would Bob profit from a bad system?

If the system is e.g. you use bribes to get a job, then maybe he'd get this particular job (or maybe Candice or Dillon would get it, who knows). But he'd certainly run into the problem of "someone beat me out for the job I wanted" in a bribery-based system.

It's the same with a system of favors and friendships. It's hard for Bob to know he's the best connected applicant this time, even if he knows he has a stronger social network than Alice. And even if he would have gotten this job under that system, he'd miss out on others. It wouldn't solve the problem of Bob not getting every job he applies for.

Bob, if he's bitter, may not understand the purpose of having job applications. Why have more than one person apply for a job opening that's available to only one person? The point is to try to use some objective tests to find a good candidate. If Bob doesn't want that to happen, then he's giving up on earning jobs by merit as a lifestyle. And he's imagining a world where, what, only one person is allowed to apply for each job? What's that even mean? The King just tells you what job you can have? Or first come first serve?

> I agree with: a world with short term good outcomes from a bad system is worse than a world with a good system.

I doubt that any of the general purpose systems like "bribery" or "favors" for assigning jobs actually would offer Bob all the jobs he wants. They might well fail to give Bob this particular job. They might well not only deny Bob this job but make it much harder for him to find an alternative one.

But those are generic, principled systems, even if the principles suck. What about a biased system? What about a system where Bob is in charge of everything? Would *that* be in Bob's interests? Should people want to be a king?

> Do you think there are any other methods by which jobs could be handed out? Does Joe having better ideas count as another method?

I don't know a better system than capitalism/freedom/property-rights/etc.


curi at 8:00 PM on October 15, 2020 | #18327 | reply | quote

#18327

>> I agree with: a world with short term good outcomes from a bad system is worse than a world with a good system.

> I doubt that any of the general purpose systems like "bribery" or "favors" for assigning jobs actually would offer Bob all the jobs he wants. They might well fail to give Bob this particular job. They might well not only deny Bob this job but make it much harder for him to find an alternative one.

> But those are generic, principled systems, even if the principles suck. What about a biased system? What about a system where Bob is in charge of everything? Would *that* be in Bob's interests? Should people want to be a king?

I think it's rational to want systems which can be agreed upon by everyone. Sort of like a 'lowest common denominator'. I don't think rational people want a system that's unfair--like Bob being in charge of everything.

I don't think people should want to be a king. One reason is that if I wanted to be a king, and was willing to do necessary things to achieve that, then I should expect other people to do so too. That just ends in violence, etc. Another reason is that if we were all kings it would be like having a billion city states, which would suck b/c we'd end up like subsisting.

There are reasons based on principles too, like being a king means using force to get your way, which is bad. But not everyone agrees on those. I think people more generally agree on practical stuff like 'if we all did that we'd all have nothing'. That's why I chose to write the two practical reasons.

>> Do you think there are any other methods by which jobs could be handed out? Does Joe having better ideas count as another method?

> I don't know a better system than capitalism/freedom/property-rights/etc.

I guess *all* other systems have to be better or worse than that. There's no orthogonal direction. I'm unsure if there are things to consider other than what we already did: stuff that looks like a conflict but isn't (e.g. Joe's ideas), and alternate systems for distributing jobs.


Max at 8:12 PM on October 15, 2020 | #18328 | reply | quote

#18328 If Bob wants to be King, then he isn’t concerned with mutual benefit. He’s creating conflicts of interests by pursuing policies to benefit himself at the expense of others. This will result in rebellion. It gives people incentive to kill, exile or imprison Bob. It gives people incentive to work against Bob, undermine him, and make his life harder. This is actually worse for Bob than peaceful, harmonious capitalism would be.

And if Bob is to be King, how will he achieve it? A violent revolution in which he might perish or be betrayed by one of his lieutenants who wishes to be King himself?

And if Bob is already King, how does he stay in power? Secret police? Dictators often die. It’s a risky job. And if one has the skill/luck/capability to win the contest for dictator, why not put those same energies into a business instead? Bob could have been better off as a billionaire than a dictator. In general, even when crime pays, it pays less than the market rate for all the work/skill/risk it takes. Because it’s easier to make a profit when you collaborate with people than when you fight with them. It’s easier to profit when other people’s actions are helping you and making you more successful than when their actions are working against you and subtracting from your success.

And being a violent dictator or criminal leader requires rationalizing that to yourself and thus alienates you from reason and good ideas.


curi at 8:12 PM on October 15, 2020 | #18329 | reply | quote

#18328

> Another reason is that if we were all kings it would be like having a billion city states, which would suck b/c we'd end up like subsisting.

I forgot to mention: the other option with all of us being kings is basically capitalism/freedom/property-rights/etc, anyway.


Max at 8:14 PM on October 15, 2020 | #18330 | reply | quote

I'm stuck on something. It's like there are two ideas that feel circular but they oppose each other.

I'm worried there's like a tautology / circular reasoning b/c of the 'rational men' thing. Wouldn't rational men always agree on things (eventually) anyway? So the system doesn't have anything to do with the lack of conflict. But people often aren't rational, so doesn't that mean there might be a system which is better than capitalism?

self-commentary: saying *people aren't rational -> there could be something better than capitalism* is circular b/c the idea of something being better than capitalism was the reason for saying ppl aren't rational.

(note: I'm not really sure this is circular but I'm getting too hung up on it)


Max at 8:29 PM on October 15, 2020 | #18331 | reply | quote

#18331 I don’t think the “rational” qualifier is required any more than the sometimes-used “long term interests” qualifier. It’s in people’s best interests to be rational and to consider their long term interests, not merely the short term.

The liberal claims re harmony of interests don’t rely on unlimited knowledge. They are not like “if men knew everything, there’d be harmony”. They are about avoiding conflict now. Understanding why you shouldn’t hate competitors for a job is achievable today given currently available knowledge.


curi at 8:29 PM on October 15, 2020 | #18332 | reply | quote

#18332

> #18331 I don’t think the “rational” qualifier is required any more than the sometimes-used “long term interests” qualifier. It’s in people’s best interests to be rational and to consider their long term interests, not merely the short term.

Yeah okay. The next thing I started thinking about was whether there was a conflict of interests between ppl who try to be rational but aren't perfect.

I'm not sure bringing systems into the discussion is necessary to make the main point. Like: if you pursue rational choices then there aren't any deal-breaking conflicts you have with anyone else who pursues rational choices. That seems fairly self-evident.

> The liberal claims re harmony of interests don’t rely on unlimited knowledge. They are not like “if men knew everything, there’d be harmony”. They are about avoiding conflict now. Understanding why you shouldn’t hate competitors for a job is achievable today given currently available knowledge.

Hmm, maybe systems are necessary to bring into it. Like if two people are pursuing rational choices but think there's a conflict, then there need to be some rules by which they evaluate the situation. The system is like the equilibrium everyone can agree on, and since there's only one: it's special.

I'm not sure I'm properly understanding it, though.


Max at 8:36 PM on October 15, 2020 | #18333 | reply | quote

#18333 Can you come up with some other scenarios, besides competing job applicants, with some sort of apparent conflict of interest?


curi at 8:37 PM on October 15, 2020 | #18334 | reply | quote

Liberalism/capitalism allows people to live in a commune and share stuff if they want to. There are many rival ideas about the best ways to live in a peaceful world but those are sub-types of liberalism. The standard terminology is that liberalism is the system of peace and freedom, and its rivals are the systems that reject peace and social harmony in some way.


curi at 8:39 PM on October 15, 2020 | #18335 | reply | quote

#18334

> Can you come up with some other scenarios, besides competing job applicants, with some sort of apparent conflict of interest?

* one banana tree but two hungry people (and not enough bananas)

* multiple candidates running in the same election

* rich guy in a suit walking past drowning person (I'm not sure about this one)

* limited edition consumer goods

* competing for entry into a tournament (like the tetris world cup where the top 50 ppl go through)

* two kids who want particular gifts but their parents don't have enough money for both gifts


Anonymous at 8:43 PM on October 15, 2020 | #18336 | reply | quote

#18336 OK and can you provide solutions to those? Why isn't each one a conflict of interest?


curi at 8:43 PM on October 15, 2020 | #18337 | reply | quote

What do you now think of these scenarios? Got some solutions re them being potential conflicts of interest?

- We both want the same diamond.

- We both want the same computer.

- We both want to marry the same woman.

- We both want the same slot on the manned mission to the moon.

- We both want to be President (of the same country).

- We both want to be the top commander of the army.

- I want to speak my mind but you don’t like what I have to say and would prefer I shut up.

- I want to kiss you but you don’t want to kiss me.

- I sell printers and you sell printers and we’re competing for customers.


curi at 8:58 PM on October 15, 2020 | #18338 | reply | quote

try working on a discussion tree re conflicts of interest. you don’t have to include everything. you can pick important parts or paraphrase stuff if you want. or go through and do the entire discussion text. it’s up to you what you think would be useful.


curi at 8:59 PM on October 15, 2020 | #18339 | reply | quote

Initial answers to some conflicts of interest questions (TM#49)

also posted to: https://xertrov.github.io/fi/posts/2020-10-18-notes-on-conflicts-of-interest/

Can all these be resolved?

> - We both want the same diamond.

> - We both want the same computer.

> - We both want to marry the same woman.

> - We both want the same slot on the manned mission to the moon.

> - We both want to be President (of the same country).

> - We both want to be the top commander of the army.

> - I want to speak my mind but you don’t like what I have to say and would prefer I shut up.

> - I want to kiss you but you don’t want to kiss me.

> - I sell printers and you sell printers and we’re competing for customers.

## principle

Conflicts of interest (CoIs) seem to exist sometimes. When considering rational ppl or trying-to-be-rational ppl, those conflicts don't actually exist--they're illusions which can be resolved. They look like conflicts because we're ignoring the bigger picture. Ppl involved in the CoI shouldn't want to 'win' via a system which uses force to get an outcome. They should want a system that's fair and works generally. A system with universality.

Systems which use force or unwritten rules are not preferable to free-market situations b/c they have adverse consequences outside of one's control (e.g. violence, 'winners' being decided by something like physical attractiveness or social status, etc). The outcomes -- when decided with alternative systems -- are worse for the ppl involved. Reasons include: bad distribution of resources, outcomes being based on perceived problems that a person can't solve (e.g. not handsome enough), harm being done (e.g. violence), etc.

## We both want the same diamond.

Expansion of situation: we are both in a shop buying an engagement ring for our respective soon-to-be fiancées, and want the same diamond (diamond-A).

1. The initial 'solution' is that the shop sells diamond-A to whoever asks for it first. Person-A gets it. This is okay because both ppl can agree to a first-come-first-serve model (which is typical and expected).

2. Maybe person-B *really* wants the diamond. They can offer to buy it from person-A. This is okay because it's consensual trade where both ppl are better off.

3. Say person-A says they want to buy it but hasn't paid, but person-B has the cash now. The shop could work on a first-come-first-serve basis where the transaction is the important moment (who can pay first), so person-B gets it. This is an agreeable system.

4. Maybe there is another diamond (diamond-B) that one of the ppl is happy with, so person-A gets diamond-A, person-B gets diamond-B.

in each case an alternative system of distribution (based on attractive looks, or social status, or bribes, or whatever) is not preferable -- it's a worse society to live in.

## We both want the same computer.

Say it's a rare old computer so there's only one of them and it's not fungible. We can agree on a system which is fair, like an auction, and proceed on that basis.

## We both want to marry the same woman.

She should choose who she wants to be with (if either of us). We shouldn't want to be with someone who doesn't want to be with us (that would be bad for both me and her). We should both want her to be able to consider both of us. If I had an advantage (e.g. knew her earlier) and tried to stop her meeting you b/c I thought she'd prefer you, then it means I have to keep that effort up WRT you and anyone else she might meet. So eventually I'd need to be coercive or forceful to do that. Hurting the person you want to marry is a shit thing to do (and a bad way to live long term), so I shouldn't want to prevent her evaluating other potential partners. I should actually be in favour of that because it means problems are apparent sooner rather than later. Living in a relationship where big problems *will* occur and that can't be resolved (e.g. she changes her mind about wanting to marry me) is bad for me, so if there will be problems I should want to know about them as soon as possible.

## We both want the same slot on the manned mission to the moon.

Say there are 3 crew slots and 2 crew members have been decided and are better candidates than us (at least for those slots, like the other crew have skills we don't).

### notes on alternatives to free-market / merit based judgement

- We shouldn't want to be chosen if that would jeopardise the mission -- it being successful is more important. we can agree that the most qualified person should be chosen, or the person otherwise chosen s.t. the mission has the greatest chance of success. Maybe we're equally qualified, though.

- We don't want a system where one of us is harmed (e.g. I hurt your family to keep you out of the mission). If I wanted that it could mean my family (or me) is hurt, which I don't want.

- We don't want the mission to be jeopardised for political reasons (or other parochial stuff), so we should be in favour of selection criteria which are publicly and politically defensible (and just).

- We don't want a system where one of us is prevented from doing stuff in the future like other moon missions.

- We don't want a system where NASA (or whomever) regrets their decision (e.g. because it was made via nepotism or whatever).

- We don't want a system where we hate each other because that could mean we can't be on the same future mission or otherwise end up excluded from other stuff.

### solutions?

- We can agree on a system based on merit

- We can agree on a system where NASA maintain a suitable body of astronauts (like a minimum number of astronauts kept in reserve), so some rotation is necessary (maybe one of us went on the last mission so the other should go on this one)

-- We can also agree on a system which takes into account future rotations, e.g. flip a coin and one of us goes on this one, and the other goes on the next mission

- We can agree on a system that doesn't bias one of us for external reasons like social status (if that happened, all missions would be worse off and have a lower chance of success)

operating under these sorts of systems is preferable to winning the slot under a different system. if it was some different system then how could we be confident that our crew is the best crew possible?

## We both want to be President (of the same country).

Note: curi and I sort of started discussing this at the end of *Tutoring Max #49*.

We should both be in favour of a good system for selecting a president. We can agree on important features such a system should have, like not favouring one of us. We should want a system where the victory conditions are clear and compatible with our values. We should want a system where we could lose b/c it's possible the other person is a better choice regardless of what we believe.

The conflict only exists when we have bad, irrational systems for choosing a president. If the system is bad then we can both agree changing the system is more important (and subsequently find a system which satisfies both our goals).

If there are other candidates, we should prefer those candidates who will institute a better system to those who won't. If there are perverse mechanics in the selection system (e.g. in first-past-the-post, running 2 similar candidates can *decrease* the chance of a favourable outcome: they might split 60% of the vote 30/30 while a worse candidate wins with 40%) then we should both be in favour of cooperating to maximise the chance of one of us winning over bad candidates. We can find such a system.

We could also run a pre-election or something to decide which of us runs in the main election (similar to primaries in USA).

## mid exercise reflection

I worry that I'm missing something. Are these adequate answers? Do any of the apparent conflicts persist after what I've written?

I think these are hard problems to write about -- in some ways -- b/c there are always unknown and unspecified details which could be chosen to make the situation a 'nightmare situation' (as curi put it in TM#49).

Going to have a think and maybe come back to this later.


Max at 9:25 PM on October 17, 2020 | #18349 | reply | quote

Some thoughts on good/bad error msgs. I think they're important. I found a surprising overlap with *helping the best ppl or helping the masses*

context: I'm thinking about error feedback and how it affects groups of people / group efforts in a general sense, and I'm also thinking about the sorts of error msgs programmers get in the regular course of programming and how it specifically helps/hurts software projects. The thoughts below are a mix.

* are error messages a good way to organise issues? (e.g. in software dev).

* they have an important role: they guide ppl who know less (than the developers, or other community members, etc).

* if error msgs were a bad way to organise issues then there'd have to be a better alternative system. what would such a system be like, relatively speaking?

- it would put more burden on the ppl affected by errors b/c it's harder to know/learn how to report and solve the errors

- it would mean responsibility for the quality of error reporting would be shifted towards the shoulders of newbies

- such an alternative system would treat the relevant (preventable!) errors less seriously

* why could that be good?

- it'd mean there was a higher bar for engaging with top tier ppl

- it filters out ppl who are not able to understand the problem at least enough to figure out how to begin to deal with it

- if the best ppl don't know how to prevent relevant errors then isn't it better for them to focus on solving those problems rather than helping ppl who aren't as valuable?

* why could it be bad?

- higher bar to error correction -> less error correction

- easy to discourage ppl and end up reinforcing static memes / driving ppl away

- if the best ppl didn't know how to prevent the relevant errors then they'd end up working on the problem anyway; makes sense that there's an equilibrium here; after all, ppl are voluntarily participating on both sides.

relevant to:

- helping the best ppl or helping the masses

- error msgs and ~responsibility of senior members

- there is no one constant set of behaviours that makes sense WRT helping the best ppl vs the masses; what matters is context. is it a good time to help one or the other? if lots of ppl have really bad ideas then it's probably worth helping the best ppl -- so we can find a good soln to that problem. conversely, if we don't have any great ppl at that time, or are otherwise short of *great* opportunities, then there's more utility in helping the masses. there needs to be fertiliser for future generations, but also nourishment for current ppl in their prime. those *great* opportunities can be vicarious, ofc. Man's first journey to the Moon was a journey shared by a Nation.

- there's a big question raised by this: how should we react to *learning* of a great opportunity?

Finishing up: what happens if someone goes to an effort to make error msgs as good as possible? (there's a rough code sketch after this list)

- organisation gets better b/c the error messages are better suited to the associated errors

- it gets easier for ppl to help with / do error correction b/c the msgs/explanations match the contextually best ideas more closely and are more reliable to reason about.

- exponential/geometric increase in effectiveness of relevant key ppl -- their time can be better allocated, delegation gets easier, etc

- mutually beneficial for all parties. (note: this relies on the ability to improve error msgs and the right ~economic context to make it the easy choice. OTOH I think that's reasonably common. most non-optimum situations don't hurt much and can be easily controlled via the 2nd-derivative (~acceleration). if there's a bit too much work on good error msgs then you can just reduce the hours per week by 10%; it can be gentle without much harm. the harm I mention here is wasted resources in a generic sense.)
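
To make the good-vs-bad error msg idea concrete, here's a minimal Python sketch (not from the discussion above -- the function name, file path and messages are made up for illustration). Same failure, reported two ways:

```python
# Hypothetical example: the same failure reported two ways.

def load_config_bad(path):
    # Bad: the error tells the person hitting it nothing about
    # what went wrong or what to do next.
    try:
        with open(path) as f:
            return f.read()
    except OSError:
        raise RuntimeError("error")

def load_config_good(path):
    # Good: the error names the file, the cause, and a next step,
    # so someone who knows less than the developers can act on it.
    try:
        with open(path) as f:
            return f.read()
    except OSError as e:
        raise RuntimeError(
            f"couldn't read config file {path!r} ({e.strerror}); "
            f"check that the file exists and is readable, "
            f"or pass a different path"
        ) from e
```

The second version is doing the organising work upfront, so responsibility for the quality of error reporting stays with the developers instead of shifting onto the shoulders of newbies.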

## clarifying stuff

I didn't put a huge amount of thought into particular word choices because they felt difficult and I didn't want to ruin the flow. Here are some clarifications:

- *responsibility* as in *~responsibility of senior members*: i don't mean anything like an obligation, but if there was a clear moral decision then it'd line up with that.

- *2nd-derivative (~acceleration)*: controlling the rate-of-rate-of-change is useful if you want to control the outcomes of some (simple enough) system, and acceleration is a reasonably common way of talking about that.


Max at 3:37 AM on October 20, 2020 | #18365 | reply | quote

I edited the OP to add the Max tutoring playlist link https://www.youtube.com/playlist?list=PLKx6lO5RmaetREa9-jt2T-qX9XO2SD0l2


curi at 5:38 PM on October 22, 2020 | #18396 | reply | quote

some thoughts on project planning - Max

Context: project planning differences between doing projects yourself vs with a team

When I do projects myself, I work with a pseudo-JIT planning method. For the most part, the way I do prioritisation is based on immediate dependencies. I can also change focus with low overhead (like work on UI a bit, then backend, then UI, then backend).

Team projects are different. A lot more of the dependency graph needs to be defined upfront. There's a large overhead in transferring knowledge and changing who's working on what.

Does this difference matter for project planning? I suspect the methods to avoid fooling oneself mean the outcome is fairly similar. Like my naive method of JIT prioritisation leaves a lot of room to work on things that aren't that important -- to fool oneself.
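
A minimal sketch of the pseudo-JIT method (the task names and dependency structure are made up for illustration): prioritisation is just "what's unblocked right now?", computed from immediate dependencies rather than a full upfront plan.

```python
# Made-up task data: task -> list of tasks it depends on.
tasks = {
    "design schema": [],
    "backend api": ["design schema"],
    "ui mockup": [],
    "ui": ["ui mockup", "backend api"],
}
done = set()

def next_tasks():
    # JIT prioritisation: a task is ready iff it's not done
    # and all of its immediate dependencies are done.
    return [t for t, deps in tasks.items()
            if t not in done and all(d in done for d in deps)]

while len(done) < len(tasks):
    ready = next_tasks()   # assumes no dependency cycles
    task = ready[0]        # a solo dev can pick whichever ready task
    done.add(task)         # appeals (e.g. alternating UI and backend)
    print("worked on:", task)
```

The team-project difference shows up in the data more than the loop: `tasks` would need to be mapped out much more fully upfront, and handing an in-progress task to someone else carries the knowledge-transfer overhead mentioned above.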


Max at 2:39 AM on October 26, 2020 | #18505 | reply | quote
