#7 – (Essay-free) Embodied practices for meaning-making, the intelligence of decentralized networks, and other stuff

Hello there, friend!

This week is unfortunately essay-free – I seem to have bitten off more than I can chew, so it’s taking me more time to synthesize the essay to my liking. To at least give you a preview, the Jordan Hall link below is one of the key pieces I’m pondering.

Last week’s dig-ups

Personal metawork

  • Our dopamine system is constantly learning as we pursue our goals – not only the magnitude and likelihood of a reward associated with a given pursuit, but also the delay we should expect between effort and reward. So when we pursue quick gratification, we actively undermine our patience and persistence.
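
    • If it helps to see the mechanism, here’s a toy sketch (my illustration, not the article’s – it assumes a plain running average stands in for the real delay learning): each reward nudges our expected effort-to-reward delay toward the delay we just observed, so a streak of near-instant rewards drags the estimate down and slower pursuits start to feel overdue.

      # Hypothetical toy model: a running average standing in for delay learning
      def update_expected_delay(expected, observed, rate=0.2):
          """Nudge the expected effort-to-reward delay toward the observed one."""
          return expected + rate * (observed - expected)

      expected = 30.0  # days we currently expect to wait for a payoff
      for _ in range(20):  # a streak of quick-gratification hits, ~0.1 days each
          expected = update_expected_delay(expected, observed=0.1)
      print(round(expected, 2))  # ~0.44 – long pursuits now feel overdue almost immediately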

Collective metawork

Entrepreneurship

  • JK Molina on the three key problems of marketing: traffic (getting eyeballs on your offer), the offer itself, and the market (are there actually enough people willing to pay for your offer?). His main point is that you never want to commoditize yourself and charge for work done (usually per hour), but rather charge for the value delivered.

    • He also admits Alex Hormozi had a big influence on him, so it makes sense that their advice is similar.

Philosophy & Sense-making

  • Rafe Kelley on how embodied practices (e.g. parkour or martial arts) can help us understand how we relate to the world in the physical realm. Such practices can and should be put into a dialectical relationship with other practices to facilitate the meta-practice of development and meaning-making.

    • FYI, the audio is a bit out of sync with the transcript here – it starts about a minute early.

  • If you’re feeling brave, the chief AI-doomsayer Eliezer Yudkowsky was finally on Lex Fridman’s podcast. As an introduction to AI existential risks, the Bankless episode is better, but this is a good follow-up. Some interesting takeaways for me:

    • Using AI to align AGI is problematic, because we’d essentially be using a weaker system to align a stronger/smarter system, and the smarter system would likely learn to exploit the weaker system’s blind spots.

    • The paperclip maximizer thought experiment as it’s usually presented is a watered-down version of the original, basically warning us “careful what you wish for.” This is the problem of outer alignment, i.e. how to specify a goal that actually captures what we want, so the AI doesn’t take it too literally. But for that to even become a problem, we first need to solve inner alignment, i.e. how to get the AI to reliably pursue the goal we gave it in the first place.

  • Jordan Hall on how decentralized networks are much better systems for navigating complexity, but only insofar as their members think for themselves and don’t attempt to distort the signal.

    • I first discovered Jordan Hall and other thinkers appearing on Rebel Wisdom about 18 months ago, and it was through these conversations that I ended up writing my thesis on self-management. As I’m ramping up my writing, I’m also revisiting these conversations with much delight.

Reflection

  • I spent a ridiculous amount of time on my monthly and quarterly reflections on Sunday. Main lesson learned? Keep it simple, stupid – I absolutely overcomplicated the Notion system I designed for this, without getting any feedback on how well it worked until now. It’s now dramatically simplified, using simple properties instead of fancy relations, and it seems to work much better. I guess I’ll see in 3 months.

And that’s it! I hope you liked it despite the absence of the essay – I’ll do my best to make next week’s read worth the wait.

Bye for now

Chris