15 Comments
Tobias Leenaert

Congrats on this very well-written, interesting, and useful article. Before starting, I never thought I'd finish it, but I'm very glad I did. The only time I got sidetracked was when I took your advice and started a convo with Claude on sentience, which took longer than expected :)

Aidan Kankyoku

That’s exactly the reaction I was hoping for, and you’re exactly the kind of person I was hoping to hear it from!

Mel Brennan

I think this might be the longest article I've ever read, and while I'm still processing it all, I'm quite certain that my approach to many parts of my work is going to change significantly (and hopefully become more effective) as a result.

Thanks a lot for the time and effort this must have taken to write!

Aidan Kankyoku

You just made my week, Mel! Even one response like that makes all the time well worth it!

JoA

Also, I'd encourage you to linkpost this on the EA Forum. While there's a lot of discussion there, as you highlighted, I'm sure many users would be excited to know about this post!

Aidan Kankyoku

Thanks for the push, I'll share it there!

Sentient Futures

Included in our newsletter (will be published today) because I respect Joseph's opinion, and he was impressed with this! I appreciate the audio version - have saved it to my podcasts! - Sam

Aidan Kankyoku

Wooo thanks Sam & team!

Joey Bream

This is an excellent post. It is also incredibly long. I've written a summary here that I think captures a lot of your message :) https://open.substack.com/pub/joeybream/p/summary-sandcastles-the-tsunami-is

JoA
Nov 4 (edited)

After several hundred hours of discussing this topic and writing about it, I still doubt that "anticipating" will be better for animals in expectation. However, this might be the first "intro post from first principles" on the subject, and it goes pretty deep into the topic, so I admit I'm impressed!

Sadly, again, I do think our situation in figuring out how to defend animals in a post-AGI world is very similar to that of a caveman in 12,000 BC who wished to take action during his lifetime to effectively improve the treatment of animals in the 21st century. But I think you highlight this quite well, and further exploration of the topic makes sense in principle.

Aidan Kankyoku

I've become less optimistic about anticipation-type interventions, probably for the same reasons, but not enough that I'm ready to give up thinking about them, though I haven't spent as much time sorting out my thoughts. Two important disanalogies with the caveman are: 1) we have the situational awareness to know dramatic change is coming, i.e. we are even capable of asking the question, and 2) we have a lot more past data about what orders of magnitude (OOMs) of economic growth look like. I think we shouldn't count ourselves out! We're definitely in an unprecedented position, but we can reflect on that position in an unprecedented way.

Billie Groom, PhD(c)

Agree. We need to stop relying on the current leading orgs to make change.

Aidan Kankyoku

I agree that small, agile, and likely new orgs will generally be better positioned to respond to the dizzying changes headed our way!

Jasnah Kholin

It took me more than a week to read the whole thing, and yet, here I am!

There is something that bugs me about the post: the call to use more AI, without any mention of the dangers. It doesn't look like you made a tradeoff, but that you're simply not aware such a tradeoff exists at all.

Some time ago I read a post about how the ability to manipulate people through their social media feed is a reason to avoid social media. The post was directed at people working on AI specifically.

In the same way, one of the more mundane dangers of AI in no-singularity scenarios is that people get their information from AI, so AI can spread disinformation and restrict what parts of the world people can see.

You wrote some suggestions for using AI for persuasion that look very Black Mirror-like to me: both something to be scared of, and definitely Dark Side things to avoid doing myself.

Personally, I'm very ambivalent about using AI. Are you sure that the future belongs to the AI-literate, and not to those who avoid being manipulable by Grok when a certain millionaire decides he wants to take over the world?

There is a Missing Mood of fear here. It may look weird to say that a post about the possible end of the world lacks fear, but it sure looks to me like you are not afraid of what Claude can do to you.

Comment removed (Nov 5)
Aidan Kankyoku

I agree on the cultural resistance to clean meat point (I actually wrote about that back when I was at Pax Fauna: https://paxfauna.org/social-norms-blind-spot/), though I've updated to be less worried about cultural resistance, given that cultured meat will be one of the least weird things I expect to happen in the coming decades. As for narrow windows of opportunity, I also agree there. It's such a hard balance to strike, because over-adjusting for a timeline that turns out to be faster than reality could be just as harmful as moving too slowly.