[Weekly Roundup] - Dec 28th
A lot of folks ask me about how I manage to stay on top of all the stuff happening with AI. The simple answer is that I don’t. I mostly just follow my intrinsic curiosity, and that’s been a consistently useful thing to do.
However, that doesn’t mean it’s totally chaotic and goalless. For more than a decade, I’ve been following a practice whose value has only compounded over time.
Essentially, I’m always trying to form an integrated model of the world. My model should be able to explain everything from deeply personal, existential questions (e.g. “What sorts of stimuli arouse feelings of happiness in me?”) to mundane business questions (“What novel capabilities do I anticipate $FRONTIER_LAB will launch in the next ~3-6 months?”). So when I learn something in one domain (e.g. machine learning), I immediately try to understand how I can apply it to another. In this way, I’m always seeking to construct an extremely holistic description of the world. In particular, I pay close attention to when my mental model is wrong, and more specifically to why it was wrong. This can be emotionally very challenging, but it’s also been the most valuable part of the practice for me. For my ML readers, think of it as “hard negative mining”: when you’re training a model and data is scarce, would you rather invest your energy in sourcing easy examples or hard ones?
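For readers unfamiliar with the analogy, here’s a minimal sketch of what hard negative mining looks like in code. Everything here (the function names, the toy vectors) is my own illustration, not something from a particular library: the idea is simply that the “hard” negatives are the wrong answers that look most similar to the anchor, and those are the ones worth spending scarce labeling effort on.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def mine_hard_negatives(anchor, negatives, k=2):
    """Return the k negatives most similar to the anchor.

    These are the 'hard' examples: wrong matches the model is most
    likely to confuse with the anchor, and hence the most informative
    ones to source when data is scarce.
    """
    ranked = sorted(negatives, key=lambda n: cosine(anchor, n), reverse=True)
    return ranked[:k]

anchor = [1.0, 0.0]
negatives = [[0.9, 0.1], [0.0, 1.0], [-1.0, 0.0]]
print(mine_hard_negatives(anchor, negatives, k=1))  # → [[0.9, 0.1]]
```

The same ranking idea applies to mental models: the “negatives” that most resemble your current beliefs while still being wrong are exactly the ones worth studying.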
Anyway, that’s a digression. A bunch of folks have asked me for some sort of update on what I’m reading and why, so I’ll attempt to periodically post interesting links on this Substack. I’ll make sure to start the post titles with “[Weekly Roundup]” in case you want to filter them in your email client.
Mailing list with AI news
This is an excellent mailing list that summarizes discussions across Reddit, Twitter, and elsewhere. I’d highly recommend subscribing to it!
Using o1+Cursor effectively to rapidly develop webapps
I’ve loved following this guy’s X account, as well as the Build in Public community on X. I’ve learned so much about prompting from watching these people creatively use LLMs to improve their workflows.
https://x.com/sebastianvolkis/status/1871718932756750401?s=46
A really great video on how you can put together a webapp rapidly using o1 and Cursor.
Video Generation
It’s kind of mind-blowing how much deep learning has changed in the last 10 years! This generated video of a cat cutting some lettuce blew me away!
Building with AI
https://x.com/leeerob/status/1869755457029476482?s=46
I really loved this thread on AI-first UX patterns. For my readers working at frontier labs, please check this stuff out, especially if you work on post-training! The rate at which developers can experiment with such experiences is bottlenecked on the models and their ability to follow increasingly sophisticated and diverse instructions to generate code, prose, etc. Ditto for in-context factuality.
Cognitive biases
I really liked this post by Amanda Askell, a moral philosopher at Anthropic who thinks about Claude’s personality. As I read it, I was reminded of another book I recently read, The Courage to Be Disliked by Ichiro Kishimi and Fumitake Koga. I’m not sure I’m entirely on board with the hard line that book takes on etiology vs. teleology. But it offers a very helpful provocation about what our conscious or unconscious “goals” are when we react to the various stimuli in our lives. That seems closely related to Amanda’s post, which provokes us to see past the “identities” we lazily construct for other people, identities that then elicit emotional reactions from us.
Dzogchen
https://medium.com/@vezhnick/heart-sutra-and-the-nyams-of-dzogchen-1da41e080751
This is an excellent post on some of the different meditations that are part of the Dzogchen path. In the last few months, I’ve grown more and more curious about Dzogchen. I’ve really loved the Evolving Ground community, and A Trackless Path by Ken McLeod totally changed the nature of my spiritual practice.