Co-Intelligence by Ethan Mollick - some quick thoughts
2 min read · By Jake Butler
- AI in different roles → he kind of anticipated the agentic AI thinking. It's helpful to think of AI agents this way, and to optimize them by giving each role its own rules. I think this is going to continue with smaller, more specially-trained models for the particular role/task they have (instead of one "God" model). See post → link to and add commentary about this post.
- “Alien Minds” → should it be “alien coding” instead of “vibe-coding”? “Alien-assisted coding”?
- A very practical guide for thinking more broadly about AI and its role with humanity. I still find it helpful to think of AI as an assistant requiring guidance and feedback, rather than as fully autonomous and independent. This shapes a lot of my personal philosophy around AI development: keeping humans in the loop is the simplest, most effective way to improve quality, security, and reliability, and to earn buy-in from your users.
- Assume this is the worst AI you'll ever use → so true. It's impossible to keep up with the rate of change. Sidenote: I read something that helped me manage information-intake overload → think of it as a stream you can dip in and out of, rather than a bucket needing to be emptied. More recent items will naturally get more of your attention, but you can still skim through historical items as you wish.