We review a lot of AI tools — and to do that well, we follow a small, carefully chosen set of voices who consistently publish accurate, useful, and honest analysis. This isn't an exhaustive directory. It's the list we'd send to a friend who wanted to stay genuinely informed without drowning in noise.
📺 YouTube Channels
For seeing tools in action before you commit to signing up, YouTube is unbeatable. These four channels collectively cover the full spectrum — from hands-on tool demos to deep technical explanations.
Matt Wolfe (@mreflow)
Matt is the most-followed independent AI tools reviewer on YouTube, with weekly roundups that cover every significant new release. His format is consistent, his opinions are honest, and his ability to publish quickly without sacrificing depth is genuinely impressive. If a new tool launched this week, Matt has almost certainly already tested it on camera.
Best for: Anyone who wants to stay current on new AI tools without spending hours reading release notes.
Wes Roth
Where Matt covers breadth, Wes Roth covers depth. His videos tend to be longer, more analytical breakdowns of what new AI models can actually do — comparing outputs, probing edge cases, and thinking out loud about implications. His coverage of image generation and video AI is particularly strong, going well beyond "here's the prompt, here's the result."
Best for: People who want to understand why a model behaves the way it does, not just what it produces.
Andrej Karpathy
A founding member of OpenAI and former director of AI at Tesla, Karpathy is one of the rare researchers who can explain neural networks at a deep technical level and still be watchable by non-experts. His videos are infrequent but important — when he posts, the AI community pays close attention. His explanations of how large language models actually work are the clearest available anywhere.
Best for: Curious non-engineers who want to understand what's actually happening under the hood of the tools they use.
Two Minute Papers
Academic AI research moves faster than almost anyone can track — Two Minute Papers bridges the gap. Each episode distills a recent research paper into an accessible, often genuinely exciting summary. It's not always about tools you can use today, but it gives you a reliable window into what capabilities are coming in the next 12–18 months.
Best for: People who want to understand the research frontier without having to wade through academic papers.
🎧 Podcasts
The best AI podcasts fall into three categories: long-form interviews with the people building this technology, fast-moving news analysis, and practical builder-focused discussions. We've picked at least one from each.
Lex Fridman Podcast
Lex Fridman's long-form conversations with AI researchers, founders, and builders are a genuine primary source. His interviews with Sam Altman, Elon Musk, and researchers from every major lab have produced some of the most candid on-record statements in the field. Episodes run 2–4 hours, which sounds daunting — but you can listen selectively to the interviews most relevant to your interests.
Best for: Getting direct, unfiltered perspectives from the people at the center of AI development.
Hard Fork (NYT)
Kevin Roose and Casey Newton are among the sharpest tech journalists working today, and Hard Fork is the podcast that covers AI's messy collision with society, business, and culture. Their weekly episodes cut through both hype and doom with smart, grounded analysis. If you want to understand AI's impact beyond the tools themselves — on jobs, media, politics, and daily life — this is essential listening.
Best for: Understanding AI's broader societal and business implications, not just the technology itself.
The AI Breakdown
Nathaniel Whittemore publishes a daily episode: a concise, well-sourced breakdown of the day's most significant AI developments. It's the closest thing to a daily briefing for people who want to stay current without following 50 different Twitter/X accounts. The format is tight — typically 10–20 minutes — which makes it easy to maintain the habit.
Best for: Staying current on AI news without dedicating large amounts of time to media consumption.
Lenny's Podcast
Lenny Rachitsky's podcast is primarily about product management and building great products — but in 2025 and 2026 it has increasingly become a go-to resource for how builders, founders, and PMs are actually integrating AI into their work. The guests are practitioners, not theorists, and the conversations are grounded in what's working right now in real products.
Best for: Founders, product managers, and builders who want practical AI adoption advice from peers.
All-In Podcast
Four tech investors — Chamath Palihapitiya, Jason Calacanis, David Sacks, and David Friedberg — talk through the week's biggest stories in tech, business, and policy. Their AI discussions are consistently illuminating, particularly on the investment, competitive dynamics, and business model questions that purely technical coverage often misses. They disagree with each other frequently, which makes for better analysis.
Best for: Understanding the business and investment dimensions of AI — where the money is going and why.
📧 Newsletters
Newsletters remain one of the best ways to get curated signal delivered directly to you. The ones below have all earned subscriber bases in the hundreds of thousands precisely because they consistently deliver value.
The Neuron
With over 400,000 subscribers, The Neuron has become one of the most-read daily AI news digests available. It's formatted for busy people — top stories, quick summaries, and links to go deeper if you want to. The curation is genuinely good: they filter for what's actually significant rather than just what's trending. We check it most mornings when writing our reviews.
Best for: Anyone who wants a reliable daily AI news digest that respects their time.
Ben's Bites
Ben Tossell's newsletter has become the go-to resource for discovering new AI tools before they hit mainstream coverage. The curation skews toward builders and product people — there's a strong emphasis on tools that are genuinely usable, not just interesting in theory. If you want to find out about new tools the week they launch rather than months later, Ben's Bites is where you'll hear about them first.
Best for: Builders, creators, and early adopters who want to discover new AI tools as soon as they launch.
TLDR AI
TLDR AI is for readers who want technical depth but not academic density. It covers new model releases, research papers, and significant product announcements with enough detail to be substantive but without requiring a computer science background to follow. It's one of the few AI newsletters that covers both the tools landscape and the research frontier in the same issue.
Best for: Technical readers who want to track both new tools and new research without reading the actual papers.
One Useful Thing by Ethan Mollick
Ethan Mollick's Substack is in a category of its own. As a Wharton professor who has spent years studying how AI actually changes the way people work and learn, Mollick writes practical, evidence-backed guides on using AI effectively — not "10 prompts to try," but genuine thinking about how AI changes expertise, creativity, and learning. It publishes less frequently than the other newsletters here, but every issue is worth reading carefully.
Best for: People who want thoughtful, research-grounded perspectives on how to actually integrate AI into professional and intellectual work.
🎓 Academics & Researchers Worth Following
A small number of researchers write accessibly enough — and honestly enough — to be genuinely useful for non-specialist readers. These three are the ones we return to most often.
Ethan Mollick — Wharton School
Mollick is the clearest practical writer on AI for non-researchers, full stop. His work focuses on how AI changes learning, creativity, and productivity in real organizations — grounded in experiments and evidence rather than speculation. His X/Twitter posts and Substack pieces consistently contain ideas you'll still be thinking about days later.
Best for: Educators, managers, and knowledge workers who want evidence-based guidance on working with AI.
Andrej Karpathy
Karpathy doesn't write frequently, but when he does — on X, in blog posts, or in lecture form — the quality is exceptional. His explanations of how neural networks learn, how LLMs actually process language, and what current AI systems genuinely can and cannot do are the most trustworthy technical explainers available to a general audience. His "Neural Networks: Zero to Hero" lecture series is the best free resource for understanding modern AI from first principles.
Best for: People who want to genuinely understand how the technology works, not just use the tools.
Yann LeCun — Meta AI
LeCun is one of the founding figures of modern deep learning and Meta's longtime chief AI scientist. He's also one of the most prominent critical voices in AI — deeply skeptical of both AGI timelines and the current dominance of transformer-based LLMs. Following him provides a valuable counterpoint to the hype. You may not always agree with him, but his arguments are rigorous and force clearer thinking about what AI currently does and does not do.
Best for: Anyone who wants to stress-test their assumptions about AI and hear from someone with the credentials and willingness to push back on consensus views.
📄 Official Blogs & Benchmarks
When we want to verify a claim about a model's capabilities, pricing, or safety approach, we go to the source. These are the primary sources we consult.
OpenAI Blog
OpenAI's blog is where major model releases, capability announcements, and safety research are published first. The writing quality has improved significantly — posts now typically include meaningful technical detail alongside the product announcements. For any claim about GPT-4o, GPT-5, or OpenAI's broader research direction, this is the primary source.
Best for: Verifying capabilities, pricing, and policies for OpenAI products directly from the source.
Anthropic Blog
Anthropic publishes some of the most thoughtful technical and safety-focused writing in the industry. Beyond product announcements for Claude, the research posts on Constitutional AI, model interpretability, and AI safety are worth reading for anyone interested in how the field's safety-focused lab approaches these problems. The model cards and system prompt documentation are also unusually detailed.
Best for: Understanding Claude's capabilities and Anthropic's safety research in depth.
Google DeepMind Blog
DeepMind remains one of the most productive AI research organizations in the world, and its blog covers both product releases (Gemini, AlphaFold, Veo) and the underlying research. The coverage of multimodal AI, protein folding, and scientific applications is especially strong — areas that get less coverage elsewhere. If you want to track what Google is building at the research level, this is the place.
Best for: Tracking Google's AI research and the Gemini model family from the source.
LMSYS Chatbot Arena
Chatbot Arena is the most credible independent benchmark for large language models. Built on a blind comparison framework — users rate responses without knowing which model produced them — it produces rankings that are harder to game than conventional benchmarks. When a company claims its model is "state of the art," the Arena leaderboard is where we check whether the real-world evidence supports that. It's not perfect, but it's the best we have.
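To see why blind pairwise votes are hard to game, it helps to sketch how such votes can be aggregated into a leaderboard. The minimal version below uses Elo-style rating updates (Arena's early leaderboards reported Elo scores; the live system now uses a Bradley–Terry model with confidence intervals, so treat this as an illustration of the idea, not their implementation — the model names are invented):

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(ratings: dict, winner: str, loser: str, k: float = 32.0) -> None:
    """Apply one blind-vote result: the winner gains exactly what the loser loses."""
    e_w = expected_score(ratings[winner], ratings[loser])
    ratings[winner] += k * (1 - e_w)
    ratings[loser] -= k * (1 - e_w)

# Hypothetical models, all starting from the same baseline rating.
ratings = {"model_a": 1000.0, "model_b": 1000.0, "model_c": 1000.0}

# Simulated votes: model_a keeps winning its head-to-head comparisons.
for winner, loser in [("model_a", "model_b"), ("model_a", "model_c"),
                      ("model_b", "model_c"), ("model_a", "model_b")]:
    update(ratings, winner, loser)

leaderboard = sorted(ratings, key=ratings.get, reverse=True)
print(leaderboard[0])  # → model_a
```

Because each vote only moves points between the two models compared, a lab can't inflate its rating by flooding the arena with easy prompts — it has to actually win blind matchups against strong opponents to climb.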
Best for: Getting an evidence-based, hard-to-game ranking of current model quality across tasks.
A note on this list
This page reflects our genuine reading list, not affiliate arrangements. None of the creators or organizations listed here have paid for inclusion. We update this page when our own habits change — if we stop finding a source consistently valuable, we'll remove it. If you think we're missing something important, let us know.