3 stories

ChatGPT Images 2.0 renders legible, human-like text in images, removing a cheap, high-signal forensic cue and forcing a shift from brittle pixel detectors to provable provenance and cryptographic watermarking.
The timeline: people are gleeful that image LLMs can now render the kind of spoof graphs and absurd page excerpts that used to be hand-drawn memes. The mood is playful and impressed, as if a milestone in style and capability has been passed, but a steady undercurrent reminds everyone these models are not flawless: stubborn editing, compositional glitches, and degraded control temper the hype. Overall: excited amusement + pragmatic caveats.
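For readers wondering what "provable provenance" means concretely: standards like C2PA/Content Credentials come down to cryptographically signing an asset's bytes at creation time, so any later edit is detectable by anyone holding the signer's public key. Here's a minimal sketch of that mechanism in Python using the cryptography package; the key handling and the stand-in image bytes are illustrative, not any vendor's actual pipeline:

```python
# Sketch of signature-based provenance, the mechanism behind C2PA-style
# Content Credentials. Names and data here are illustrative placeholders.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Publisher side: sign the exact bytes of the rendered image at creation.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

image_bytes = b"stand-in for the raw PNG bytes from the generator"
signature = private_key.sign(image_bytes)

# Verifier side: with the publisher's public key, anyone can confirm the
# bytes are untouched. Any re-encode, crop, or pixel edit breaks the check,
# which is the guarantee pixel-level forensic detectors can no longer offer.
try:
    public_key.verify(signature, image_bytes)
    print("provenance intact: bytes match what the publisher signed")
except InvalidSignature:
    print("bytes were modified after signing")
```

Cryptographic watermarking complements this: instead of a detached signature that breaks on any re-encode, the signal is embedded in the pixels themselves so it can survive benign transformations.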
There's an active, slightly anxious thread about vendor-provided 'thinking' features and whether developers can force or tune a model's internal deliberation via the API. People are excited when they find settings that work (adaptive thinking, effort overrides) and frustrated when previously available levers seem removed or inconsistent. The emotional tenor: eager experimentation + concern about losing control and reproducibility as platforms A/B test or iterate behind the scenes.
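For context on the levers in question: several vendor APIs expose a per-request deliberation knob. A hedged sketch using the Anthropic Python SDK's extended-thinking parameter (the model id is a placeholder; OpenAI-style APIs expose a similar idea as a reasoning-effort level rather than a token budget):

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Request extended thinking with an explicit token budget. This is the kind
# of per-request lever the thread is about: if the platform later caps,
# ignores, or removes it, previously reproducible behavior silently changes.
response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder; use a thinking-capable model
    max_tokens=2048,  # must exceed the thinking budget
    thinking={"type": "enabled", "budget_tokens": 1024},
    messages=[{"role": "user", "content": "Plan a migration from REST to gRPC."}],
)

# Thinking arrives as separate content blocks alongside the final answer.
for block in response.content:
    if block.type == "thinking":
        print("[thinking]", block.thinking[:200])
    elif block.type == "text":
        print(block.text)
```

The reproducibility worry is visible in the request shape itself: the knob lives in vendor-controlled parameters, so a server-side change to how the budget is honored alters behavior with no client-side diff.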
The community is broadly impressed that open-weight models like Kimi 2.6 are shrinking the gap with closed state-of-the-art systems. But enthusiasm is laced with skepticism: benchmark scores look great, yet hands-on usage exposes rough edges (inconsistency, creative limits, editing failures). Conversation centers on where the gap remains (robustness, qualitative judgment, stubborn editing) and how much weight to give benchmarks versus day-to-day experience.