Future of Coding Weekly 2025/08 Week 1
2025-08-03 23:31
🎥 Bubble Menu 💡 Enough AI copilots! We need AI HUDs 🎥 Vibe coding a choropleth map in Observable Notebooks 2.0
Two Minute Week
🗨️ maf: 🎥 Bubble Menu 🎮💻 AUTOMAT DEVLOG 13
🧵 conversation @ 2025-07-30
Some context for the context menu:
🎥 Bubble Menu 🎮💻 AUTOMAT DEVLOG 13
DevLog Together
🗨️ Ivan Reese:
🧵 conversation @ 2025-07-28
Spent the weekend playing with claude code for my first nontrivial thing.
It's pretty wild. It feels so much like working with a super green junior dev. A lot of the same coaching strategies seem to apply well. Also, I'd forgotten how it's kinda fun to teach good debugging practices!
One major difference – I tend to hit the context window limit after about an hour, and have to start up a new session (because compaction sucks), which means I (seemingly) need to do a bunch of coaching to make sure claude is always externalizing its "thought" process to files so that the next agent can pick up where it left off. This sucks, or I just haven't figured out a good foolproof way to do it.
The nontrivial thing – I'm making a new version of my depth camera, but using gaussian splatting. So I'm using this as an opportunity to:
* learn claude code
* learn gaussian splatting
* learn more swift
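(One common workaround for the hand-off problem Ivan describes – sketched here as a guess, not his actual setup – is to put standing instructions in the project's CLAUDE.md so the agent keeps a scratch file current. The file names below are purely illustrative.)

```
## Session hand-off (hypothetical CLAUDE.md excerpt)
- Keep NOTES/worklog.md updated as you work: current goal, approaches tried, what failed and why.
- When the context window is nearly full, or before ending a session, write a short
  "state of the world" summary to NOTES/handoff.md so a fresh session can resume from it.
- At the start of every session, read NOTES/handoff.md before doing anything else.
```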
🗨️ Eli:
🧵 conversation @ 2025-08-01
My 3rd act reveal is that the future of coding is actually awk
, skwak
Thinking Together
🗨️ Kartik Agaram:
🧵 conversation @ 2025-08-02
This might be premature, but I think I finally understand Dijkstra's approach to deriving programs from post-conditions in "A Discipline of Programming". I've had this book on my bookshelf for almost 20 years, never understood it but also never quite worked up the will to toss it out. (For context, I only own like a dozen books over the long term.)
Concretely, I've made it to the end of Chapter 7. I feel like I understand every bit up until this point.
Parts of Chapter 6 and 7 feel very sloppily written! And this is Dijkstra! So either my leaps of interpretation are only leaps because I'm missing something, or my sense of understanding is an illusion 🙂
Has anyone here made it this far and feel like they understood it? I'd love to talk to you.
Incidentally: I wouldn't have made it even on this, probably my 4th, attempt if it wasn't for LLMs. They're better than a rubber duck for talking things over with! It's amazing that they can all converse intelligently about the Dijkstra method, and all I need to do is mention wp or wdec. Or I know nothing and am incapable of judging anything about this book.
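(For readers who haven't opened the book: a minimal sketch of the weakest-precondition calculus Kartik is referring to. The wp notation and guarded commands are Dijkstra's; the concrete example below is only an illustration, not something from the thread.)

```latex
% wp(S, R) is the weakest precondition guaranteeing that statement S
% terminates in a state satisfying post-condition R.
% Assignment rule: substitute the assigned expression for the variable in R.
wp(x := x + 1,\; x \le 10) \;\equiv\; (x + 1 \le 10) \;\equiv\; (x \le 9)

% Derivation runs backwards from the post-condition: to establish
% R \equiv (m = \max(a, b)), choose guarded commands whose guards
% imply the wp of their bodies.
\mathbf{if}\ a \ge b \rightarrow m := a \;\;[]\;\; b \ge a \rightarrow m := b\ \mathbf{fi}
```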
🗨️ Mihai:
🧵 conversation @ 2025-08-02
Seems like the message from the Better Software Conference is that the future of programming should be: simple, low level (aka fast), imperative, data-oriented (not OOP) coding. I kind of like it… Started working in C again for some personal projects, and I enjoy it.
Linking Together
🗨️ Mariano Guerra: 🎥 Vibe coding a choropleth map in Observable Notebooks 2.0
🧵 conversation @ 2025-07-29
Observable Notebooks 2.0 Technology Preview
🎥 Vibe coding a choropleth map in Observable Notebooks 2.0
🗨️ Jasmine Otto:
🧵 conversation @ 2025-07-31
Pipeline synthesis via domain-specific visual programming. Obviously they're over-claiming ('generate any stable, isolable molecule') as chemists, but as a PL person this sounds great! If you're a constructivist getting into assembly theory, I think this was written for us.
Linking this framework to assembly theory strengthens the definition of a molecule by demanding practical synthesizability, and error correction becomes a prerequisite for universality.
Making digital chemistry truly universal for chemputation (digital control of chemistry) required a new approach to hardware, wetware, & software with the development of XDL. https://www.science.org/doi/10.1126/science.aav2211 Now the theory showing universality is done. https://arxiv.org/abs/2408.09171
🗨️ Ivan Reese:
🧵 conversation @ 2025-08-02
sapphirepunk – an alternative to cypherpunk, via Christopher Shank
🗨️ misha:
🧵 conversation @ 2025-08-02
https://www.reddit.com/r/nosyntax – links and demos of structural/visual code editors: https://github.com/yairchu/awesome-structure-editors/blob/main/README.md
🗨️ Denny Vrandečić: 📝 Gallery of programming UIs
🧵 conversation @ 2025-08-02
That links to the Gallery of programming UIs, but the link there seems dead (and the Internet Archive is no help): https://alarmingdevelopment.org/?p=1068 --- does anyone have a copy?
📝 Gallery of programming UIs
I've assembled a gallery of notable/interesting user interfaces for programming, as inspiration for the next Subtext. [Google Slides]
🗨️ Jack Rusher:
🧵 conversation @ 2025-08-03
https://blog.brownplt.org/2025/08/03/paralegal.html Something for our community of computational law enthusiasts. 🙂
AI
🗨️ Nilesh Trivedi: 💡 Enough AI copilots! We need AI HUDs
🧵 conversation @ 2025-07-28
🗨️ Tom Larkworthy: 📝 DSPy
🧵 conversation @ 2025-08-03
Just came across https://dspy.ai/ while researching GEPA. Seems to be a very flexible and programmable "LLMs as code" runtime. Sort of a functional abstraction over LLMs. It's got some very good credentials using it, and it allows things like optimising the prompt.
The framework for programming – rather than prompting – language models.
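(For a taste of the "programming, not prompting" idea, here is roughly what a DSPy hello-world looks like. The model name and configuration are assumptions for illustration, not part of Tom's post.)

```python
import dspy

# Assumes an OpenAI-compatible API key in the environment; the model name is illustrative.
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

# Instead of hand-writing a prompt, declare the module's input -> output signature;
# DSPy builds and can later optimise the actual prompt behind the scenes.
qa = dspy.ChainOfThought("question -> answer")

result = qa(question="What problem does prompt optimisation solve?")
print(result.answer)
```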
📝 GEPA: Reflective Prompt Evolution Can Outperform Reinforcement Learning
Large language models (LLMs) are increasingly adapted to downstream tasks via reinforcement learning (RL) methods like Group Relative Policy Optimization (GRPO), which often require thousands of rollouts to learn new tasks. We argue that the interpretable nature of language can often provide a much richer learning medium for LLMs, compared with policy gradients derived from sparse, scalar rewards. To test this, we introduce GEPA (Genetic-Pareto), a prompt optimizer that thoroughly incorporates natural language reflection to learn high-level rules from trial and error. Given any AI system containing one or more LLM prompts, GEPA samples system-level trajectories (e.g., reasoning, tool calls, and tool outputs) and reflects on them in natural language to diagnose problems, propose and test prompt updates, and combine complementary lessons from the Pareto frontier of its own attempts. As a result of GEPA's design, it can often turn even just a few rollouts into a large quality gain. Across four tasks, GEPA outperforms GRPO by 10% on average and by up to 20%, while using up to 35x fewer rollouts. GEPA also outperforms the leading prompt optimizer, MIPROv2, by over 10% across two LLMs, and demonstrates promising results as an inference-time search strategy for code optimization.
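(The abstract compresses a fairly concrete loop. A structure-only sketch of it, where every helper is a hypothetical placeholder rather than the paper's actual API, might look like this:)

```python
import random
from typing import Callable, List

def gepa_sketch(seed_prompt: str, tasks: list,
                run_system: Callable,  # (prompt, task) -> trajectory text (reasoning, tool calls, output)
                reflect: Callable,     # (prompt, trajectories) -> natural-language critique
                mutate: Callable,      # (prompt, critique) -> new candidate prompt
                score: Callable,       # (prompt, task) -> float
                budget: int = 20) -> List[str]:
    """Structure-only sketch of reflective prompt evolution; not the real GEPA code."""
    candidates = [seed_prompt]
    for _ in range(budget):
        # Pareto frontier: keep every candidate that is best on at least one task,
        # so complementary lessons survive rather than a single average winner.
        best = {t: max(score(p, t) for p in candidates) for t in tasks}
        frontier = [p for p in candidates if any(score(p, t) >= best[t] for t in tasks)]
        parent = random.choice(frontier)
        trajectories = [run_system(parent, t) for t in tasks]  # 1. sample rollouts
        critique = reflect(parent, trajectories)               # 2. reflect in natural language
        candidates.append(mutate(parent, critique))            # 3. propose a prompt update
    return candidates
```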
👨🏽‍💻 By 🐘 @marianoguerra@hachyderm.io 🐦 @warianoguerra
💬 Not a member yet? Check the Future of Coding Community
✉️ Not subscribed yet? Subscribe to the Newsletter / Archive / RSS
🎙️ Prefer podcasts? check the Future of Coding Podcast