I was working on a custom element today that replaces a textarea with CodeMirror in the UI while still updating the textarea in the background, so the value can still be submitted with its form. Along the way I ran across a wild footgun in custom elements.
Semantic versioning is difficult. Not everyone has the same idea of what “breaking change” means, and depending on your language and tooling, “breaking” changes can sneak in despite your best efforts.
One possible mitigation is to record when you resolved your dependencies and ignore anything published after that date. It's crude, and it relies on package maintainers not monkeying with releases that have already been published, but it's easy to understand, and uv supports it via its exclude-newer setting.
This use case seems particularly valuable: if I write a quick script, I probably care more about it not breaking than about it getting updated dependencies. It's nice that uv offers a way to keep code running rather than forcing me to update it or throw it out.
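For a standalone script, the setting can ride along in PEP 723 inline metadata, so the cutoff date travels with the file. A minimal sketch (the dependency and the timestamp are made up for illustration):

```python
# /// script
# dependencies = ["requests"]
#
# [tool.uv]
# exclude-newer = "2024-12-01T00:00:00Z"  # ignore anything published after this
# ///
import requests

# With the metadata above, `uv run` resolves dependencies as if nothing
# newer than the timestamp exists on PyPI.
print(requests.get("https://example.com").status_code)
```

There's also an `--exclude-newer` flag on the command line for one-off invocations, but embedding it in the script means anyone who runs the file later gets the same resolution.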
The talk itself was very entertaining, with musical interludes mashing up “the Beatles” and “Craigslist ads for vehicles”, and while the algorithms were (by Jamie's own admission) pretty straightforward, there was a lot of room for expression in finding good corpora, combining them, and finding fun ways to apply them.
It was an excellent reminder to me that there's an entire world of stuff you can do with computers that isn't commercial, isn't “hard tech” or “cutting edge”, but is nevertheless utterly delightful.
I recently had to update a Django service to use S3 instead of local file storage. Here's how I set up MinIO (a self-hostable, S3-compatible object store) for local development.
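The gist of the settings side, as a minimal sketch: point django-storages' S3 backend at the local MinIO endpoint instead of real AWS. This assumes `django-storages` and `boto3` are installed and MinIO is running on port 9000; the bucket name is a placeholder, and the credentials are MinIO's defaults.

```python
# settings/dev.py -- route Django file storage to a local MinIO instance
# via django-storages' S3 backend.
DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"

AWS_S3_ENDPOINT_URL = "http://localhost:9000"  # MinIO, not real AWS
AWS_ACCESS_KEY_ID = "minioadmin"               # MinIO's default credentials
AWS_SECRET_ACCESS_KEY = "minioadmin"
AWS_STORAGE_BUCKET_NAME = "django-media"       # placeholder bucket name
AWS_S3_USE_SSL = False                         # plain HTTP locally
```

On Django 4.2+ you'd express the same thing through the `STORAGES` setting, but the endpoint-override trick is identical: application code keeps using the normal storage API and never knows it isn't talking to AWS.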
I was checking out the Model Context Protocol, a spec from Anthropic that lets you expose external programs to an LLM, and discovered that you can extend the Claude desktop app this way. There are a bunch of enterprisey tools (and you can write your own, more interesting stuff), but out of the box you can enable URL-fetching, which lets you grab and process arbitrary information with Claude.
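To sketch the “write your own” side: the official MCP Python SDK (the `mcp` package) includes a `FastMCP` helper that turns a decorated function into a tool the model can call. The server name and tool below are made up for illustration.

```python
# A toy MCP server using the official Python SDK (pip install mcp).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("word-tools")  # hypothetical server name

@mcp.tool()
def count_words(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default
```

Once the desktop app's config lists this script as a server, Claude can discover and invoke `count_words` on its own during a conversation.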
I just released version 0.1 of llm-questioncache, a plugin for Simon Willison's llm. It lets you send questions to your default LLM with a system prompt that elicits short, to-the-point answers, and it maintains a local cache of answers so that you only have to hit the LLM once for each bit of esoteric knowledge.
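The core idea is just content-addressed memoization. Here's an illustrative sketch of that idea using llm's Python API (not the plugin's actual code; the cache path and system prompt are made up):

```python
# Illustrative sketch of the caching idea -- not llm-questioncache's real code.
import hashlib
import json
from pathlib import Path

import llm  # Simon Willison's llm library

CACHE_PATH = Path.home() / ".question-cache.json"  # hypothetical location
SYSTEM = "Answer in one or two short sentences."   # hypothetical prompt

def ask(question: str) -> str:
    cache = json.loads(CACHE_PATH.read_text()) if CACHE_PATH.exists() else {}
    key = hashlib.sha256(question.encode()).hexdigest()
    if key not in cache:  # only hit the model on a cache miss
        response = llm.get_model().prompt(question, system=SYSTEM)
        cache[key] = response.text()
        CACHE_PATH.write_text(json.dumps(cache, indent=2))
    return cache[key]

print(ask("What does the 'x' in xargs stand for?"))
```

Asking the same question twice reads straight from the JSON file the second time, which is exactly the behavior you want for stable bits of trivia.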