Remy Sharp
Remy Sharp is a prominent front-end developer, author, and speaker who is well known for his expertise in JavaScript, HTML, CSS, and related technologies. He has contributed to numerous open source projects and is the author of several popular web development books. As a speaker, Remy has given talks at conferences around the world, sharing his knowledge and insights with fellow developers.
Check out the latest blog posts from Remy Sharp below.
JS Bin down in 2026
02 February 2026
January 27th I got an email notification saying that JS Bin had become unavailable. Then the next day real-life human beings were asking what's going on. By 11pm on the 30th the last of the issues were resolved. Earlier today Jake asked me: what went wrong? Fucking, everything.
Bytes I can delete after all this time
13 January 2026
For the last few years my work-work has mostly focused on back end software (particularly around APIs). This meant that any front end work I was doing was for myself. Being a long-in-the-tooth old dog, I tend to learn a trick and roll it out again and again, typically without taking the time to find out whether I still need the trick. Case in point: I learnt about the JavaScript performance trick of ~~1.4 === 1 to floor a value (and the same with float | 0), but really these days it's not "faster" than doing it the legible way (i.e. Math.floor(1.4)). Given I've had a bit of time away from the backend, here's an unorganised list of things I've found I can use, and thus remove extra code that I no longer need.
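As a sketch of the trick in question (example values mine, not from the post), the double bitwise NOT and `| 0` truncate toward zero rather than floor, so they only agree with `Math.floor` for positive values that fit in 32 bits:

```javascript
// ~~ and | 0 coerce a float to a 32-bit integer, truncating toward zero.
console.log(~~1.4 === 1); // true
console.log((1.4 | 0) === 1); // true

// The legible modern equivalent:
console.log(Math.floor(1.4) === 1); // true

// But the tricks are not a drop-in replacement for Math.floor:
console.log(~~-1.4); // -1 (truncates toward zero)
console.log(Math.floor(-1.4)); // -2 (true floor)
console.log(~~2147483648); // -2147483648 (wraps at 32 bits)
```

Modern engines optimise `Math.floor` heavily, which is part of why the legible version no longer loses the race.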
Books I read in 2025
01 January 2026
This post is mostly data driven (from my own web site's data) to give me a sense of the quality of the books I've read; otherwise individual reviews are all linked in this post or available on my books page.

Longest book: Butter - 464 pages
Shortest book: The Time Machine - 80 pages
Quickest read: 3 days - The Radleys by Matt Haig (336 pages)
Longest read/slog: 2 months each:
- Butter by Asako Yuzuki (464 pages)
- Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI by Karen Hao (496 pages)
- The Left Hand of Darkness by Ursula K. Le Guin (290 pages)

Diversity of authors: Women: 9, Men: 5

Rated books:
5 stars:
- Minority Rule: Adventures in the Culture War - 319 pages
4 stars:
- Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI - 496 pages
- In Bloom - 432 pages
- Sweetpea - 368 pages
- The Radleys - 336 pages
- The Man Who Died Twice - 422 pages
- The Echo Wife - 240 pages

Books by decade:
1890s:
- 1895: The Time Machine by H.G. Wells
1960s:
- 1969: The Left Hand of Darkness by Ursula K. Le Guin
2010s:
- 2010: The Radleys by Matt Haig
- 2015: Sweetpea by C.J. Skuse
- 2018: In Bloom by C.J. Skuse
- 2019: Reasons to Be Cheerful by Nina Stibbe
2020s:
- 2020: Butter by Asako Yuzuki
- 2020: The Man Who Died Twice by Richard Osman
- 2021: The Echo Wife by Sarah Gailey
- 2022: The Satsuma Complex by Bob Mortimer
- 2022: Spike Milligan: Man of Letters by Spike Milligan
- 2023: Making a Killing by Cara Hunter
- 2024: Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI by Karen Hao
- 2025: Minority Rule: Adventures in the Culture War by Ash Sarkar
My 2025
31 December 2025
I've been doing these posts at the end of each year, aiming to publish on the 31st, so I'm pleased that I've managed to get this post out the door. Mostly for my own reading, but perhaps yours too.
An opportunity to learn: Advent of Code
03 December 2025
I've written about Advent of Code in the past, but that was 5 years ago, so this warrants a new post, and there's an extra opportunity, I think.
Handing over to the AI for a day
29 November 2025
Context: back in March 2025 I decided to put aside my scepticism and try AI driven development for the day. I appreciate that in 8 months, the AI landscape, particularly around agentic software dev, has moved along, and perhaps this should have been posted originally back in March. All the same, maybe this is useful to some degree, if only to capture what it was like at the time. Whilst I sit squarely in my AI-sceptic seat, I was recently prompted to try a different tack by two posts I read a few weeks back. The first was Bruce's colonoscopy post, yes, that one. It was in fact that he was using a local LLM to create a generative image to commemorate his visit. I'd been using chatgpt (for code and electronics, basically an NLP version of a search engine), and hadn't considered that perhaps I could self-host to take some responsibility for one of the two problematic aspects of LLMs today (the first being power consumption, the second being the global theft). The second post was Simon Willison's post on how he created a mini tool (the post is much broader than that, a useful read). It wasn't the post so much as the tool he was using: Claude Code. So far with chatgpt I'd copy errors and tweaks back and forth when it was helping me. Although I also have copilot enabled in VS Code, it really boils down to autocomplete at a pretty junior level. I'll often have typescript errors (for work) that copilot claims it can help fix, only resulting in even more typescript errors - so as a rule, I tend to avoid generating code fixes with copilot. But what Simon showed, with a shared transcript between himself and Claude Code, was the software making the changes and offering diffs. So that was what prompted a mini journey, and here's how it went.

Offline LLMs

Previously I had installed command line tools and even Ollama on my Mac without having the faintest idea how to use them effectively - so they sat idle and unused.
I'm not sure how, but I came across Msty, a tool that purported to make using local LLMs very easy. For a change, that seems to be true. Since I still had Ollama running (though I should probably have ejected it), Msty quickly linked up to it and discovered (though I'd forgotten) that the DeepSeek R1 model was already loaded. I figured since I was primarily using LLMs for software development, it made sense to find a model that suited those requirements. Not quite that simple, or certainly not if you (i.e. me) don't have the LLM lingo and terminology down. I did some searching online and found a few articles, which themselves read as if they had been generated by AI, offering feature reviews of LLM models, but I couldn't find anything that said "model X is excellent for Y software development". Additionally, I found some models on huggingface.co, and figured that 7B was going to be easier/more viable to run than a 70B (big numbers need big machines? I was guessing…). However, putting the name of the model into Msty yielded a lot of results and I couldn't really tell you what the difference was. More importantly, I couldn't quite be sure of the provenance (and whether that even matters… I'd assume it did). I ended up installing the following models:

- llama 3.2 (2gb)
- deepseek coder v2 (which blew up)
- codellama (3.8gb)
- codellama 13b
- WizardCoder (I think the 3gb one?) (it just didn't reply to the query)
- Wavecoder-ultra-6.7b (literally never worked for me)

I tried a number of technical problems without much success, and did capture the output for a number of these models trying to solve a Home Assistant automation process (which you can see here). It's entirely likely there are better models out there for what I wanted to do. I did know I didn't need a model to know everything, so some focus (and thus: reduced size) would make sense. However, not one of these gave useful replies to my questions.
My favourite (fail) was giving code-llama a question that read: this query gets all the lessons matching these lesson slugs (I gave it the actual SQL query), how do I get a list of all the slugs that didn't match?. The LLM happily reported that I was trying to learn Spanish: The provided code contains a list of sentences that are written in Spanish, and the goal is to translate them into English. However, there are some errors in the provided code that need to be fixed before it can function properly. Here are some issues with the current code: Missing curly braces around the translate function definition: The translate function should be defined as a separate named function expression, like this: { translate } = require("google-translate");. This will ensure that the function is properly closed and can be used in other parts of the code. For some reason it was using the slugs in the query as the most important part and completely ignoring the technical query. For the most part, the responses I got were fairly hand wavy, text heavy (which I didn't want since I was asking about code), and in most cases irrelevant to my task. I think em0ry42 on BlueSky sums up what I was seeing: Smaller self hosted models will always under perform the larger ones. I think your experience and those of the other commenters are consistent with the current reality of these technologies. They can't do what people are promising. Yet. I'm sure there are people who can tune the hell out of their setup, but sadly, running any decent LLM locally as a useful code assistant, is just not here for the rest of us. So I parked that for a while, and turned my attention to Claude Code. Coding without touching code I've no clue how new Claude Code was at the time, though I've gathered it's fairly new. It's a solid product from my experience (where I even managed to lose track as to which company owns which weirdly named AI thingy). 
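For the record, the question the LLM whiffed has a one-line answer once the query results are in hand. A hedged JavaScript sketch, with made-up slugs and rows standing in for the real lesson data:

```javascript
// Hypothetical inputs: the slugs we queried for, and the rows the SQL query returned.
const slugs = ['intro', 'loops', 'missing-lesson'];
const rows = [{ slug: 'intro' }, { slug: 'loops' }];

// Collect the slugs that came back, then keep only the ones that didn't.
const matched = new Set(rows.map((row) => row.slug));
const unmatched = slugs.filter((slug) => !matched.has(slug));

console.log(unmatched); // ['missing-lesson']
```

A Set keeps the lookup O(1) per slug, which matters if you're checking thousands of them.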
Setup and interface is entirely on the command line, so already we're speaking my language. I'd seen demos of developers who've been able to join up their entire codebase to the LLM, but each time I'd dabbled, I would quickly get lost and give up. Claude Code does exactly this without the walls I'd experienced in the past. I am, however, acutely wary that Claude is running on a remote machine, and likely to be chonking through so much power that we're just throwing away water to keep machines from burning up. Let's stick a pin in that (and gosh, I loathe myself already for that). The very first problem I wanted it to solve was where I was trying to download 1,000s of videos that all needed to be added to one massive tarball (context: this is for work, to allow users to bulk download our assets). I'd hit a problem where the tar process kept throwing an unhelpful exception the evening before, and no amount of documentation on the library I was using helped me. Overnight I had a suspicion as to the cause, and it gave me an idea to try - but I thought I'd let Claude try first, to see what it did. Without any specific direction (i.e. my idea for the fix), and given only the name of the file and the function where the problem happened, Claude Code suggested the same solution I had in my head. The UI then offered a syntax highlighted diff of the change it wanted to commit to disk. I was able to review it (very much how I'd approach a code review) and all I then needed to do was hit enter to accept. I tested the code in a separate terminal and indeed the change worked. Given this positive start, I then spent most of the working time split between the Claude Code UI and the terminal running the main program (which was sequencing a very large dataset). The code changes were for the most part good, and code that I accepted. The experience was… weird.
I'd heard of LLMs being referred to as junior developers, but when I was going back and forth between chatgpt and vscode (again, for me, copilot never really came in useful), because of the amount of interaction that was required from me, it felt even less than working with a junior. But this was a much closer experience. I'd describe the change and logic, sometimes pointing to filenames that would offer useful context, and Claude would spend some time (and money) thinking, then it would ask me to check a diff. Weirdly, I spent more time sitting and staring out the window waiting for code to come back than I did looking at code. It was a weirdly hands-off experience. I can't tell where I sit on that. The main criticism I had is that, because we use specific rules for typescript (no any, and types are defined, which I think seems okay), Claude wouldn't really follow those strict rules, so I needed to go in at the end to clean that part up. The secondary criticism is more a matter of taste: the code (and logging) was more verbose than I'd like. Additionally, being outside the code for the majority of the work period felt really strange. Sort of like a self-driving car taking me through most of my journey, deciding the navigation itself, with me only needed for the final arrival through some tight country lanes. Or something!

A cost

Since I was freestyling my way through Claude Code, I did manage to rattle through $5 of credit. I did think this was (somehow) linked to my Google business account, but I'm now suspecting it was free credit to introduce me to their API. After running through this credit now twice (I switched to my personal account for a second run), I've discovered there are tools to help manage that sprawling cost (such as /compact and /clear to reduce how much context the LLM is fed before giving me a result). I'd like to play with this more to get an idea of how much I'm really prepared to pay.
Also, after writing (most of) this post, I came across an interesting project that takes the Claude UI and lets you connect up your own backend. I've not tried it yet, but I'd be interested to see if I can connect to a local LLM and try out the results (though going by the current experience, it's going to have a hard time competing).

Since it was conversational…

I decided to hack together a simple keyboard keycap (I had a spare) with an ESP32 board to emulate a keyboard. This would send a (fairly) unique keycode that launched a python command, which started a whisper based script that let me talk, then pasted the text into whatever was focused. This meant I had: press the button, say the thing, press the button, wait for it to be done. It wasn't great because it was a little clunky, but it definitely felt futuristic!

How I felt afterwards

(I'm now writing this 8 months late, but I remember how it felt on the day.) Even though I was surprised at the progress of the work - both for how terrible the local code solving was and how impressed I was with Claude Code - it did leave me with a feeling of disconnect. There's certainly the issue of the maintainability of pure vibe-coded software, but this was something more. There's a creative input that I put into my coding process. A sense of purpose and achievement in solving some complicated problem, or writing a line of code that I'm particularly pleased with. There wasn't really any of that feeling of connection with the output. Having written this retrospectively, I know that my perspective has changed somewhat, but I do remember having this weird dissonance between the outcome and the experience of getting there.
FFConf 2025
22 November 2025
I've been wanting to write and share my experience of this year's event but a number of things have slowed me down - not least of all that it was Julie's birthday the following Tuesday (the first year her birthday was entirely swallowed by the event). So now as I sit writing this a full eight days later - sat on the side of the swimming pool as many kids, including my own, do their swimming lessons - I'm trying to collect my thoughts on the day.
Syntax Highlighting in Web Component Templates
12 November 2025
A simple but effective fix for working with web components in VS Code. I wanted to get syntax highlighting and prettier support (to auto fix indenting, quotes, etc) in my component's templates. The extremely quick read is: add /* HTML */ to the front of the template literal. It's case sensitive and space sensitive (though hopefully one day it won't be so strict). Now highlighting and prettier (with save and fix) work. Note that you need the es6-string-html VS Code extension for this to highlight correctly (something I had forgotten I had installed).
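A minimal sketch of the trick, using a hypothetical component template (the comment must be exactly /* HTML */ for the es6-string-html extension to pick it up):

```javascript
// The /* HTML */ hint before the template literal tells the es6-string-html
// VS Code extension (and prettier) to treat the string contents as HTML,
// enabling syntax highlighting and format-on-save inside the template.
const template = /* HTML */ `
  <my-component>
    <p>Hello from inside a highlighted template</p>
  </my-component>
`;

console.log(template.includes('<my-component>')); // true
```

The comment is a no-op at runtime, so the trick costs nothing in the shipped code.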
How has FFConf changed since it was known as Full Frontal?
27 October 2025
Besides the name, the entire core foundation has changed. This question was asked recently by someone who had either attended back in the early days or who knew of our event, but who (understandably) saw it as it was in 2009. I think the vibes you came away with at the 2009 and 2010 events would still be recognised, but the content and the core messaging have changed quite significantly.
Signal Pollution
25 September 2025
Very recently I was forced to sign up to Meta due to a product purchase (don't at-me!) and I had forgotten what it was like to be part of the algorithms. Our entire family browses the internet (the web and the wider internet) from behind a DNS proxy that blocks a lot of social media, including Facebook/Meta/Insta/whatever it's actually called.
Fifteen
30 August 2025
I'd been waiting for the grief to find me. I wasn't actively looking for it, I know which memories to poke to feel real pain, but I wanted to create space for it to find me, and throughout the month of August, this year, it couldn't find me. Until today. 30th August. This day is the knife edge. The day, 15 years ago, that Tia still had a heartbeat, still kicked, was on her way. Julie was in (long) labour. On this same day, at some point, her heart gave out, she died before she could take her first breath and the midwives had to tell us that they couldn't find that heartbeat any more. Julie was in labour, so Tia was coming. Except that her delivery at 3am on 31st August would be the other side of our lives. The side we live on today: our derailed and rebuilt lives that exist in now.
Getting my highlights & notes from KOReader
22 July 2025
It's not an intuitive process and requires a few speciality commands to work, so it made sense that I write up the process so I can duck myself later on.
Vibe coding and Robocop
18 July 2025
I've been immersing myself in AI news for the last few months, trying to get an understanding of the landscape: what the excitement is about and why, whether the ethics (both from a copyright and a climate change perspective) are being considered, and what the impact of my own usage is. In particular, lately I've been looking at the idea of vibe coding. I've been playing with Claude Code which, as an experienced developer, is hard not to like (at the time of writing!). The short version of what I want to say is: vibe coding seems to live very squarely in the land of prototypes and toys. Promoting software that's been built entirely using this method would be akin to sending a hacked weekend prototype to production and expecting it to be stable.
Unhooking from Amazon ebooks
29 June 2025
Over the years we, as a family, have been moving our purchases away from Amazon, except in one single place: Kindle ebooks. For me it's that I'm incapable of reading physical books (but my kindle unlocked my reading), and with a Kindle, I was limited as to where I could buy my books. When I read that it was relatively easy to jailbreak all the Kindle models, I used this as my opportunity to move to buying epub books, in the hope that more of that money goes to the authors (in an ideal world…). Here's how it's going so far. In short: not quite as well as I'd like.
AI: did you check your work?
31 May 2025
There's no denying that in the web industry, as in many others, AI and LLMs are a ubiquitous presence. There are all kinds of different uses for LLMs, and each comes with all the ethical concerns - whether ignored or at the heart of your use. More recently, "vibe coding" has me… wary.
Showing book clippings on my blog
01 May 2025
After jailbreaking my Kindle and seeing how simple it was and how all existing functionality was retained, I spotted that there was a My Clippings.txt file on the Kindle when mounted (I'm sure it's always there, I just hadn't mounted it before). This prompted me to get all my clippings (or, as I think of them, highlights) onto my blog, since I already have all my books there.
How I made an LED driver smart…
18 April 2025
…by just a little. Our family bathroom has a cabinet with an IR sensor that turns on LEDs in the side of the cabinet. The IR sensor is "wave your hand under the cabinet" and the lights go on or off. The littlest uses this light (instead of the overhead light, which is connected to the extractor fan - i.e. loud) as a night light. Since it's usually left on at night, I wanted to give the LEDs some smarts so that when I go to bed, they automatically turn off with my blanket "turn all the lights out" home assistant command.
Do politics belong at web events?
07 April 2025
This question has been asked before and discussed before and I've always looked on from the sidelines, even though, as a conference organiser, I do in fact have fairly strong opinions about this.
The day piracy changed
22 March 2025
It certainly wasn't today. It was some time ago, but I wanted to mark this in my blog as a reminder that once, long ago, piracy was, well, stealing. That's all changed now.
Devs: draw your line
08 March 2025
This post is for my developers out there, web and otherwise. We have super powers. We can make something functional from practically nothing. And you know what they say about great power… So this is short and sweet: know where you draw the line and stick to your god damn guns.