

  1. Life Altering Postgresql Patterns
  2. HYTRADBOI 2025
  3. Actual LLM agents are coming | Vintage Data

  1. March 31, 2025
    1. 🔗 Evan Schwartz Scour - March Update rss

      Hi friends,

      In March, Scour scoured 276,710 posts from 2,872 sources, you gave lots of good suggestions, and we've got some new features that I hope you'll like. We also have quite a few new users, so welcome to everyone who recently signed up!

      Also, I gave the first presentation about Scour and wrote a blog post about it called Building a fast website with the MASH stack in Rust.

      Likes and Dislikes

      The big feature this month is likes and dislikes. Next to each post in your feed you'll find the classic thumbs up and thumbs down buttons, which do what you might expect.

      You can find all of your likes here and you can use that to save posts for later. Likes are also used to help recommend topics you might be interested in.

      (Personally, I've found that I sometimes want to "love" a post, particularly when Scour finds me a gem that I'm especially glad to have found and might not have seen otherwise. If you'd also be interested in that, please let me know by upvoting that idea here: Extra-like reaction (❤️ / 💎).)

      Everything is a Feed!

      To go along with the Likes feature, you can now see Popular Posts from across Scour and you can see what specific users have liked as well. And, naturally, all of those are feeds you can subscribe to — just visit the Popular Posts or a user's Likes page and click the Subscribe link.

      Show Feeds for a Post

      Under each post in your feed, you can now find a Show Feeds link that shows which feeds that post appeared in. Feeds you are subscribed to appear first (so you can unsubscribe if you want), followed by others that you could subscribe to if you want more content from that source. Thanks to Vahe Hovannisyan for the suggestion!

      UX Improvements and Bug Fixes

      • New sign-up flow that should be clearer and easier for new users. Thanks Vahe for talking through your experience as you were signing up 🫣!
      • You can now log in with your email in addition to your username, which should help in case you forget your username. Thanks Allen for the idea!
      • Scour now deduplicates feed subscriptions so you won't see different versions of HN Newest appearing for the RSS and Atom feeds. Thanks Vahe for the feedback!
      • The UX for editing interests works better on mobile. Thanks to Cairin Michie for the feedback!
      • You can get an RSS version of your personalized feed with posts sourced from all feeds across Scour by appending ?all_feeds=true to your RSS link. Thanks Matt for the suggestion!
      • I fixed two bugs where posts wouldn't show up for users who hadn't subscribed to specific feeds, both in the RSS version of the feed and on the Search page. Thanks Dizzar and Joe for reporting these bugs!
      • The UX for unsubscribing from feeds is clearer (an X button rather than a switch). Thanks Vahe for the feedback!

      Thanks again to everyone who submitted ideas, feedback, and bug reports! They help me figure out what to work on, so please try out these new features and let me know what else you'd like to see next!

      Happy scouring! - Evan

      P.S. Three posts I was especially happy to find through Scour this month were:

    2. 🔗 Cryptography & Security Newsletter Mozilla Fixes Certificate Revocation Checking rss

      You may recall from our January 2025 newsletter, which was dedicated to the demise of OCSP revocation checking (The Slow Death of OCSP), that Let’s Encrypt is planning to stop supporting OCSP in early May—only one month from now. Let’s Encrypt is the leading CA in terms of issued certificates, so its withdrawal from OCSP creates a problem for user agents that still rely on this method of revocation checking. This impending deadline may have spurred one such agent—Mozilla—to complete the outstanding work required to replace OCSP with a novel solution called CRLite.

    3. 🔗 @HexRaysSA@infosec.exchange We have 2 special promos for our IDA friends! mastodon

      We have 2 special promos for our IDA friends!

      🚀 Special Promo #1: Discounted License + Training Packages* SAVE 40%-55%!
      https://eu1.hubs.ly/H0hNc6W0

      🎁 Special Promo #2: Buy 1 Training Seat, Get 1 for FREE!*
      https://eu1.hubs.ly/H0hNc860

      Some restrictions apply; please review the eligibility requirements.
      Limited availability; grab it while you can.

    4. 🔗 @trailofbits@infosec.exchange On [@secweekly](https://bird.makeup/users/secweekly) #323, mastodon

      On @secweekly #323,
      @securingdev evaluates the realistic applications of GenAI in security, comparing it to traditional tools like fuzzing and static analysis while identifying where human expertise remains irreplaceable.
      https://www.youtube.com/watch?si=zTtl2QVHxTEk_kSF&v=zn3LT4BqOJo&feature=youtu.be

    5. 🔗 mhx/dwarfs dwarfs-0.11.3 release

      Bugfixes

      • Handle absolute paths in --input-list. Fixes github #259.
      • Don't prefetch blocks that are already in the active list within the block cache.
      • Ensure that statistics for block tidying are correctly updated in the block cache.
      • A few build fixes, mainly to simplify building on Alpine.

      New Contributors

      Full Changelog: v0.11.2...v0.11.3

      SHA-256 Checksums

      de7f6609a4ddd6f2feff4cb4e43c4481515c5da178bbe12db24de9e7fec48bac  dwarfs-0.11.3-Linux-aarch64-clang-reldbg-stacktrace.tar.xz
      7c0835b89871e48025b2e30577fb1b3c39927f9b86940dd3c5d7c41871e12533  dwarfs-0.11.3-Linux-aarch64-clang.tar.xz
      2e771b53ebee66278b3a2e6e18fd04e20abc0f6defccb5a347dbfc2b7436b729  dwarfs-0.11.3-Linux-x86_64-clang-reldbg-stacktrace.tar.xz
      adc3fc58d36848a312e846f0e737056b7e406894e24fa20d80fcc476ca7f401f  dwarfs-0.11.3-Linux-x86_64-clang.tar.xz
      5ccfc293d74e0509a848d10416b9682cf7318c8fa9291ba9e92e967b9a6bb994  dwarfs-0.11.3.tar.xz
      33b3488bc1097b1b2b54194eaa5fb169dfded9a6046de6c4fee693d9a97ece32  dwarfs-0.11.3-Windows-AMD64.7z
      e14c0caa38a8d10273a84e57f532e513b2cbc50bb8df707b57c01d575f040a43  dwarfs-universal-0.11.3-Linux-aarch64-clang
      7d4857ee18ffae705a41f164a0a810f173bf8d69bc8bef8dcbd1018fa8287f6e  dwarfs-universal-0.11.3-Linux-aarch64-clang-reldbg-stacktrace
      64b349aec059b9d460211af6c517f6edd89e79c5e9581381229af745ebf3cc87  dwarfs-universal-0.11.3-Linux-x86_64-clang
      07a9ef68256e76e7bda552b24955a3db12c3b34312d7d664e47639995ccdabf1  dwarfs-universal-0.11.3-Linux-x86_64-clang-reldbg-stacktrace
      772f00d5d02fdebca4cbd74f4b1b37ee7b57fb3004a078c053b4f58e97a794ed  dwarfs-universal-0.11.3-Windows-AMD64.exe
      
    6. 🔗 sacha chua :: living an awesome life 2025-03-31 Emacs news rss

      Links from reddit.com/r/emacs, r/orgmode, r/spacemacs, r/planetemacs, Mastodon #emacs, Bluesky #emacs, Hacker News, lobste.rs, programming.dev, lemmy.world, lemmy.ml, planet.emacslife.com, YouTube, the Emacs NEWS file, Emacs Calendar, and emacs-devel. Thanks to Andrés Ramírez for emacs-devel links. Do you have an Emacs-related link or announcement? Please e-mail me at sacha@sachachua.com. Thank you!

      You can comment on Mastodon or e-mail me at sacha@sachachua.com.

    7. 🔗 Aider-AI/aider Aider v0.80.0 release
      • OpenRouter OAuth integration:

        • Offer to OAuth against OpenRouter if no model and keys are provided.
        • Select OpenRouter default model based on free/paid tier status if OPENROUTER_API_KEY is set and no model is specified.
        • Prioritize gemini/gemini-2.5-pro-exp-03-25 if GEMINI_API_KEY is set, and vertex_ai/gemini-2.5-pro-exp-03-25 if VERTEXAI_PROJECT is set, when no model is specified.
      • Validate user-configured color settings on startup and warn/disable invalid ones.

      • Warn at startup if --stream and --cache-prompts are used together, as cost estimates may be inaccurate.

      • Boost repomap ranking for files whose path components match identifiers mentioned in the chat.

      • Change web scraping timeout from an error to a warning, allowing scraping to continue with potentially incomplete content.

      • Left-align markdown headings in the terminal output, by Peter Schilling.

      • Update edit format to the new model's default when switching models with /model, if the user was using the old model's default format.

      • Add the openrouter/deepseek-chat-v3-0324:free model.

      • Add Ctrl-X Ctrl-E keybinding to edit the current input buffer in an external editor, by Matteo Landi.

      • Fix linting errors for filepaths containing shell metacharacters, by Mir Adnan ALI.

      • Add repomap support for the Scala language, by Vasil Markoukin.

      • Fixed bug in /run that was preventing auto-testing.

      • Fix bug preventing UnboundLocalError during git tree traversal.

      • Handle GitCommandNotFound error if git is not installed or not in PATH.

      • Handle FileNotFoundError if the current working directory is deleted while aider is running.

      • Fix completion menu current item color styling, by Andrey Ivanov.

      • Aider wrote 87% of the code in this release.

      Full change log:
      https://aider.chat/HISTORY.html

    8. 🔗 Aider-AI/aider v0.80.1.dev release

      set version to 0.80.1.dev

    9. 🔗 Szymon Kaliski Q1 2025 rss

      Motorizing External Blinds, Dry Filament, and Yearning for a Software Scope

    10. 🔗 matklad Random Numbers Included rss

      Mar 31, 2025

      I’ve recently worked on a PRNG API for TigerBeetle, and made a surprising discovery! While most APIs work best with “half-open” intervals, for(int i = 0; i < n; i++), it seems that random numbers really work best with closed intervals, ≤n.

      First, a closed interval means that you can actually generate the highest possible number:

      prng.range_inclusive(
          u32,
          math.intMax(u32) - 9,
          math.intMax(u32),
      );
      

      This call generates one of the ten largest u32s. With exclusive ranges, you’d have to generate a u64 and downcast it.

      Second, a closed interval removes the possibility of a subtle crash. It is impossible to generate a random number less than zero, so exclusive APIs are panicky: rng.random_range(..n) can crash when n is zero!

      Third, as the flip side of the previous point, pushing the -1 to the call site makes it immediately obvious that there’s a non-zero precondition:

      const replica = prng.int_inclusive(u8, replica_count - 1);
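      matklad's snippets are Zig; as a rough sketch of the same idea in Python (the helper name `int_inclusive` here is hypothetical, not TigerBeetle's actual API), a closed-interval helper can reach the maximum value and never panics on a zero-sized range, while the exclusive form raises when asked for a number below zero:

      ```python
      import random

      def int_inclusive(max_value: int) -> int:
          """Uniform random integer in the closed interval [0, max_value]."""
          return random.randint(0, max_value)  # randint is inclusive on both ends

      U32_MAX = 2**32 - 1

      # Closed interval: the highest possible value is actually reachable.
      x = int_inclusive(U32_MAX)
      assert 0 <= x <= U32_MAX

      # Exclusive-style APIs crash on an empty range: random.randrange(0) raises
      # ValueError. The closed-interval form instead pushes the "- 1" to the call
      # site, making the non-zero precondition visible:
      replica_count = 6
      replica = int_inclusive(replica_count - 1)  # requires replica_count >= 1
      assert 0 <= replica < replica_count
      ```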
      
  2. March 30, 2025
    1. 🔗 astral-sh/uv 0.6.11 release

      Release Notes

      Enhancements

      • Add dependents ("via ..." comments) in uv export command (#12350)
      • Bump least-recent non-EOL macOS version to 13.0 (#12518)
      • Support --find-links-style "flat" indexes in [[tool.uv.index]] (#12407)
      • Distinguish between -q and -qq (#12300)

      Configuration

      • Support the UV_PROJECT environment variable to set the project directory (#12327)

      Performance

      • Use a boxed slice for various requirement types (#12514)

      Bug fixes

      • Add a newline after metadata when initializing scripts with other metadata blocks (#12501)
      • Avoid writing empty requires-python to script blocks (#12517)
      • Respect build constraints in uv sync (#12502)
      • Respect transitive dependencies in uv tree --only-group (#12560)

      uv 0.6.11

      Install uv 0.6.11

      Install prebuilt binaries via shell script

      curl --proto '=https' --tlsv1.2 -LsSf https://github.com/astral-sh/uv/releases/download/0.6.11/uv-installer.sh | sh
      

      Install prebuilt binaries via powershell script

      powershell -ExecutionPolicy Bypass -c "irm https://github.com/astral-sh/uv/releases/download/0.6.11/uv-installer.ps1 | iex"
      

      Download uv 0.6.11

      File | Platform | Checksum
      ---|---|---
      uv-aarch64-apple-darwin.tar.gz | Apple Silicon macOS | checksum
      uv-x86_64-apple-darwin.tar.gz | Intel macOS | checksum
      uv-aarch64-pc-windows-msvc.zip | ARM64 Windows | checksum
      uv-i686-pc-windows-msvc.zip | x86 Windows | checksum
      uv-x86_64-pc-windows-msvc.zip | x64 Windows | checksum
      uv-aarch64-unknown-linux-gnu.tar.gz | ARM64 Linux | checksum
      uv-i686-unknown-linux-gnu.tar.gz | x86 Linux | checksum
      uv-powerpc64-unknown-linux-gnu.tar.gz | PPC64 Linux | checksum
      uv-powerpc64le-unknown-linux-gnu.tar.gz | PPC64LE Linux | checksum
      uv-s390x-unknown-linux-gnu.tar.gz | S390x Linux | checksum
      uv-x86_64-unknown-linux-gnu.tar.gz | x64 Linux | checksum
      uv-armv7-unknown-linux-gnueabihf.tar.gz | ARMv7 Linux | checksum
      uv-aarch64-unknown-linux-musl.tar.gz | ARM64 MUSL Linux | checksum
      uv-i686-unknown-linux-musl.tar.gz | x86 MUSL Linux | checksum
      uv-x86_64-unknown-linux-musl.tar.gz | x64 MUSL Linux | checksum
      uv-arm-unknown-linux-musleabihf.tar.gz | ARMv6 MUSL Linux (Hardfloat) | checksum
      uv-armv7-unknown-linux-musleabihf.tar.gz | ARMv7 MUSL Linux | checksum

      uv-build 0.6.11

      Install uv-build 0.6.11

      Install prebuilt binaries via shell script

      curl --proto '=https' --tlsv1.2 -LsSf https://github.com/astral-sh/uv/releases/download/0.6.11/uv-build-installer.sh | sh
      

      Install prebuilt binaries via powershell script

      powershell -ExecutionPolicy Bypass -c "irm https://github.com/astral-sh/uv/releases/download/0.6.11/uv-build-installer.ps1 | iex"
      

      Download uv-build 0.6.11

      File | Platform | Checksum
      ---|---|---
      uv-build-aarch64-apple-darwin.tar.gz | Apple Silicon macOS | checksum
      uv-build-x86_64-apple-darwin.tar.gz | Intel macOS | checksum
      uv-build-aarch64-pc-windows-msvc.zip | ARM64 Windows | checksum
      uv-build-i686-pc-windows-msvc.zip | x86 Windows | checksum
      uv-build-x86_64-pc-windows-msvc.zip | x64 Windows | checksum
      uv-build-aarch64-unknown-linux-gnu.tar.gz | ARM64 Linux | checksum
      uv-build-i686-unknown-linux-gnu.tar.gz | x86 Linux | checksum
      uv-build-powerpc64-unknown-linux-gnu.tar.gz | PPC64 Linux | checksum
      uv-build-powerpc64le-unknown-linux-gnu.tar.gz | PPC64LE Linux | checksum
      uv-build-s390x-unknown-linux-gnu.tar.gz | S390x Linux | checksum
      uv-build-x86_64-unknown-linux-gnu.tar.gz | x64 Linux | checksum
      uv-build-armv7-unknown-linux-gnueabihf.tar.gz | ARMv7 Linux | checksum
      uv-build-aarch64-unknown-linux-musl.tar.gz | ARM64 MUSL Linux | checksum
      uv-build-i686-unknown-linux-musl.tar.gz | x86 MUSL Linux | checksum
      uv-build-x86_64-unknown-linux-musl.tar.gz | x64 MUSL Linux | checksum
      uv-build-arm-unknown-linux-musleabihf.tar.gz | ARMv6 MUSL Linux (Hardfloat) | checksum
      uv-build-armv7-unknown-linux-musleabihf.tar.gz | ARMv7 MUSL Linux | checksum

    2. 🔗 News Minimalist Project update + 2 significant news stories rss

      I have a big update to share: News Minimalist now covers news in 21 more languages! Read the full announcement after the news.


      Today ChatGPT read 30812 top news stories. After removing previously covered events, there are 2 articles with a significance score over 5.9.

      [6.1] Unmanned rocket explodes 40 seconds after launch in Norway —cbsnews.com

      The unmanned Spectrum rocket, developed by Isar Aerospace, exploded 40 seconds after launching from Norway's Andoya Spaceport on March 30, 2025. This was the first attempt at an orbital rocket launch from the European continent, excluding Russia.

      Despite the explosion, Isar Aerospace deemed the test flight a success, as it met its goals of achieving a clean liftoff and validating its systems. Prior to the launch, the company had downplayed expectations, stating that no new orbital rockets had successfully reached orbit.

      This launch was particularly significant as it was primarily funded by the private sector. Previous European attempts to launch rockets into orbit have typically involved government-supported entities, such as the European Space Agency.

      [6.3] Hamas agrees to Gaza ceasefire; Israel counters —abc.net.au

      Hamas has accepted a new ceasefire proposal for Gaza from Egypt and Qatar. This proposal includes Hamas releasing five hostages in exchange for Israel allowing aid into Gaza and pausing fighting.

      Israel has responded with a counter-proposal, announced through Prime Minister Benjamin Netanyahu's office, but no details were provided. Recent fighting resumed after Israel launched airstrikes, killing many in Gaza.

      The conflict has resulted in over 50,000 deaths in Gaza, marking the highest toll in any conflict with Israel in more than 40 years. Families of hostages in Israel are protesting, calling for a resolution that would ensure their loved ones are returned safely.

      Highly covered news with significance over 5.5

      [5.5] Myanmar earthquake death toll rises to 1,644
      (apnews.com + 923)

      [5.8] Trump to announce new tariffs on April 2
      (news.yahoo.com + 292)

      [5.6] U.S. military focuses solely on China as threat
      (t-online.de + 14)

      [5.5] Cuts in foreign aid may cause 3M HIV deaths
      (politico.eu + 7)

      [5.6] Gaia space telescope officially retires after decade-long mission
      (space.com + 5)

      [5.8] New Syrian government includes first woman minister
      (rr.sapo.pt + 39)


      Since its inception, News Minimalist has only covered news in English. This gave good enough coverage of events, since most significant events are usually covered in English.

      But it also created a bias toward English-speaking countries, giving us a somewhat skewed perspective on the world.

      With the addition of new languages, we’ll get much better coverage of perspectives. We’ll see how major events are viewed in other parts of the world. A couple of stories today already came from non-English sources.

      The new update supports 21 more languages : Arabic, Chinese, Dutch, French, German, Greek, Hebrew, Hindi, Italian, Japanese, Malayalam, Marathi, Norwegian, Portuguese, Romanian, Russian, Spanish, Swedish, Tamil, Telugu, and Ukrainian.

      The change almost doubled the number of stories processed each day from 15k to 30k. It also increased the number of stories in high-significance range (5+) from 30 to 50.

      All the articles are translated and summarized in English — so you don’t need to do it yourself.

      I’ve wanted to do this since the beginning and am very happy to finally have it done.

      Please let me know if you have any feedback, find any issues with translations or want me to add more languages.

      Thank you!

      — Vadim


      P.S.: This update considerably improved local coverage in non-English speaking countries. You can choose countries and set individual significance thresholds on premium.



    3. 🔗 sacha chua :: living an awesome life Moving 18 years of comments out of Disqus and into my 11ty static site rss

      Assumed audience: Technical bloggers who like:

      • static site generators: this post is about moving more things into my SSG
      • XML: check out the mention of xq, which offers a jq-like interface
      • or Org Mode: some notes here about Org Babel source blocks and graphing

      I've been thinking of getting rid of the Disqus blog commenting system for a while. I used to use it in the hopes that it would handle spam filtering and the "someone has replied to your comment" notification for me. Getting rid of Disqus means one less thing that needs JavaScript, one less thing that tracks people in ways we don't want, one less thing that shows ads and wants to sell our attention. Comments are rare enough these days, so I think I can handle e-mailing people when there are replies.

      There are plenty of alternative commenting systems to choose from. Comentario and Isso are self-hosted, while Commento (USD 10/month) and Hyvor Talk (12 euro/month) are services. Utterances uses GitHub issues, which is probably not something I'll try as quite a few people in the Emacs community are philosophically opposed to GitHub. Along those lines, if I can find something that works without JavaScript, that would be even better.

      I could spend a few years trying to figure out which system I might like in terms of user interface, integration, and spam-filtering, but for now, I want to:

      Fortunately, there's 11ty/eleventy-import-disqus (see zachleat's blog post: Import your Disqus Comments to Eleventy).

      Exploring my disqus.xml with xq, Org Babel, and seaborn

      One challenge: there are a lot of comments. How many? I got curious about analyzing the XML, and then of course I wanted to do that from Emacs. I used pipx install yq to install yq so that I could use the xq tool to query the XML, much like jq works.

      My uncompressed Disqus XML export was 28MB. I spent some time deleting spam comments through the web interface, which helped with the filtering. I also deleted some more comments from the XML file as I noticed them. I needed to change /wp/ to /blog/, too.

      This is how I analyzed the archive for non-deleted posts, uniquified based on message. I'll include the full Org source of that block (including the header lines) in my blog post so that you can see how I call it later.

      #+NAME: analyze-disqus
      #+begin_src shell :var rest="| length | \"\\(.) unique comments\"" :exports results
      ~/.local/bin/xq -r "[.disqus.post[] |
         select(.isDeleted != \"true\" and .message) |
         {key: .message, value: .}] |
        map(.value) |
        unique_by(.message) ${rest}" < disqus.xml
      #+end_src
      

      When I evaluate that with C-c C-c, I get:

      8265 unique comments

      I was curious about how it broke down by year. Because I named the source code block and used a variable to specify how to process the filtered results earlier, I can call that with a different value.

      Here's the call in my Org Mode source:

      #+CALL: analyze-disqus(rest="| map(.createdAt[0:4]) | group_by(.) | map([(.[0]), length]) | reverse | [\"Year\", \"Count\"], .[] | @csv") :results table output :wrap my_details Table of comment count by year
      
      Table of comment count by year

      Year | Count
      ---|---
      2025 | 26
      2024 | 43
      2023 | 34
      2022 | 40
      2021 | 55
      2020 | 131
      2019 | 107
      2018 | 139
      2017 | 186
      2016 | 196
      2015 | 593
      2014 | 740
      2013 | 960
      2012 | 784
      2011 | 924
      2010 | 966
      2009 | 1173
      2008 | 1070
      2007 | 98

      I tried fiddling around with Org's #+PLOT keyword, but I couldn't figure out how to get the bar graph the way I wanted it to be. Someday, if I ever figure that out, I'll definitely save the Gnuplot setup as a snippet. For now, I visualized it using seaborn instead.

      Code for graphing comments by year
      import pandas as pd
      import seaborn as sns
      import matplotlib.pyplot as plt
      import numpy as np
      
      df = pd.DataFrame(data[1:], columns=data[0])
      df['Count'] = df['Count'].astype(int)
      df['Year'] = df['Year'].astype(int)
      df = df.sort_values('Year')
      plt.figure(figsize=(12, 6))
      ax = sns.barplot(x='Year', y='Count', data=df)
      plt.title('Comments by Year (2007-2025)', fontsize=16, fontweight='bold')
      plt.xlabel('Year')
      plt.ylabel('Comments')
      plt.xticks(rotation=45)
      plt.grid(axis='y')
      for i, v in enumerate(df['Count']):
          ax.text(i, v + 20, str(v), ha='center', fontsize=9)
      plt.tight_layout()
      plt.savefig('year_count_plot.svg')
      return 'year_count_plot.svg'
      
      year_count_plot.svg

      Ooooooh, I can probably cross-reference this with the number of posts from my /blog/all/index.json file. I used Claude AI's help to come up with the code below, since merging data and plotting them nicely is still challenging for me. Now that I have the example, though, maybe I can do other graphs more easily. (This looks like a related tutorial on combining barplots and lineplots.)

      Code for graphing
      import pandas as pd
      import seaborn as sns
      import matplotlib.pyplot as plt
      import numpy as np
      import json
      from matplotlib.ticker import FuncFormatter
      from datetime import datetime
      
      with open('/home/sacha/proj/static-blog/_site/blog/all/index.json', 'r') as f:
          posts_data = json.load(f)
      
      # Process post data
      posts_df = pd.DataFrame(posts_data)
      posts_df['Year'] = pd.to_datetime(posts_df['date']).dt.year
      post_counts = posts_df.groupby('Year').size().reset_index(name='post_count')
      
      # Convert to DataFrame
      comments_df = pd.DataFrame(comment_data[1:], columns=comment_data[0])
      comments_df['Count'] = comments_df['Count'].astype(int)
      comments_df['Year'] = comments_df['Year'].astype(int)
      
      # Merge the two dataframes
      merged_df = pd.merge(post_counts, comments_df, on='Year', how='outer').fillna(0)
      merged_df = merged_df.sort_values('Year')
      
      # Calculate comments per post ratio
      merged_df['comments_per_post'] = merged_df['Count'] / merged_df['post_count']
      merged_df['comments_per_post'] = merged_df['comments_per_post'].replace([np.inf, -np.inf], np.nan).fillna(0)
      
      # Create a single figure instead of two subplots
      fig, ax1 = plt.subplots(figsize=(15, 8))
      
      # Custom colors
      post_color = "#1f77b4"    # blue
      comment_color = "#ff7f0e" # orange
      ratio_color = "#2ca02c"   # green
      
      # Setting up x-axis positions
      x = np.arange(len(merged_df))
      width = 0.35
      
      # Bar charts on first y-axis
      bars1 = ax1.bar(x - width/2, merged_df['post_count'], width, color=post_color, label='Posts')
      bars2 = ax1.bar(x + width/2, merged_df['Count'], width, color=comment_color, label='Comments')
      ax1.set_ylabel('Count (Posts & Comments)', fontsize=12)
      
      # Add post count values above bars
      for i, bar in enumerate(bars1):
          height = bar.get_height()
          if height > 0:
              ax1.text(bar.get_x() + bar.get_width()/2., height + 5,
                      f'{int(height)}', ha='center', va='bottom', color=post_color, fontsize=9)
      
      # Add comment count values above bars
      for i, bar in enumerate(bars2):
          height = bar.get_height()
          if height > 20:  # Only show if there's enough space
              ax1.text(bar.get_x() + bar.get_width()/2., height + 5,
                      f'{int(height)}', ha='center', va='bottom', color=comment_color, fontsize=9)
      
      # Line graph on second y-axis
      ax2 = ax1.twinx()
      line = ax2.plot(x, merged_df['comments_per_post'], marker='o', color=ratio_color,
                    linewidth=2, label='Comments per Post')
      ax2.set_ylabel('Comments per Post', color=ratio_color, fontsize=12)
      ax2.tick_params(axis='y', labelcolor=ratio_color)
      ax2.set_ylim(bottom=0)
      
      # Add ratio values near line points
      for i, ratio in enumerate(merged_df['comments_per_post']):
          if ratio > 0:
              ax2.text(i, ratio + 0.2, f'{ratio:.1f}', ha='center', color=ratio_color, fontsize=9)
      
      # Set x-axis labels
      ax1.set_xticks(x)
      ax1.set_xticklabels(merged_df['Year'], rotation=45)
      ax1.set_title('Blog Posts, Comments, and Comments per Post by Year', fontsize=16, fontweight='bold')
      ax1.grid(axis='y')
      
      # Add combined legend
      lines1, labels1 = ax1.get_legend_handles_labels()
      lines2, labels2 = ax2.get_legend_handles_labels()
      ax1.legend(lines1 + lines2, labels1 + labels2, loc='upper left')
      
      # Layout and save
      plt.tight_layout()
      plt.savefig('posts_comments_analysis.svg')
      return 'posts_comments_analysis.svg'
      
      posts_comments_analysis.svg

      Timeline notes:

      • In this graph, comments are reported by the timestamp of the comment, not the date of the post.
      • In 2007 or so, I moved to Wordpress from planner-rss.el. I think I eventually imported those Wordpress comments into Disqus when I got annoyed with Wordpress comments (Akismet? notifications?).
      • In 2008 and 2009, I was working on enterprise social computing at IBM. I made a few presentations that were popular. Also, mentors and colleagues posted lots of comments.
      • In 2012, I started my 5-year experiment with semi-retirement.
      • In 2016, A+ was born, so I wrote far fewer posts.
      • In 2019/2020, I wrote a lot of blog posts documenting how I was running EmacsConf with Emacs, and other Emacs tweaks along the way. The code is probably very idiosyncratic (… unless you happen to know other conference organizers who like to do as much as possible within Emacs? Even then, there are lots of assumptions in the code), but maybe people picked up useful ideas anyway. =)

      What were my top 20 most-commented posts?

      Emacs Lisp code for most-commented posts
      (let* ((json-object-type 'alist)
             (json-array-type 'list)
             (comments-json (json-read-file "~/proj/static-blog/_data/commentsCounts.json"))
             (posts-json (json-read-file "~/proj/static-blog/_site/blog/all/index.json"))
             (post-map (make-hash-table :test 'equal)))
        ;; map permalink to title
        (dolist (post posts-json)
          (let ((permalink (cdr (assoc 'permalink post)))
                (title (cdr (assoc 'title post))))
            (puthash permalink title post-map)))
        ;; Sort comments by count (descending)
        (mapcar
         (lambda (row)
           (list
            (cdr row)
            (org-link-make-string
             (concat "https://sachachua.com" (symbol-name (car row)))
             (with-temp-buffer
               (insert (or (gethash (symbol-name (car row)) post-map) (symbol-name (car row))))
               (mm-url-decode-entities)
               (buffer-string)))))
         (seq-take
          (sort comments-json
                (lambda (a b) (> (cdr a) (cdr b))))
          n)))
      
      97 blog/contact
      88 Even more awesome LotusScript mail merge for Lotus Notes + Microsoft Excel
      75 blog/about
      45 How to Learn Emacs: A Hand-drawn One-pager for Beginners / A visual tutorial
      42 Planning an Emacs-based personal wiki – Org? Muse? Hmm…
      38 Married!
      37 Moving from testing to development
      36 What can I help you learn? Looking for mentees
      33 Lotus Notes mail merge from a Microsoft Excel spreadsheet
      30 Nothing quite like Org for Emacs
      30 Org-mode and habits
      29 zomg, Evernote and Emacs
      25 Literate programming and my Emacs configuration file
      25 Reinvesting time and money into Emacs
      23 The Gen Y Guide to Web 2.0 at Work
      22 Drupal: Overriding Drupal autocompletion to pass more parameters
      21 Rhetoric and the Manila Zoo; reflections on conversations and a request for insight
      20 This is a test post from org2blog
      19 Agendas
      19 Paper, Tablet, and Tablet PC: Comparing tools for sketchnoting

      Top 3 by year. Note that this goes by the timestamp of the post, not the comment, so even old posts are in here.

      Emacs Lisp code for most-commented posts by year
      (let* ((json-object-type 'alist)
             (json-array-type 'list)
             (comments-json (json-read-file "~/proj/static-blog/_data/commentsCounts.json"))
             (posts-json (json-read-file "~/proj/static-blog/_site/blog/all/index.json"))
             by-year)
        (setq posts-json
              (mapcar
               (lambda (post)
                 (let ((comments (alist-get (intern (alist-get 'permalink post)) comments-json)))
                   (if comments
                       (cons (cons 'comments (alist-get (intern (alist-get 'permalink post)) comments-json 0))
                             post)
                     post)))
               posts-json))
        (setq by-year
              (seq-group-by
               (lambda (o)
                 (format-time-string "%Y"
                                     (date-to-time
                                      (alist-get 'date o))
                                     "America/Toronto"))
               (seq-filter (lambda (o) (alist-get 'comments o)) posts-json)))
        (org-list-to-org
         (cons 'unordered
               (seq-keep
                (lambda (year)
                  (list
                   (org-link-make-string (concat "https://sachachua.com/blog/" (car year))
                                         (car year))
                   (cons 'unordered
                         (mapcar
                          (lambda (entry)
                            (list (format "%s (%d)"
                                          (org-link-make-string
                                           (concat "https://sachachua.com" (alist-get 'permalink entry))
                                           (with-temp-buffer
                                             (insert (alist-get 'title entry))
                                             (mm-url-decode-entities)
                                             (buffer-string)))
                                          (alist-get 'comments entry))))
                          (seq-take
                           (sort
                            (cdr year)
                            (lambda (a b) (> (alist-get 'comments a)
                                             (alist-get 'comments b))))
                           n)))))
                (nreverse by-year)))))
      

      As you can probably tell, I love writing about Emacs, especially when people drop by in the comments to:

      • share that they'd just learned about some small thing I mentioned in passing and that it was really useful for this other part of their workflow that I totally wouldn't have guessed
      • point out a simpler package or built-in Emacs function that also does whatever clever hack I wrote about, just in a more polished way
      • link to a blog post or code snippet where they've borrowed the idea and added their own spin

      I want to keep having those sorts of conversations.

      Deleting spam comments via the Disqus web interface and Spookfox

      8000+ comments are a lot to read, but it should be pretty straightforward to review the comments at least until 2016 or so, and then just clean out spam as I come across it after that. I used the Disqus web interface to delete spam comments since the isSpam attribute didn't seem to be reliable. The web interface pages through comments 25 items at a time and doesn't seem to let you select all of them, so I started tinkering around with using Spookfox to automate this. Spookfox lets me control Mozilla Firefox from Emacs Lisp.

      (progn
        ;; select all
        (spookfox-eval-js-in-active-tab "document.querySelector('.mod-bar__check input').click()")
        (wait-for 1)
        ;; delete
        (spookfox-eval-js-in-active-tab "document.querySelectorAll('.mod-bar__button')[2].click()")
        (wait-for 2)
        ;; click OK, which should make the list refresh
        (spookfox-eval-js-in-active-tab "btn = document.querySelectorAll('.mod-bar__button')[1]; if (btn.textContent.match('OK')) btn.click();")
        (wait-for 4)
        ;; backup: (spookfox-eval-js-in-active-tab "window.location.href = 'https://sachac.disqus.com/admin/moderate/spam'")
        )
      

      I got to the end of the spam comments after maybe 10 or 20 pages, though, so maybe Disqus had auto-deleted most of the spam comments.

      It's almost amusing, paging through all these spammy attempts at link-building and product promotion. I didn't want to click on any of the links since there might be malware, so sometimes I used curl to check the site. Most of the old spam links I checked don't even have working domains any more. Anything that needed spam didn't really have lasting power. It was all very "My name is Ozymandias, king of kings: / Look on my works, ye Mighty, and despair!"… and then gone.

      Modifying eleventy-import-disqus for my site

      Back to eleventy-import-disqus. I followed the directions to make a contentMap.json and removed the trailing , from the last entry so that the JSON could be parsed.

      Modifications to eleventy-import-disqus:

      • The original code created all the files in the same directory, so I changed it to create the same kind of nested structure I use (generally ./blog/yyyy/mm/post-slug/index.html and ./blog/yyyy/mm/post-slug/index.11tydata.json). I decided to store the Disqus comments in index.json, which is lower-priority than .11tydata.json. fs-extra made this easier by creating all the parent directories.
      • Ignored deleted messages
      • Discarded avatars
      • Did some reporting to help me review potential spam
      • Reparented messages if I deleted their parent posts
      • Indented the thread JSON nicely in case I want to add or remove comments by hand

      With the thread JSON files, my blog takes 143 seconds to generate, versus 133 seconds without the comments. +10 seconds isn't too bad. I was worried that it would be longer, since I added 2,088 data JSON files to the build process, but I guess 11ty is pretty efficient.

      Next steps

      It had been nice to have a comment form that people could fill in from anywhere and which shared their comments without needing my (often delayed) intervention. I learned lots of things from what people shared. Sometimes people even had discussions with each other, which was extra cool. Still, I think it might be a good time to experiment with alternatives. Plain e-mail for now, I guess, maybe with a nudge asking people if I could share their comments. Mastodon, too - could be fun to make it easy to add a toot to the static comments from mastodon.el or from my Org Mode inbox. (Update 2025-03-30: Adding Mastodon toots as comments in my 11ty static blog) Might be good to figure out Webmentions, too. (But then other people have been dealing with spam Webmentions, of course.)

      Comment counts can be useful social signals for interesting posts. I haven't added comment counts to the lists of blog posts yet. eleventy-import-disqus created a commentsCounts.json, which I could use in my templates. However, I might change the comments in the per-post .json file if I figure out how to include Mastodon comments, so I may need to update that file or recalculate it from the posts.

      Many of the blogs I read have shifted away from commenting systems, and the ones who still have comments on seem to be bracing for AI-generated comment spam. I'm not sure I like the way the Internet is moving, but maybe in this little corner, we can still have conversations across time. Comments are such a wonderful part of learning out loud. I wonder how we can keep learning together.

      You can comment on Mastodon, view 8 comments, or e-mail me at sacha@sachachua.com.

    4. 🔗 Register Spill Joy & Curiosity #33 rss

      At the start of this week I read this post by Steven Sinofsky on tech going hardcore again in which he quotes an older article of his, from 2005, that refers to this 1989 book called Programmers At Work, whose subtitle reveals what's inside: "Interviews With 19 Programmers Who Shaped the Computer Industry". I had read Coders at Work many years ago, but didn't know there was a spiritual predecessor to it.

      So I immediately went and bought it used.

      Then, a few days later, I re-discover part of a long-lived file on my computer called "Mission Statement.md" that contains my Programming Principles. On line 237 I wrote the following: "The history of the software is as important as its future."

      Then, yesterday, the used copy of Programmers At Work arrives here. I open it and here's the first paragraph I lay my eyes on, from the interview with Charles Simonyi:

      INTERVIEWER: What was your first professional program?

      SIMONYI: The first professional program that I wrote was a compiler for a very simple, FORTRAN-like, high-level language. I sold it to a state organization as an innovation and made a fair amount of money, none of which I ever spent, since I left Hungary soon after.

      Way to start your career, eh?

      I browse through the book and end up on this paragraph, from the interview with Andy Hertzfeld:

      HERTZFELD: I found I had a talent for programming. A computer gives an amazing feeling of control and power to a kid. To think of something, and then get the computer to do what you thought of, was such a great feeling. It always has been. That's what attracted me to the field. Learning to program is like learning to ride a bicycle; you can't read books about it. You have to do it.

      For everything that's changed since 1989, this hasn't, has it? You can still think of something and get the computer to do what you thought of. And it's still a great feeling.

      • From Sinofsky's post: "Here we are in 2025, with all the companies having gone through layoffs, reduced benefits, and the vibe shift as some might say about focus on execution, delivering, and prioritization of important work. I think history will record that post-bubble era of perks and 'Willy Wonka' as the aberration and what we are seeing today as the best practice for innovation."

      • Let's start with the quote and you have to guess the name of the article: "Computer architecture isn't telling a machine what to do. It's establishing the possibility that it can be told anything at all. The work is superhuman, if not fully alien. Put it this way: If you found the exact place in a human being where matter becomes mind, where body becomes soul--a place that no scientist or philosopher or spiritual figure has found in 5,000 years of frantic searching--wouldn't you tread carefully? One wrong move and everything goes silent." No, sorry, your guess is wrong, the article is called Angelina Jolie Was Right About Computers and it's about RISC-V. Before reading, I knew what RISC was and I had heard about RISC-V, but didn't really have a clue. The article put some meat on those acronymical bones.

      • ThePrimeagen was on the Lex Fridman podcast and they spoke for at least 5h20m, which is quite a lot and also why I haven't listened to the whole thing yet. But I do love Prime and it made me very happy to see that he's been given such a platform. He's one of the nicest, kindest people I've had the luck of meeting online. Now, that being said: I do love Lex's view on programming with AI here, it's exactly how I feel about it, and I'm prepared to also record a 5h20m podcast with Prime in which I make him see the beauty in all of this.

      • This is very, very good: A Field Guide to Rapidly Improving AI Products. Point two, about simple data viewers, is spot on in my experience. Best part is that it's so easy to build them now.

      • As someone who once wanted to be a magazine writer just like this one, I loved Bryan Burrough's review of Graydon Carter's memoir about his 25-year run as Vanity Fair's editor. It's both a review of a memoir and a mini-memoir itself. It's wonderful writing. While reading, I wrote in a note "urls!!" because this is one of the very, very, very rare articles that links to other articles it discusses. For example, it links to this 1999 article by Burrough about Bernard Arnault's attempted takeover of Gucci with the great name "Gucci And Goliath". If you've listened to the (fantastic) AcquiredFM episode on LVMH you know the backstory. But, back to Burrough's article on Carter, isn't this what working relationships should be about: "A connection of sorts was forged. After that, Graydon began calling me regularly; over the next fifteen years, probably three of every four stories I wrote were his ideas. I told myself, grandly, that I was 'Graydon's guy.' I'm sure others thought something similar. His story ideas were simple. They often consisted of a single word. If, say, Rupert Murdoch was involved in something scandalous, he would call and say, 'Wanna do Murdoch?' We both knew what he meant, what I was to deliver."

      • Surprisingly, the second Wired article I read this week that was also good and also about something I knew but not really: Inside arXiv -- the Most Transformative Platform in All of Science. It has a very spicy paragraph in it that you need to read (you'll know) and this line: "In 2021, the journal Nature declared arXiv one of the '10 computer codes that transformed science,' praising its role in fostering scientific collaboration. (The article is behind a paywall--unlock it for $199 a year.)"

      • This article had its ten year anniversary this week, which is how I found it. Read it, please, I'm convinced you'll be happy you did: I Played 'The Boys Are Back in Town' on a Bar Jukebox Until I Got Kicked Out.

      • While we're on the topic of "you'll be happy you read this", let's go back eleven years further into the past and read this Onion article from 2004 that should be carved into stone because every sentence (I swear: every sentence) in it is worth it: Fuck Everything, We're Doing Five Blades.

      • I wish I understood half of this post on Scarcity and Abundance in 2025. The parts that I do understand (or, at least, think I do) are insightful: "If you try to put your finger on what exactly distinguishes the products that feel magical from those that feel like slop, you'll notice something interesting. The pointless incumbent products - Genmoji or Text Summaries from Apple, 'contextual' anything from Google and Microsoft, that kind of stuff - express an ethos of maximizing the value of a software asset of some kind. There's a whiff of attitude where "the codebase is the capital", and the point of all these AI tools is to keep drilling for undiscovered value in the asset. Contrast this to everywhere I see people ravenously using new tools, like Cursor for coding, OpenAI Operator, or even fairly 'basic' uses like lawyers using NotebookLM to summarize case documents in a way they can listen to in the car. There is no concept of an 'asset' here; the value of the product to the user does not really depend on rich context, network effects, or some other obvious software incumbency. The software is just doing work , and the work is tangibly value-additive, even if it requires some human supervision." I'm not sure whether I can really, clearly point out the distinction, but I think I can feel it?

      • Thanks to the previous link I found out that Dwarkesh Patel wrote a book. It's called The Scaling Era and I want you to turn sound on and go to this landing page. Then scroll and zoom and click & rotate and… yes. I love it, man. So, I bought the Kindle version (even though I'm still reading The Power Broker) and here's the intro: "There's a Sherlock Holmes story that captures our relationship with large language models. A new client comes to Baker Street. With a single glance, Holmes rattles off the man's life story: that he lived in China, that he is a Freemason, that he writes a lot. The client, astonished, asks how Holmes knows all this. In great detail, Holmes explains the series of deductions that led him to his conclusions. The client responds, 'I thought at first that you had done something clever, but I see that there was nothing in it, after all.'"

      • "We forgot how not to spy and steal attention" -- good reminder, even if a bit shallow in places.

      • This article on v8 "leaving the Sea of Nodes" is one of the best articles I've ever read on compiler IRs and, believe me, I tried to find nearly every one I could a few years ago. When I was building my optimizing compiler, I read Cliff Click's PhD thesis (Cliff Click, btw., is probably the greatest name in computers and the person who came up with Sea of Nodes) and couldn't make a lot of sense of it. I read the other paper and, I don't know, kinda gave up on it and then went with a CFG in SSA-form. Now, I'm wondering, how much my decision was affected by not having articles that are this clear -- look at the examples! the graphs! -- and, in general, how many technical decisions are made like this.

      • "If you're new to tech - say, less than 5 years in the field - you should take career advice from people who've been in the industry more than 10-15 years with enormous skepticism."

      • Very good post by Michael Lynch on How to Write Blog Posts that Developers Read. I want to highlight one line here and disagree with half of it: "You can also use free stock photos and AI-generated images, as they're better than nothing, but they're worse than anything else, including terrible MS Paint drawings." No, please, do not use stock photos in your blog. Never in the history of blogging has someone stopped and thought to themselves "huh, wow, that's a good stock photo choice here" and no one ever will.

      • Ben Thompson interviewed Sam Altman. I caught myself thinking "that's not as batshit crazy sounding as it did a few years ago" when reading this: "Where I think there's strategic edges, there's building the giant Internet company. I think that should be a combination of several different key services. There's probably three or four things on the order of ChatGPT, and you'll want to buy one bundled subscription of all of those. You'll want to be able to sign in with your personal AI that's gotten to know you over your life, over your years to other services and use it there. There will be, I think, amazing new kinds of devices that are optimized for how you use an AGI. There will be new kinds of web browsers, there'll be that whole cluster, someone is just going to build the valuable products around AI."

      • Craig Mod launched "a new members-only social network called 'The Good Place'": "It's no exaggeration to say that using Claude Code to build The Good Place (and also a bunch of other small tools and projects) is one of the most astonishing computing experiences of my life. It's difficult to articulate how utterly empowering a tool like Claude Code (paired with malleable software, open software, open systems (i.e., not iOS/iPadOS)) is for someone like me." Read on for his thoughts on social media and his social network. It's very worth it.

      • I have to admit: I'm a sucker for the word agency and I'm a sucker for anecdotes. No surprise then that I fell really hard for this, yes, website: highagency.com. Yes, it's a single page that you can scroll through. How wonderful is that? That old saying of "this book should've been a blog post"? Well, this website is the opposite: it's a website that knows exactly what it is and isn't.

      If you started listening to The Boys Are Back In Town while reading this, you should subscribe:

      Bonus: this is a page from Programmers At Work and shows the wonderfully silly IconBounce program by Andy Hertzfeld:

      And, yes , the appendix does contain the program:

    5. 🔗 matklad Tariffs rss

      Tariffs Mar 30, 2025

      A farmhand and a composer are reading Pravda, a transcript of Khrushchev’s speech about music. Farmhand sums up: “You have it better, music he at least understands!”

      An old Soviet joke.

      Off-topic post about economics. I have no idea what I am talking about here, but I do have some intrusive thoughts about tariffs.

      First , optimal amount of tariffs might be non-zero. In terms of efficiency, tariffs are a pure loss: primarily through distortions to optimal allocation of resources, but also through bureaucratic overhead to enforce the tariffs themselves. But it seems to me that an ideal, frictionless world with zero tariffs would see extreme centralization in every industry. This seems bad from the resilience perspective — small, local shocks could lead to global disruptions. Tariffs should produce some amount of decentralization, which, while inefficient, is less sensitive to initial conditions.

      Second , tariffs might lead to more automation. My understanding is that a big driver for moving industries around is labor costs. Human labor is a major input of manufactured goods, and wages are a significant component of costs. There are two ways to cut the costs: either you figure out a way to make the product with less labor, or you move the production where the labor is cheap. Tariffs and minimum wage make labor costlier, pushing towards reducing the amount of work through better efficiency.

      Third , tariffs might be politically sticky, via public choice theory. Tariffs create concentrated interests, which tend to wield more political power than diffuse public interest. It is easier for a single tariff-protected industry to coordinate to lobby to maintain the tariffs than it is for everyone else to coordinate to repeal them, even though tariffs might be net-negative.

      P.S. While undoubtedly prompted by the world outside, this post is explicitly not a commentary on any specific events, real or imaginary.

    6. 🔗 matklad Deno Simple Server Side Rendering rss

      Deno Simple Server Side Rendering Mar 30, 2025

      I’ve finally cleared a bit of technical debt I had in the implementation of this blog. I don’t use a templating engine, and instead define all of the templates in code. JavaScript template literals (backtick strings) make this relatively nice:

      return html`
          <time ${cls ? `class="${cls}"` : ""} datetime="${machine}">
              ${human}
          </time>`;
      

      Still, given that Deno comes with JSX “out of the box”, an even better approach should be theoretically possible:

      return <time class={className} datetime={machine}>{human}</time>;
      

      In practice, this is needlessly fiddly, at least for someone like me, who hasn't used JSX before. As far as I can tell, there isn't anything built into Deno or Deno's std that would allow me to write code like this:

      const html: string = render_to_string(<div>
          Hello, world!
      </div>);
      

      The suggestion is to use some library, and that increases the annoyance manyfold: which library? which CDN/registry should I be pulling it from? how can I vendor it without adding a hundred auxiliary files?

      While Deno in general is refreshingly out-of-the-box, the “I want to render an HTML tree into a string” part decidedly is not.

      Fundamentally, the JSX library is easy — it is just some boilerplate tree-constructing code. There's a dozen micro JSX libraries out there; you can pick one. Alternatively, you can ask your local LLM to give you a single-file no-dependencies thing, it'll probably do a decent job. My version is here, less than 100 lines of code: tsx.ts.

      To use this file, I add

      /** @jsx h */
      /** @jsxFrag Fragment */
      import { escapeHtml, h, Raw, render, VNode } from "./tsx.ts";
      

      at the start of the templates.tsx file with my templates. No changes to deno.jsonc are required.
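
      To give a feel for how little boilerplate such a micro JSX library needs, here is a sketch in plain JavaScript. The names h, render, and escapeHtml echo the imports above, but this is an illustration under my own assumptions, not the author's tsx.ts (fragments and void elements are omitted for brevity):

```javascript
// Escape text and attribute values for HTML output.
function escapeHtml(s) {
  return String(s)
    .replace(/&/g, "&amp;").replace(/</g, "&lt;")
    .replace(/>/g, "&gt;").replace(/"/g, "&quot;");
}

// The JSX factory: build a plain tree node.
function h(tag, props, ...children) {
  return { tag, props: props ?? {}, children: children.flat() };
}

// Render a tree (or a bare string/number) to an HTML string.
function render(node) {
  if (node == null || node === false) return "";
  if (typeof node !== "object") return escapeHtml(node);
  const attrs = Object.entries(node.props)
    .map(([k, v]) => ` ${k}="${escapeHtml(v)}"`)
    .join("");
  return `<${node.tag}${attrs}>${node.children.map(render).join("")}</${node.tag}>`;
}
```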

      One thing which you don't get for free with this ad hoc implementation is nice formatting of HTML. I'd love the source to be at least somewhat readable. Luckily, the deno command-line tool comes with an HTML formatter out of the box, so you could use that to prettify the results:

      const t_fmt = performance.now();
      const { success } = await new Deno.Command(Deno.execPath(), {
        args: ["fmt", "./out/www"],
      }).output();
      if (!success) throw "deno fmt failed";
      ctx.fmt_ms = performance.now() - t_fmt;
      

      Running this on every file is much too slow, but reformatting everything at the end is fast enough!

  3. March 29, 2025
    1. 🔗 sacha chua :: living an awesome life Week ending March 28, 2025: mastodon.el tweaks, search, workflows rss
      • I've been practising fretting less about homework.
      • I added an On this day page to my blog. (blog post about it)
      • I added Mastodon links to my blog. I think the process will be: post the blog post; toot to Mastodon; edit the blog post and republish. I might be able to save time and just copy over the blog post during the first go-around, from make serve.
      • I added Pagefind search to my blog.
      • I wrote about some of my workflows.
      • I started a /now page.
      • Oops: I forgot to check on Emacs Berlin and it turned out that the NAS timezone was set to GMT-5 instead of America/Toronto, so I scrambled to get it set up. I also got distracted while trying to figure out how to revoke the token the NAS was using so it wouldn't downscale automatically, so that might have wrapped up the meeting early. I set up cronjobs on xu4 for next time.

      Next week:

      • Continue to reduce fretting about homework.
      • Work through intermediate piano course in Simply Piano. Practise more songs, too.
      • Take a look at that inbox and start dusting things off.

      Blog posts

      Sketches

      Toots

      • eleventy-post-graph (toot) I used eleventy-post-graph to add a quick year visualization to my year pages (2025, 2024, …) and a visualization for the whole blog. Someday it might be nice to make it more accessible and figure out how I can link to the blog post(s) for that day.
      • From @johnrakestraw's On keeping a notebook (toot)

        “One thing that really fascinates me is how I'm reminded of events and readings that I'd completely forgotten – but, once reminded, I find that these things are once again in my mind. Perhaps I can say what I'm thinking more clearly — though I'm more than a little frustrated by having absolutely no memory of experiencing or reading something I describe in an entry written only a few years ago, I'm fascinated by how reading what I wrote has brought that experience back to mind rather vividly. Of course I'm reminded of what I described in the text that I'm now re-reading, but I can also remember other things associated with whatever it is that is described there. It's as though the small bit that I wrote and can now read is the key that unlocks a much larger trove of memory. Funny how the mind works.”

        I am also quite fuzzy about things that happened, and I'm glad I've got notes to help me sort of remember.

      • Added comment links to my RSS feed (toot) Nudged by A Walk Through My Digital Neighborhood // Take on Rules by @takeonrules and also my recent focus on having more conversations around blog post ideas (and sometimes the annoyance of finding someone's contact info), I added comment links to my RSS/Atom items (https://sachachua.com/blog/feed/index.xml and https://sachachua.com/blog/feed/atom/index.xml, and also all the categories have feeds generally at category/…/feed/index.xml). If I've set a Mastodon URL for the entry, it'll link to the Mastodon thread too. #11ty
      • Switching to Bigger Picture for the lightbox (toot) Lightbox: I replaced PhotoSwipe with Bigger Picture, which seems nice and flexible.
      • Connections (toot) Following a link from https://manuelmoreale.com/pb-maya , I enjoyed this quote about blogging:

        Although, as well researched and as thoughtful as Houston might be there's a messiness at work here that I love; it is the true great quality of a blog. That permission to roam, to let your curiosity grab you by the lapel and hoist you across fifteen different subjects over the course of a single paragraph; blogging is pointing at things and falling in love.

      • Bull sharks and respiration (toot) My 2021 post on A list of sharks that are obligate ram ventilators continues to pop up every now and then. Someone had a question about whether bull sharks are obligate ram ventilators, so I did a little research and added whatever notes I could find there. I think maybe they aren't, although they're sometimes described as such? Not sure, maybe someone can chime in. =)
      • Programmable Notes (toot) Oooh, it could be fun to trawl through these for ideas for things to port over to Emacs.

        The Smartblocks plug-in for Roam Research is the system I personally use to build these types of workflows. It offers a set of triggers, variables, and commands you can chain together into fairly readable statements like: <%SET:topOfMindToday,<%INPUT:What's on your mind today?%>%> or <%RANDOMBLOCKFROM:Writing Ideas%>.

        Even with limited programming knowledge, many people in the community have been able to fashion their own Smartblock flows. Plenty of them have published their workflows to the community Github for others to use.

        Smartblock flows on Github

      • The promise and distraction of productivity and note-taking systems (toot)

        Books are maps to territories that are completely internal to the reader. By focusing so heavily on extracting the surface symbology of the map itself, these process-heavy note-takers risk losing sight of the territory. A book's territory is the reasoning and argument that the book presents to you as a path you take through your own psyche. The goal isn't to remember everything the book contains. Remembering a book's contents is useless. The book exists to contain what it contains. If the contents are important, you keep a copy of it for you to look things up again.

        But that isn't the point of reading. The purpose of reading is to be changed. Sometimes the change is trivial and temporary – a piece of fiction that brings some joy in your life. Sometimes the change is profound – a shift in your perspective on life. “Action items” from a book are external and forcing yourself to follow through on them is exhausting.

      • Added Pagefind search (toot) I'm also experimenting with using Pagefind to provide search for my static site using client-side Javascript. It currently analyzes 10934 files and indexes 8183 pages (87272 words) in 40 seconds. The data is 125MB, but a search for, say, "sketchnote" transfers only 280KB, so that's pretty good. I think I'm adding the date properly and I know I can set that as the default sort, but I haven't yet figured out how to make it possible for people to sort by either relevance or date as they want. I also want to eventually format the search results to include the date. Maybe Building a Pagefind UI – dee.underscore.world will be useful.
      Time
      Category                   | The other week % | Last week % | Diff % | h/wk | Diff h/wk
      Unpaid work                |              3.3 |         4.7 |    1.4 |  7.9 |       2.4
      Discretionary - Productive |             19.2 |        20.1 |    0.9 | 33.7 |       1.5
      Personal                   |              9.4 |         9.9 |    0.5 | 16.6 |       0.8
      Discretionary - Play       |              1.2 |         1.6 |    0.4 |  2.7 |       0.7
      Discretionary - Family     |              0.0 |         0.3 |    0.3 |  0.5 |       0.5
      A-                         |             31.6 |        31.5 |   -0.1 | 53.0 |      -0.1
      Business                   |              1.7 |         0.8 |   -0.9 |  1.3 |      -1.5
      Sleep                      |             33.7 |        31.1 |   -2.5 | 52.3 |      -4.3

      You can comment on Mastodon or e-mail me at sacha@sachachua.com.

    2. 🔗 sacha chua :: living an awesome life Org Mode: Cutting the current list item (including nested lists) with a speed command rss

      Defining shortcuts in org-speed-commands is handy because you can use these single-key shortcuts at the beginning of a subtree. With a little modification, they'll also work at the beginning of list items.

      (defun my-org-use-speed-commands-for-headings-and-lists ()
        "Activate speed commands on list items too."
        (or (and (looking-at org-outline-regexp) (looking-back "^\\**" nil))
            (save-excursion (and (looking-at (org-item-re)) (looking-back "^[ \t]*" nil)))))
      (setq org-use-speed-commands 'my-org-use-speed-commands-for-headings-and-lists)
      

      I want k to be an org speed command that cuts the current subtree or list item. This is handy when I'm cleaning up the Mastodon toots in my weekly review or getting rid of outline items that I no longer need. By default, k is mapped to org-cut-subtree, but it's easy to override.

      (defun my-org-cut-subtree-or-list-item (&optional n)
        "Cut current subtree or list item."
        (cond
       ((and (looking-at org-outline-regexp) (looking-back "^\\**" nil))
          (org-cut-subtree n))
         ((looking-at (org-item-re))
          (kill-region (org-beginning-of-item) (org-end-of-item)))))
      (with-eval-after-load 'org
        (setf (alist-get "k" org-speed-commands nil nil #'string=)
              #'my-org-cut-subtree-or-list-item))
      

      So now, if I put my cursor before "1." below and press k:

      - this
        1. is a
          - nested
        2. list
      - with levels
      

      it will turn into:

      - this
        1. list
      - with levels

      You can find out a little more about Org Mode speed commands in the Org manual: (info "(org) Speed Keys").
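
      The same alist-get idiom can register any other single-key shortcut. As a rough sketch (the key and command below are hypothetical examples I'm using for illustration, not part of my actual config):

      ```elisp
      ;; Hypothetical example: bind "K" to plain org-cut-subtree as a
      ;; speed command, using the same alist-get pattern as above.
      ;; Keys in org-speed-commands are strings, so we compare with string=.
      (with-eval-after-load 'org
        (setf (alist-get "K" org-speed-commands nil nil #'string=)
              #'org-cut-subtree))
      ```

      Because org-speed-commands is an alist of string keys, setf on alist-get either replaces an existing binding or adds a new one, which is why the same snippet works for overriding k above.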

      This is part of my Emacs configuration.

      You can e-mail me at sacha@sachachua.com.

    3. 🔗 @cxiao@infosec.exchange damn quoting is actually kinda useful, can't wait for real quote toots 🥲 mastodon

      damn quoting is actually kinda useful, can't wait for real quote toots 🥲

    4. 🔗 @cxiao@infosec.exchange [https://yaledailynews.com/blog/2025/03/27/three-prominent-yale-professors- mastodon
    5. 🔗 @cxiao@infosec.exchange [https://rss-parrot.net/u/objective- mastodon
    6. 🔗 @cxiao@infosec.exchange This was my favourite talk from mastodon

      This was my favourite talk from @REverseConf! @mahaloz made an intimidating (to me) topic really accessible, and I feel like I now have a better understanding of the decompilers I use every day. He is a great presenter too :D Would recommend checking it out!

      #reverseengineering #decompilation #infosec https://infosec.exchange/@REverseConf/114241453480617211

  4. March 28, 2025
    1. 🔗 hyprwm/Hyprland v0.48.1 release

      This is a bugfix release with some patches cherry-picked from main on top of 0.48.0.

      Fixes backported

      • renderer: Simplify and fix hdr metadata setting
      • seat: avoid sending null surfaces in leave/enter events
      • xwl: don't close the fd too early
      • groupbar: apply scaling factor to text
      • pass: remove unused timeline in texpass
      • groupbar: round boxes
      • groupbar: include clipBox in opaque calculations
      • opengl: don't attempt to compile cm on gles3.0
      • surfacestate: track and apply updated state
      • internal: fix minor ubsan errors
      • workspaces: minor fixes to persistence
      • surfacestate: reset buffer bit before applying to current
      • core: don't damage the entire surface every frame
      • xwayland: cleanup server startup and fds

      Special thanks

      Special thanks to these gigachads for donating $$$ to help the project run:

      Top Supporters:

      Azelphur, arc-nix, ExBhal, SomeMediocreGamer, Robin B., yyyyyyyan, taigrr, Amaan Q., Xoores, Jas Singh, Theory_Lukas, JanRi3D, ari-cake, alukortti, RaymondLC92, MasterHowToLearn, johndoe42, Abdulaziz Al-Khater, AuHunter

      New Monthly Supporters:

      Brad S, tidal608, Wateir, Firstpick(FirstPick), xyrd, realivlis, DeepBlue416, omniprezenze, azunades, aljoshare, Felix, danksa, Litheos, theailer

      One-time Donators:

      SymphonySimper, FlorentL, Birbirl, MeaTLoTioN, elia, mearkat7, Darmock, KD, Yehoward, nyatta, Urbinholt, InTerFace, Marcos92, Rei (os.rei), 6thScythe, sayykii, HowlVenger, Massis, Somebody, 46620, skk9, Jeffrey, Hari, IgorJ, neriss, Sleroq, Insomnes, Stefano, AJ, Troy, JNC, Gery, Dafitt, Stefan Ernst, quake, lharlanx, pscschn, Bex Jonathan, AliAhmad02, KomariSpaghetti, wjyzxcv, Daniel, Zoltar358, Airor 987, CBeerta, lcassa, Guy incognito , nobody, m3hransh, Lunics, GeoffC, Tamas Tancos, mikelpint, Nathan Lepori, fxj9a, Volodymyr Shkvarok, Haltesh, omnicroissant, grmontpetit, jw, CheeseHunter117, Hunter Wesson, eternal, ddubs, noname, bones, pixel <3, eltwig, Jose, NN, Mikol, Pekka, Andi, Treeniks, derethil, dfseifert, munsman, Cespen, jlevesy, Bill Fucking Nye, Kumungus, crappy, Alin742, Nicholas Roth, sijink, alba4k, Barry, MK, Yasen

      Full Changelog : v0.48.0...v0.48.1

    2. 🔗 @binaryninja@infosec.exchange Early stream this morning! 10am ET, let's see how much more progress we can mastodon

      Early stream this morning! 10am ET, let's see how much more progress we can make on the new architecture and file format:

      https://www.youtube.com/@vector35/live

    3. 🔗 jank blog Can jank beat Clojure's error reporting? rss

      Hey folks! I've spent the past quarter working on jank's error messages. I've focused on reaching parity with Clojure's error reporting and improving upon it where possible. This has been my first quarter spent working on jank full-time and I've been so excited to sit at my desk every morning and get hacking. Thank you to all of my sponsors and supporters! You help make this work possible.

    4. 🔗 tonsky.me Talk: Clojure workflow with Sublime Text @ SciCloj rss

      A deep overview of Clojure Sublimed, Socket REPL, Sublime Executor, custom color scheme, clj-reload and Clojure+.
      We discuss many usability choices, implementation details, and broader observations and insights regarding Clojure editors and tooling in general.