- May 17, 2025
-
astral-sh/uv 0.7.5 release
Release Notes
Bug fixes
- Support case-sensitive module discovery in the build backend (#13468)
- Bump Simple cache bucket to v16 (#13498)
- Don't error when the script is too short for the buffer (#13488)
- Add missing word in "script not supported" error (#13483)
Install uv 0.7.5
Install prebuilt binaries via shell script
curl --proto '=https' --tlsv1.2 -LsSf https://github.com/astral-sh/uv/releases/download/0.7.5/uv-installer.sh | sh
Install prebuilt binaries via powershell script
powershell -ExecutionPolicy Bypass -c "irm https://github.com/astral-sh/uv/releases/download/0.7.5/uv-installer.ps1 | iex"
Download uv 0.7.5
File | Platform | Checksum
---|---|---
uv-aarch64-apple-darwin.tar.gz | Apple Silicon macOS | checksum
uv-x86_64-apple-darwin.tar.gz | Intel macOS | checksum
uv-aarch64-pc-windows-msvc.zip | ARM64 Windows | checksum
uv-i686-pc-windows-msvc.zip | x86 Windows | checksum
uv-x86_64-pc-windows-msvc.zip | x64 Windows | checksum
uv-aarch64-unknown-linux-gnu.tar.gz | ARM64 Linux | checksum
uv-i686-unknown-linux-gnu.tar.gz | x86 Linux | checksum
uv-powerpc64-unknown-linux-gnu.tar.gz | PPC64 Linux | checksum
uv-powerpc64le-unknown-linux-gnu.tar.gz | PPC64LE Linux | checksum
uv-s390x-unknown-linux-gnu.tar.gz | S390x Linux | checksum
uv-x86_64-unknown-linux-gnu.tar.gz | x64 Linux | checksum
uv-armv7-unknown-linux-gnueabihf.tar.gz | ARMv7 Linux | checksum
uv-aarch64-unknown-linux-musl.tar.gz | ARM64 MUSL Linux | checksum
uv-i686-unknown-linux-musl.tar.gz | x86 MUSL Linux | checksum
uv-x86_64-unknown-linux-musl.tar.gz | x64 MUSL Linux | checksum
uv-arm-unknown-linux-musleabihf.tar.gz | ARMv6 MUSL Linux (Hardfloat) | checksum
uv-armv7-unknown-linux-musleabihf.tar.gz | ARMv7 MUSL Linux | checksum
-
- May 16, 2025
-
syncthing/syncthing v2.0.0-rc.15 release
What's Changed
- fix(config): deep copy configuration defaults (fixes #9916) by @hazemKrimi in #10101
- fix(config): mark audit log options as needing restart (fixes #10099) by @marbens-arch in #10100
- fix(model): correct bufferpool handling; simplify by @calmh in #10113
Full Changelog: v2.0.0-rc.14...v2.0.0-rc.15
-
Evan Schwartz New Life Hack: Using LLMs to Generate Constraint Solver Programs for Personal Logistics Tasks rss
I enjoy doing escape rooms and was planning to do a couple of them with a group of friends this weekend. The very minor and not-very-important challenge, however, was that I couldn't figure out how to assign friends to rooms. I want to do at least one room with each person, different people are arriving and leaving at different times, and there are only so many time slots. Both Claude 3.7 Sonnet and ChatGPT o3 tried and failed to figure out a workable solution given my constraints. However, after asking the LLMs to generate a constraint solver program and massaging the constraints a bit, I was able to find a solution.
TL;DR: If you need help juggling a bunch of constraints for some personal logistical task, try asking an LLM to translate your requirements into a constraint solver program. You'll quickly find out whether a solution exists and, if not, you can edit the constraints to find the best approach.
Background
The escape room location we're going to is PuzzleConnect in New Jersey. They have 3 rooms that are all Overwhelmingly Positively rated on Morty, which practically guarantees that they're going to be good.
Side note on Morty: Escape rooms as a category are fun but very hit-or-miss. Knowing which rooms are going to be great and which aren't is a game changer because you can skip the duds. Morty is an app specifically for reviewing escape rooms, and it shows you how many rooms each reviewer has played. If someone who has gone out of their way to play 100+ escape rooms says a room is great, you can probably take their word for it. If you happen to be in the Netherlands, Belgium, or Luxembourg, EscapeTalk.nl is also fantastic.
Side side note: I'm not affiliated with Morty or EscapeTalk in any way. I just appreciate what they do.
Constraints
- We have 8 players. I'll just identify them as A, B, D, E, J, P, S, and T. Unsurprisingly, I'm E.
- The three escape rooms are Candy, Christmas, and Temple themed.
- Originally, we had 7 time slots (2, 2, and 3) to work with. PuzzleConnect, however, was flexible and let us add a few more.
- 5 players were arriving at 11am, while 3 could only arrive after 12pm. 2 players needed to leave by 3:30pm.
- I want to play at least one room with each person.
- Every room can technically take 2-5+ people, but it's more fun if you limit it to 2-3.
- Each room takes roughly 1 hour and no player can be in more than one room at a time.
- Most players wanted to play all 3 rooms, but a couple were flexible or preferred playing 2.
As you can see, this is a fair number of constraints to juggle. I started with pen and paper, then a spreadsheet, but quickly gave up and asked the LLMs.
Constraint Solvers
Constraint solvers are programs in which you declaratively express constraints on the solution, and the solver then explores the space of possible states to find one that satisfies them all. In practice, solvers use lots of clever methods and heuristics to explore the state space efficiently.
Unlike imperative programming, where you specify the steps for the program to take, under this paradigm you just describe a valid final state. In addition to hard constraints, you can also provide soft constraints that the solver will attempt to maximize.
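The declarative idea can be seen in a toy brute-force version (a hypothetical mini-problem, not the author's actual model): we only state what a valid assignment looks like, and the search does the rest. Real solvers like CP-SAT explore the space far more cleverly than this exhaustive loop.

```python
from itertools import product

# Toy illustration of the declarative style (hypothetical mini-problem):
# describe what a valid final state looks like; let the search - here
# plain brute force - find it.
players = ["A", "B", "C"]
rooms = ["Candy", "Temple"]

def is_valid(assignment):
    # hard constraint 1: every room gets at least one player
    rooms_covered = all(any(assignment[p] == r for p in players) for r in rooms)
    # hard constraint 2: A and B must play together
    ab_together = assignment["A"] == assignment["B"]
    return rooms_covered and ab_together

solutions = [
    dict(zip(players, combo))
    for combo in product(rooms, repeat=len(players))
    if is_valid(dict(zip(players, combo)))
]
print(solutions)  # the two assignments that satisfy both constraints
```

Out of the eight possible assignments, only two survive the constraints, which is exactly the kind of answer a solver hands back.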
I would not have thought to ask the LLMs to build a constraint solver program for me if not for Carolyn Zech's talk at this week's Rust NYC meetup about verifying the Rust standard library (see the announcement and the project on Github).
Escaping the Escape Room Assignment Puzzle
I have no experience writing programs for constraint solvers, but I was able to describe all of my constraints and ChatGPT was perfectly capable of translating those requirements into code.
In this case, we used Google's OR-Tools Python package.
The first version was impossible to satisfy, but it worked once I added and moved around some time slots. Now that a workable solution exists, it's interesting to look at how hard and soft constraints are expressed.
For example, each player can only play each room once:
```python
# ---------------------------------------------------------------------------
# 2.2 One session per theme for each player (no repeats)
# ---------------------------------------------------------------------------
for p in players:
    for theme in {s[1] for s in sessions}:
        same_theme = [i for i, s in enumerate(sessions) if s[1] == theme]
        if len(same_theme) > 1:
            model.Add(sum(x[p][i] for i in same_theme) <= 1)
```
Or, my desire to play at least one room with each person expressed as a soft constraint looks like this:
```python
# Nice 3 - E plays with everyone at least once
if NICE_E_WITH_ALL:
    for q in players:
        if q == "E":
            continue
        together = []
        for i in range(num_sessions):
            both = model.NewBoolVar(f"E_{q}_{i}")
            model.Add(both <= x["E"][i])
            model.Add(both <= x[q][i])
            model.Add(both >= x["E"][i] + x[q][i] - 1)
            together.append(both)
        meet_flag = model.NewBoolVar(f"E_with_{q}")
        model.AddMaxEquality(meet_flag, together)
        obj_terms.append(meet_flag)
```
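The three `model.Add` lines per session are the standard linearization of a boolean AND: for 0/1 variables, `both <= a`, `both <= b`, and `both >= a + b - 1` together force `both == a AND b`. A quick pure-Python check of the truth table (not part of the original program) confirms this:

```python
# For 0/1 values, the inequalities both <= a, both <= b, both >= a + b - 1
# admit exactly one feasible value for `both`: the logical AND of a and b.
for a in (0, 1):
    for b in (0, 1):
        feasible = [v for v in (0, 1) if v <= a and v <= b and v >= a + b - 1]
        assert feasible == [a & b]
print("both is forced to a AND b in every case")
```

This trick is why a purely linear model can express logical conjunctions like "E and q are in the same session".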
You can find the full code below.
Conclusion
I'm not a big fan of balancing logistical tasks and constraints, but I do like finding optimal solutions. Getting an LLM to generate a constraint solver program for me to find optimal or feasible solutions is a nice new life hack that I'm sure I'll be using again.
ChatGPT can run Python code that it generates for you, but `ortools` isn't available as an import (for now!). It would be neat if OpenAI or Anthropic added it as a dependency and trained the models to reach for it when given a set of hard constraints or an optimization problem to solve. In the meantime, though, I'll just use `uv run --with ortools ...` to optimize random life logistics.
Code
```python
from ortools.sat.python import cp_model
from datetime import datetime

# ---------------------------------------------------------------------------
# Helper to convert "HH:MM" -> minutes since 00:00
# ---------------------------------------------------------------------------
def t(hhmm: str) -> int:
    h, m = map(int, hhmm.split(":"))
    return h * 60 + m

# ---------------------------------------------------------------------------
# 1) INPUT DATA --- tweak these dictionaries / lists as you like
# ---------------------------------------------------------------------------
players = ["A", "B", "D", "E", "J", "P", "S", "T"]

# (min, max) rooms each player must play
quota = {
    "A": (2, 2), "B": (2, 2), "S": (2, 3),
    "D": (3, 3), "E": (3, 3), "J": (3, 3), "P": (3, 3), "T": (3, 3),
}

# (label, theme, start-time, end-time) - freely add / remove sessions
sessions = [
    ("Candy1", "Candy", t("11:00"), t("11:50")),
    ("Candy2", "Candy", t("12:00"), t("12:50")),
    ("Candy3", "Candy", t("13:00"), t("13:50")),
    ("Xmas1", "Christmas", t("11:00"), t("11:50")),
    ("Xmas2", "Christmas", t("12:00"), t("12:50")),
    ("Xmas3", "Christmas", t("13:00"), t("13:50")),
    ("Xmas4", "Christmas", t("14:00"), t("14:50")),
    ("Temple1", "Temple", t("11:00"), t("11:50")),
    ("Temple2", "Temple", t("12:00"), t("12:50")),
    ("Temple3", "Temple", t("13:00"), t("13:50")),
    ("Temple4", "Temple", t("14:00"), t("14:50")),
    # ("Temple5", "Temple", t("15:00"), t("15:50")),
]

# Nice-to-haves - set False if you do not care
NICE_EDT_T2 = True       # E, D, T together in Temple2
NICE_AB_TOGETHER = True  # A & B share at least one session
NICE_E_WITH_ALL = True   # E meets everyone at least once

# Arrival / departure windows
arrival = {p: t("11:00") for p in players}
arrival.update({"A": t("12:15"), "B": t("12:15"), "S": t("12:15")})
depart = {p: t("23:59") for p in players}
depart.update({"P": t("15:30"), "T": t("15:30")})

# Capacity limits (min only applies if the session is actually used)
CAP_MIN = 2
CAP_MAX = 3

# ---------------------------------------------------------------------------
# 2) CP-SAT MODEL
# ---------------------------------------------------------------------------
model = cp_model.CpModel()
num_sessions = len(sessions)

# x[p][i] == 1  <=>  player p attends session i
x = {
    p: [model.NewBoolVar(f"x[{p},{i}]") for i in range(num_sessions)]
    for p in players
}

# y[i] == 1  <=>  session i is actually used (>= 1 player)
y = [model.NewBoolVar(f"used[{i}]") for i in range(num_sessions)]

# ---------------------------------------------------------------------------
# 2.1 Quotas
# ---------------------------------------------------------------------------
for p in players:
    lo, hi = quota[p]
    model.Add(sum(x[p][i] for i in range(num_sessions)) >= lo)
    model.Add(sum(x[p][i] for i in range(num_sessions)) <= hi)

# ---------------------------------------------------------------------------
# 2.2 One session per theme for each player (no repeats)
# ---------------------------------------------------------------------------
for p in players:
    for theme in {s[1] for s in sessions}:
        same_theme = [i for i, s in enumerate(sessions) if s[1] == theme]
        if len(same_theme) > 1:
            model.Add(sum(x[p][i] for i in same_theme) <= 1)

# ---------------------------------------------------------------------------
# 2.3 Arrival / departure filtering
# ---------------------------------------------------------------------------
for p in players:
    for i, (_, _, start, end) in enumerate(sessions):
        if start < arrival[p] or end > depart[p]:
            model.Add(x[p][i] == 0)

# ---------------------------------------------------------------------------
# 2.4 No overlaps per player
# ---------------------------------------------------------------------------
for p in players:
    for i in range(num_sessions):
        si, ei = sessions[i][2:4]
        for j in range(i + 1, num_sessions):
            sj, ej = sessions[j][2:4]
            if si < ej and sj < ei:  # true overlap (shared minutes)
                model.Add(x[p][i] + x[p][j] <= 1)

# ---------------------------------------------------------------------------
# 2.5 Link "used" variable with player attendance + capacity bounds
# ---------------------------------------------------------------------------
for i in range(num_sessions):
    total_here = sum(x[p][i] for p in players)
    model.Add(total_here >= CAP_MIN).OnlyEnforceIf(y[i])
    model.Add(total_here <= CAP_MAX)
    # If anyone attends -> session is used
    for p in players:
        model.Add(x[p][i] <= y[i])
    # At least 1 attend => used=1 (big-M style)
    model.Add(total_here >= 1).OnlyEnforceIf(y[i])
    model.Add(total_here <= 0).OnlyEnforceIf(y[i].Not())

# ---------------------------------------------------------------------------
# 3) SOFT CONSTRAINTS / OBJECTIVE FUNCTION
# ---------------------------------------------------------------------------
obj_terms = []

# Nice 1 - E, D, T together in Temple2
if NICE_EDT_T2:
    idx_t2 = next(i for i, s in enumerate(sessions) if s[0] == "Temple2")
    edt = model.NewBoolVar("edt_in_T2")
    model.Add(x["E"][idx_t2] + x["D"][idx_t2] + x["T"][idx_t2] >= 3 * edt)
    model.Add(x["E"][idx_t2] + x["D"][idx_t2] + x["T"][idx_t2] <= edt + 2)
    obj_terms.append(edt)

# Nice 2 - at least one game with A & B together
if NICE_AB_TOGETHER:
    ab_shared = model.NewBoolVar("A_B_together")
    ab_in_any = []
    for i in range(num_sessions):
        s_var = model.NewBoolVar(f"AB_{i}")
        model.Add(s_var <= x["A"][i])
        model.Add(s_var <= x["B"][i])
        model.Add(s_var >= x["A"][i] + x["B"][i] - 1)
        ab_in_any.append(s_var)
    model.AddMaxEquality(ab_shared, ab_in_any)
    obj_terms.append(ab_shared)

# Nice 3 - E plays with everyone at least once
if NICE_E_WITH_ALL:
    for q in players:
        if q == "E":
            continue
        together = []
        for i in range(num_sessions):
            both = model.NewBoolVar(f"E_{q}_{i}")
            model.Add(both <= x["E"][i])
            model.Add(both <= x[q][i])
            model.Add(both >= x["E"][i] + x[q][i] - 1)
            together.append(both)
        meet_flag = model.NewBoolVar(f"E_with_{q}")
        model.AddMaxEquality(meet_flag, together)
        obj_terms.append(meet_flag)

# Nice 4 - at least one game with T & P together
TP_shared = model.NewBoolVar("T_P_together")
tp_in_any = []
for i in range(num_sessions):
    s_var = model.NewBoolVar(f"TP_{i}")
    model.Add(s_var <= x["T"][i])
    model.Add(s_var <= x["P"][i])
    model.Add(s_var >= x["T"][i] + x["P"][i] - 1)
    tp_in_any.append(s_var)
model.AddMaxEquality(TP_shared, tp_in_any)
obj_terms.append(TP_shared)

# Maximise the total number of soft constraints satisfied
if obj_terms:
    model.Maximize(sum(obj_terms))

# ---------------------------------------------------------------------------
# 4) SOLVE & PRINT RESULT
# ---------------------------------------------------------------------------
solver = cp_model.CpSolver()
solver.parameters.max_time_in_seconds = 60  # increase if needed
status = solver.Solve(model)

if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print("\nSchedule found (soft-score =", solver.ObjectiveValue(), ")\n")

    def hm(m):
        return f"{m // 60:02d}:{m % 60:02d}"

    for i, (label, theme, start, end) in enumerate(sessions):
        ppl = [p for p in players if solver.Value(x[p][i])]
        if ppl:
            print(f"{theme:<10} {label:<7} {hm(start)}-{hm(end)} : {', '.join(ppl)}")
else:
    print("\nNo feasible schedule under the current hard constraints.")
```
-
@binaryninja@infosec.exchange Our colleagues at Aarno Labs have published a follow-up addressing mastodon
Our colleagues at Aarno Labs have published a follow-up addressing vulnerabilities in production stripped binaries, even if the vendor doesn't fix it themselves. This is related to our collaboration through the ARPA-H DIGIHEALS program. More details can be found here: https://www.aarno-labs.com/blog/post/high-assurance-remediation-of-cve-2024-12248/
-
Confessions of a Code Addict Building (and Breaking) Your First X86 Assembly Program rss
Introduction
Recap from the previous article: In the last couple of articles, we built a simple but complete mental model of how a basic computer executes instructions. We explored how an ALU performs arithmetic operations, how registers serve as fast-access storage, and how the control unit fetches and executes instructions stored in memory. We also introduced the structure of assembly programs, the use of labels, and how data and instructions are laid out in different sections like `.text` and `.data`. That article provided the conceptual foundation we need to now dive into real X86-64 assembly code.
This article is part of my series on the basics of X86-64 assembly programming. Until now, we have been working mostly with ideas. We talked about what it means for a computer to execute a program, how computation is carried out by hardware, and how memory is laid out to store data and instructions. We have seen snippets of assembly here and there, but we haven't written a full program yet. That changes now.
In this article, we will write our first complete (well, almost) assembly program. It won't do anything exciting, but that's the point. Like "Hello, world" in high-level languages, this program is just a vehicle to help us understand the mechanics of how an assembly program is written, assembled, linked, and executed. Along the way, we'll revisit some of the concepts we've discussed before and see how they manifest in actual code.
If you haven't read the previous articles in this series, here's what you have missed:
This article is part of a paid subscriber series.
If you're enjoying the content, please consider upgrading to a paid plan to unlock the rest of this series. Paid subscribers also get access to recordings of past live sessions, early and discounted access to courses and books, and more. Alternatively, you can purchase an ebook version of this series. (If you're already a paid subscriber, email me for a discounted link.)
-
- May 15, 2025
-
oxigraph/oxigraph v0.4.10 release
- New `oxjsonld` crate with JSON-LD 1.0 parsing and serialization.
- `oxigraph` binary is now distributed on PyPI.
- Small improvements in parsers and serializers.
-
astral-sh/uv 0.7.4 release
Release Notes
Enhancements
- Add more context to external errors (#13351)
- Align indentation of long arguments (#13394)
- Preserve order of dependencies which are sorted naively (#13334)
- Align progress bars by largest name length (#13266)
- Reinstall local packages in `uv add` (#13462)
- Rename `--raw-sources` to `--raw` (#13348)
- Show 'Downgraded' when `self update` is used to install an older version (#13340)
- Suggest `uv self update` if required uv version is newer (#13305)
- Add 3.14 beta images to uv Docker images (#13390)
- Add comma after "i.e." in Conda environment error (#13423)
- Be more precise in unpinned packages warning (#13426)
- Fix detection of sorted dependencies when include-group is used (#13354)
- Fix display of HTTP responses in trace logs for retry of errors (#13339)
- Log skip reasons during Python installation key interpreter match checks (#13472)
- Redact credentials when displaying URLs (#13333)
Bug fixes
- Avoid erroring on `pylock.toml` dependency entries (#13384)
- Avoid panics for cannot-be-a-base URLs (#13406)
- Ensure cached realm credentials are applied if no password is found for index URL (#13463)
- Fix `.tgz` parsing to respect true extension (#13382)
- Fix double self-dependency (#13366)
- Reject `pylock.toml` in `uv add -r` (#13421)
- Retain dot-separated wheel tags during cache prune (#13379)
- Retain trailing comments after PEP 723 metadata block (#13460)
Documentation
Preview features
- Build backend: Normalize glob paths (#13465)
Install uv 0.7.4
Install prebuilt binaries via shell script
curl --proto '=https' --tlsv1.2 -LsSf https://github.com/astral-sh/uv/releases/download/0.7.4/uv-installer.sh | sh
Install prebuilt binaries via powershell script
powershell -ExecutionPolicy Bypass -c "irm https://github.com/astral-sh/uv/releases/download/0.7.4/uv-installer.ps1 | iex"
Download uv 0.7.4
File | Platform | Checksum
---|---|---
uv-aarch64-apple-darwin.tar.gz | Apple Silicon macOS | checksum
uv-x86_64-apple-darwin.tar.gz | Intel macOS | checksum
uv-aarch64-pc-windows-msvc.zip | ARM64 Windows | checksum
uv-i686-pc-windows-msvc.zip | x86 Windows | checksum
uv-x86_64-pc-windows-msvc.zip | x64 Windows | checksum
uv-aarch64-unknown-linux-gnu.tar.gz | ARM64 Linux | checksum
uv-i686-unknown-linux-gnu.tar.gz | x86 Linux | checksum
uv-powerpc64-unknown-linux-gnu.tar.gz | PPC64 Linux | checksum
uv-powerpc64le-unknown-linux-gnu.tar.gz | PPC64LE Linux | checksum
uv-s390x-unknown-linux-gnu.tar.gz | S390x Linux | checksum
uv-x86_64-unknown-linux-gnu.tar.gz | x64 Linux | checksum
uv-armv7-unknown-linux-gnueabihf.tar.gz | ARMv7 Linux | checksum
uv-aarch64-unknown-linux-musl.tar.gz | ARM64 MUSL Linux | checksum
uv-i686-unknown-linux-musl.tar.gz | x86 MUSL Linux | checksum
uv-x86_64-unknown-linux-musl.tar.gz | x64 MUSL Linux | checksum
uv-arm-unknown-linux-musleabihf.tar.gz | ARMv6 MUSL Linux (Hardfloat) | checksum
uv-armv7-unknown-linux-musleabihf.tar.gz | ARMv7 MUSL Linux | checksum
-
News Minimalist Google's AI designs algorithms outperforming experts + 2 more stories rss
Today ChatGPT read 29481 top news stories. After removing previously covered events, there are 3 articles with a significance score over 5.9.
[6.6] Google DeepMind's AI creates algorithms exceeding human expertise - wired.com (+12)
Google DeepMind's AlphaEvolve AI has designed novel algorithms exceeding human capabilities in areas like datacenter scheduling and chip design. This showcases potential for AI-driven innovation.
AlphaEvolve combines Gemini's coding skills with evolutionary methods to create more efficient algorithms. It surpassed a 56-year-old matrix calculation method and improved algorithms for optimizing large language models.
Experts note AlphaEvolve's advancements apply to algorithms involving searching potential answers. However, it demonstrates AI's potential to generate novel ideas through experimentation, rather than simply remixing existing knowledge.
[5.9] China cut emissions despite increased power demand - internazionale.it (Italian) (+1)
China's carbon emissions decreased in the first quarter of 2025, a significant shift driven by renewable energy growth.
This reduction occurred despite a 2.5% increase in electricity demand. The decline, 1.6% year-on-year, was attributed to increased wind, solar, and nuclear power generation.
China aims to peak emissions by 2030 and achieve carbon neutrality by 2060, investing heavily in renewables. However, the nation still relies heavily on coal and lags on its carbon intensity reduction goals.
[5.9] US seeks Syria's normalization but demands key concessions - irishtimes.com (+311)
The United States is moving to normalize relations with Syria, lifting decades of sanctions after a meeting between US and Syrian leaders. This follows the ousting of the Assad regime.
The US is demanding Syria sever ties with Palestinian militant groups, protect minority rights, and address weapons of mass destruction. These conditions are prerequisites for full normalization and further investment.
This shift follows years of conflict and sanctions, including a $10 million bounty on the current Syrian leader. The US also seeks Syria's support in combating ISIS and resolving the status of foreign fighters.
Highly covered news with significance over 5.5
[5.6] Dutch research says universe will die sooner than expected - cbsnews.com (+12)
[5.7] Macron considers deploying French nuclear weapons in Europe - thehindu.com (+65)
[5.6] Worldwide internal displacement reached record high in 2024 - letemps.ch (French) (+6)
[5.6] US sells $142B arms package to Saudi Arabia - thehindu.com (+122)
[5.6] Pakistan shot down Indian planes with Chinese jets - theguardian.com (+26)
[5.6] Claw print fossils found in Australia rewrite story of amniotes by 40 million years - theguardian.com (+13)
[5.6] UK, Germany will develop a 2,000km strike weapon - ndtv.com (+10)
[5.6] EV sales will exceed 25% globally in 2025 - iea.org (+5)
Thanks for reading!
- Vadim
You can create your own personal newsletter like this with premium.
-
The Pragmatic Engineer Stack overflow is almost dead rss
Originally published in The Pragmatic Engineer Newsletter .
Four months ago, we asked Are LLMs making Stack Overflow irrelevant? Data at the time suggested that the answer is likely "yes:"
Number of questions asked per month on StackOverflow. Data source: this Gist
Since then, things at Stack Overflow went from bad to worse. The volume of questions asked has nearly dried up, new data shows:
Questions have slumped to levels last seen when Stack Overflow launched in 2009. Source: Stack Overflow Data Explorer (SEDE) / Marc Gravell on X
This graph was shared by Marc Gravell, a top 10 all-time contributor to Stack Overflow. Let's look closer at the data:
Decline started around 2014
A few things stand out:
- 2014: questions started to decline, which was also when Stack Overflow significantly improved moderator efficiency. From then, questions were closed faster, many more were closed, and "low quality" questions were removed more efficiently. This tallies with my memory of feeling that site moderators had gone on a power trip by closing legitimate questions. I stopped asking questions around this time because the site felt unwelcoming.
- March 2020: a big jump in traffic due to pandemic-induced lockdowns and forced remote working. Instead of asking colleagues, devs Googled and visited Stack Overflow for help
- June 2020: questions start to decline, faster than before. Even though we did not know it at the time, this was still two years before ChatGPT launched!
- June 2021: Stack Overflow sold for $1.8B to private equity investor Prosus. In hindsight, the founders - Jeff Atwood and Joel Spolsky - sold with near-perfect timing, before terminal decline.
- November 2022: as soon as ChatGPT came out, the number of questions asked declined rapidly. ChatGPT is faster and it's trained on StackOverflow data, so the quality of answers is similar. Plus, ChatGPT is polite and answers all questions, in contrast to StackOverflow moderators.
- May 2025: the number of monthly questions is as low as when Stack Overflow launched in 2009.
In January, I asked if LLMs are making Stack Overflow irrelevant. We now have an answer, and sadly, it's a "yes." The question seems to be when Stack Overflow will wind down operations or the owner sells the site for comparative pennies, not _if_ it will happen.
Even without LLMs, it's possible StackOverflow would have eventually faded into irrelevance - perhaps driven by moderation policy changes or something else that started in 2014. LLMs have certainly accelerated its fall. It's a true shame for a site that helped so many developers get "unstuck" - while successfully gamifying helping other developers on the internet in the early 2010s.
I'll certainly miss having a space on the internet to ask questions and receive help - not from an AI, but from fellow, human developers. While Stack Overflow's days are likely numbered, I'm sure we'll see spaces where developers hang out and help each other continue to be popular - whether in the form of Discord servers, WhatsApp or Telegram groups, or something else.
Update on 15 May: updated the last two paragraphs to make it a more positive outlook. I really did love StackOverflow from when it launched, and it made a big and positive difference in my professional growth in those early years - I still remember the pride of getting my first upvote on first a question, and eventually on more and more answers as well. Too bad that all good things come to an end. Thanks to Andrew for his thoughtful note.
This was one out of five topics from latest The Pulse issue. The full issue additionally covers:
- Industry pulse. Google's CEO doing customer support, coding model recommendations from Cursor, AI dev tools company valuations soar, OpenAI still a nonprofit - but with more clarity on stock, and will we get an answer to whether copyrighted materials can be used to train AI models?
- Could big job cuts at Microsoft become more regular? 6,000 people (about 3% of staff) let go at Microsoft. Based on the company's history, mass layoffs happen more than in the past. Satya Nadella is an empathetic leader, but also doesn't shy away from axing jobs.
- Google: high performers get more bonus, low performers get less. Not exactly a controversial change, but another example of the search giant becoming similar to other tech companies. Places like Uber have implemented this approach before.
- Notes on rolling out Cursor and Claude Code. A 40-person dev team at Workforce.com with a Ruby on Rails codebase started to use AI tools and agents. Results so far are pretty good: productivity gains are real if modest, and there's learnings on how to best use them from cofounder, Alex Ghiculescu.
-
@binaryninja@infosec.exchange Binary Ninja 5.0 adds support for new architectures: MIPS3 in all paid mastodon
Binary Ninja 5.0 adds support for new architectures: MIPS3 in all paid editions and C-SKY ABIv1 in Ultimate. Bonus: C-SKY ABIv1 is nearly identical to M-CORE, so Ultimate users get both with one update. https://binary.ninja/2025/04/23/5.0-gallifrey.html#new-architectures
-
Willi Ballenthin IDA Pro plugins on GitHub rss
There are so many interesting IDA Pro plugins out there, yet I have trouble discovering them, particularly outside of the annual plugin contest. So I wrote this little page to monitor GitHub for IDA Pro plugins. It updates periodically, so check back (and sort by created/pushed date) to see recent activity. Enjoy! How? A script periodically searches GitHub for `"def PLUGIN_ENTRY()" language:Python` and stores the results in a SQLite database.
-
Anton Zhiyanov Am I online? rss
Recently, I was working on an application that needed to know if it was connected to the internet. A common way to do this is to ping DNS servers like `8.8.8.8` (Google) or `1.1.1.1` (Cloudflare). However, this uses the ICMP protocol (which only checks for basic network connectivity), while I wanted to exercise the full stack used by real HTTP clients: DNS, TCP, and HTTP.
Generate 204
After some research, I found this URL that Google itself seems to use in Chrome to check for connectivity:
http://google.com/generate_204 https://google.com/generate_204
The URL returns a `204 No Content` HTTP status (a successful response without a body). It's super fast, relies only on the core Google infrastructure (so it's unlikely to fail), and supports both HTTP and HTTPS. So I went with it, and it turned out to be sufficient for my needs.
There are also `http://www.gstatic.com/generate_204` and `http://clients3.google.com/generate_204`. As far as I can tell, they are served by the same backend as the one on google.com.
Other companies provide similar URLs to check for connectivity:
- http://cp.cloudflare.com/generate_204 (Cloudflare)
- http://edge-http.microsoft.com/captiveportal/generate_204 (Microsoft)
- http://connectivity-check.ubuntu.com (Ubuntu)
- http://connect.rom.miui.com/generate_204 (Xiaomi)
200 OK
Some companies provide `200 OK` endpoints instead of `204 No Content`:
- http://spectrum.s3.amazonaws.com/kindle-wifi/wifistub.html (Amazon)
- http://captive.apple.com/hotspot-detect.html (Apple)
- http://network-test.debian.org/nm (Debian)
- http://nmcheck.gnome.org/check_network_status.txt (Gnome)
- http://www.msftncsi.com/ncsi.txt (Microsoft)
- http://detectportal.firefox.com/success.txt (Mozilla)
They are all reasonably fast and return compact responses.
Implementation
Finally, here's a simple internet connectivity check implemented in several programming languages. It uses Google's URL, but you can replace it with any of the others listed above.
Python:
```python
import datetime as dt
import http.client


def is_online(timeout: dt.timedelta = dt.timedelta(seconds=1)) -> bool:
    """Checks if there is an internet connection."""
    conn = None
    try:
        conn = http.client.HTTPConnection(
            "google.com",
            timeout=timeout.total_seconds(),
        )
        conn.request("GET", "/generate_204")
        response = conn.getresponse()
        return response.status in (200, 204)
    except Exception:
        return False
    finally:
        # Guard against conn being unset if the constructor itself raised.
        if conn is not None:
            conn.close()
```
JavaScript:
// isOnline checks if there is an internet connection.
async function isOnline(timeoutMs) {
  try {
    const url = "http://google.com/generate_204";
    const response = await fetch(url, {
      signal: AbortSignal.timeout(timeoutMs ?? 1000),
    });
    return response.status === 200 || response.status === 204;
  } catch (error) {
    return false;
  }
}
Shell:
#!/usr/bin/env sh
# Checks if there is an internet connection.
is_online() {
    url="http://google.com/generate_204"
    timeout=${1:-1}
    response=$(
        curl \
            --output /dev/null \
            --write-out "%{http_code}" \
            --max-time "$timeout" \
            --silent \
            "$url"
    )
    if [ "$response" = "200" ] || [ "$response" = "204" ]; then
        return 0
    else
        return 1
    fi
}
Go:
package main

import (
	"context"
	"net/http"
)

// isOnline checks if there is an internet connection.
func isOnline(ctx context.Context) bool {
	const url = "http://google.com/generate_204"
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return false
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK || resp.StatusCode == http.StatusNoContent
}
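The same check can also be sketched in Rust using only the standard library: a hand-rolled HTTP/1.1 GET over a TcpStream. This is a sketch, not from the original post; the `host` parameter is my addition (pass "google.com"), and a real project would more likely use an HTTP crate such as ureq or reqwest.

```rust
use std::io::{Read, Write};
use std::net::TcpStream;
use std::time::Duration;

// Checks if there is an internet connection by requesting /generate_204
// over plain HTTP and looking at the status line.
fn is_online(host: &str, timeout: Duration) -> bool {
    let attempt = || -> std::io::Result<bool> {
        let mut stream = TcpStream::connect((host, 80))?;
        stream.set_read_timeout(Some(timeout))?;
        stream.set_write_timeout(Some(timeout))?;
        let request = format!(
            "GET /generate_204 HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
        );
        stream.write_all(request.as_bytes())?;
        // Read until the server closes the connection (Connection: close).
        let mut response = String::new();
        stream.read_to_string(&mut response)?;
        Ok(response.starts_with("HTTP/1.1 204") || response.starts_with("HTTP/1.1 200"))
    };
    // Any DNS, connect, or I/O error counts as "offline".
    attempt().unwrap_or(false)
}

fn main() {
    println!("online: {}", is_online("google.com", Duration::from_secs(1)));
}
```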
Final thoughts
I'm not a big fan of Google, but I think it's nice of them to provide a publicly available endpoint to check internet connectivity. The same goes for Cloudflare and the other companies mentioned in this post.
Do you know of other similar endpoints? Let me know! @ohmypy (Twitter/X) or @antonz.org (Bluesky)
-
š @cxiao@infosec.exchange If you are attending my "Reconstructing Rust Types: A Practical Guide for mastodon
If you are attending my "Reconstructing Rust Types: A Practical Guide for Reverse Engineers" workshop at @NorthSec tomorrow, I've prepared some supplementary files for the workshop here, which you may wish to take a look at beforehand! https://github.com/cxiao/reconstructing-rust-types-workshop-northsec-2025
See you tomorrow (Thursday May 15) at 1300 EDT (UTC-4), in either the Workshop 2 track, in Salle de la Commune, or on the stream at https://www.youtube.com/watch?v=UwJgS32Q6As&list=PLuUtcRxSUZUrW9scJZqhbiuTBwZBJ-Qic&index=8 !
#rustlang #ReverseEngineering #MalwareAnalysis #NorthSec #infosec #reversing
-
š Rust Blog Announcing Rust 1.87.0 and ten years of Rust! rss
Live from the 10 Years of Rust celebration in Utrecht, Netherlands, the Rust team is happy to announce a new version of Rust, 1.87.0!
Today's release day happens to fall exactly on the 10 year anniversary of Rust 1.0!
Thank you to the myriad contributors who have worked on Rust, past and present. Here's to many more decades of Rust! š
As usual, the new version includes all the changes that have been part of the beta version in the past six weeks, following the consistent regular release cycle that we have followed since Rust 1.0.
If you have a previous version of Rust installed via rustup, you can get 1.87.0 with:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.87.0.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.87.0 stable
Anonymous pipes
1.87 adds access to anonymous pipes to the standard library. This includes integration with std::process::Command's input/output methods. For example, joining the stdout and stderr streams into one is now relatively straightforward, as shown below, while it used to require either extra threads or platform-specific functions.

use std::process::Command;
use std::io::Read;

let (mut recv, send) = std::io::pipe()?;

let mut command = Command::new("path/to/bin")
    // Both stdout and stderr will write to the same pipe, combining the two.
    .stdout(send.try_clone()?)
    .stderr(send)
    .spawn()?;

let mut output = Vec::new();
recv.read_to_end(&mut output)?;

// It's important that we read from the pipe before the process exits, to avoid
// filling the OS buffers if the program emits too much output.
assert!(command.wait()?.success());
Safe architecture intrinsics
Most std::arch intrinsics that are unsafe only due to requiring target features to be enabled are now callable in safe code that has those features enabled. For example, the following toy program which implements summing an array using manual intrinsics can now use safe code for the core loop.

#![forbid(unsafe_op_in_unsafe_fn)]
use std::arch::x86_64::*;

fn sum(slice: &[u32]) -> u32 {
    #[cfg(target_arch = "x86_64")]
    {
        if is_x86_feature_detected!("avx2") {
            // SAFETY: We have detected the feature is enabled at runtime,
            // so it's safe to call this function.
            return unsafe { sum_avx2(slice) };
        }
    }
    slice.iter().sum()
}

#[target_feature(enable = "avx2")]
#[cfg(target_arch = "x86_64")]
fn sum_avx2(slice: &[u32]) -> u32 {
    // SAFETY: __m256i and u32 have the same validity.
    let (prefix, middle, tail) = unsafe { slice.align_to::<__m256i>() };

    let mut sum = prefix.iter().sum::<u32>();
    sum += tail.iter().sum::<u32>();

    // Core loop is now fully safe code in 1.87, because the intrinsics require
    // matching target features (avx2) to the function definition.
    let mut base = _mm256_setzero_si256();
    for e in middle.iter() {
        base = _mm256_add_epi32(base, *e);
    }

    // SAFETY: __m256i and u32 have the same validity.
    let base: [u32; 8] = unsafe { std::mem::transmute(base) };
    sum += base.iter().sum::<u32>();

    sum
}
asm! jumps to Rust code

Inline assembly (asm!) can now jump to labeled blocks within Rust code. This enables more flexible low-level programming, such as implementing optimized control flow in OS kernels or interacting with hardware more efficiently.

- The asm! macro now supports a label operand, which acts as a jump target.
- The label must be a block expression with a return type of () or !.
- The block executes when jumped to, and execution continues after the asm! block.
- Using output and label operands in the same asm! invocation remains unstable.

unsafe {
    asm!(
        "jmp {}",
        label {
            println!("Jumped from asm!");
        }
    );
}
For more details, please consult the reference.
Precise capturing (+ use<...>) in impl Trait in trait definitions

This release stabilizes specifying the specific captured generic types and lifetimes in trait definitions using impl Trait return types. This allows using this feature in trait definitions, expanding on the stabilization for non-trait functions in 1.82.

Some example desugarings:

trait Foo {
    fn method<'a>(&'a self) -> impl Sized;

    // ... desugars to something like:
    type Implicit1<'a>: Sized;
    fn method_desugared<'a>(&'a self) -> Self::Implicit1<'a>;

    // ... whereas with precise capturing ...
    fn precise<'a>(&'a self) -> impl Sized + use<Self>;

    // ... desugars to something like:
    type Implicit2: Sized;
    fn precise_desugared<'a>(&'a self) -> Self::Implicit2;
}
Stabilized APIs
Vec::extract_if
vec::ExtractIf
LinkedList::extract_if
linked_list::ExtractIf
<[T]>::split_off
<[T]>::split_off_mut
<[T]>::split_off_first
<[T]>::split_off_first_mut
<[T]>::split_off_last
<[T]>::split_off_last_mut
String::extend_from_within
os_str::Display
OsString::display
OsStr::display
io::pipe
io::PipeReader
io::PipeWriter
impl From<PipeReader> for OwnedHandle
impl From<PipeWriter> for OwnedHandle
impl From<PipeReader> for Stdio
impl From<PipeWriter> for Stdio
impl From<PipeReader> for OwnedFd
impl From<PipeWriter> for OwnedFd
Box<MaybeUninit<T>>::write
impl TryFrom<Vec<u8>> for String
<*const T>::offset_from_unsigned
<*const T>::byte_offset_from_unsigned
<*mut T>::offset_from_unsigned
<*mut T>::byte_offset_from_unsigned
NonNull::offset_from_unsigned
NonNull::byte_offset_from_unsigned
<uN>::cast_signed
NonZero::<uN>::cast_signed
<iN>::cast_unsigned
NonZero::<iN>::cast_unsigned
<uN>::is_multiple_of
<uN>::unbounded_shl
<uN>::unbounded_shr
<iN>::unbounded_shl
<iN>::unbounded_shr
<iN>::midpoint
<str>::from_utf8
<str>::from_utf8_mut
<str>::from_utf8_unchecked
<str>::from_utf8_unchecked_mut
These previously stable APIs are now stable in const contexts:
core::str::from_utf8_mut
<[T]>::copy_from_slice
SocketAddr::set_ip
SocketAddr::set_port
SocketAddrV4::set_ip
SocketAddrV4::set_port
SocketAddrV6::set_ip
SocketAddrV6::set_port
SocketAddrV6::set_flowinfo
SocketAddrV6::set_scope_id
char::is_digit
char::is_whitespace
<[[T; N]]>::as_flattened
<[[T; N]]>::as_flattened_mut
String::into_bytes
String::as_str
String::capacity
String::as_bytes
String::len
String::is_empty
String::as_mut_str
String::as_mut_vec
Vec::as_ptr
Vec::as_slice
Vec::capacity
Vec::len
Vec::is_empty
Vec::as_mut_slice
Vec::as_mut_ptr
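As a quick taste, here are a few of the newly stabilized integer helpers in action. These examples are mine, not from the announcement, and need Rust 1.87.0 or later to compile.

```rust
fn main() {
    // cast_signed / cast_unsigned: bit-preserving sign reinterpretation.
    assert_eq!(u8::MAX.cast_signed(), -1i8);
    assert_eq!((-1i8).cast_unsigned(), 255u8);

    // is_multiple_of: clearer than `x % n == 0`, and it avoids the
    // division-by-zero panic.
    assert!(12u32.is_multiple_of(4));
    assert!(!12u32.is_multiple_of(5));

    // unbounded_shl: shifting past the bit width yields 0 instead of
    // panicking or wrapping the shift amount.
    assert_eq!(1u8.unbounded_shl(9), 0);
}
```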
i586-pc-windows-msvc target removal

The Tier 2 target i586-pc-windows-msvc has been removed. i586-pc-windows-msvc's difference to the much more popular Tier 1 target i686-pc-windows-msvc is that i586-pc-windows-msvc does not require SSE2 instruction support. But Windows 10, the minimum required OS version of all windows targets (except the win7 targets), requires SSE2 instructions itself.

All users currently targeting i586-pc-windows-msvc should migrate to i686-pc-windows-msvc.

You can check the Major Change Proposal for more information.
Other changes
Check out everything that changed in Rust, Cargo, and Clippy.
Contributors to 1.87.0
Many people came together to create Rust 1.87.0. We couldn't have done it without all of you. Thanks!
-
š Console.dev newsletter Makepad rss
Description: Rust cross-platform UI framework.
What we like: Build application UIs for the web, mobile, and desktop. Supports web + Windows, Linux, macOS, iOS, Android. Uses a declarative DSL. Built using custom shaders and supports 2D, 3D, VR, AR. Plenty of examples. Can be live-styled without recompiling during development.
What we dislike: API docs aren't available. Built-in UI components don't feel native on each supported platform.
-
š Console.dev newsletter Aberdeen rss
Description: Reactive JS UI framework.
What we like: Build UIs in pure TS/JS without a virtual DOM. No build steps. Includes client-side routing, optimistic UI updates, component CSS, transition effects. Zero dependencies with a small (5kb) minimized size. Interactive tutorial docs.
What we dislike: Would benefit from more examples showing how to build real applications with data fetching.
-
š Llogiq on stuff Rust Life Improvement rss
This is the companion blog post to my eponymous Rustikon talk. The video recording and slides are also available now.
As is my tradition, I started with a musical number, this time replacing the lyrics to Deep Blue Something's "Breakfast at Tiffany's", inspired by some recent discussions I got into:
You say that Rust is like a religion
the community is toxic
and you rather stay apart.
You say that C can be coded safely
that it is just a skill issue
still I know you just don't care.

R: And I say "what about mem'ry unsafety?"
You say "I think I read something about it
and I recall I think that hackers quite like it"
And I say "well that's one thing you got!"

In C you are tasked with managing mem'ry
no help from the compiler
there's so much that can go wrong?
So what now? The hackers are all over
your systems, running over
with CVEs galore.

R: And I say "what about…"

You say that Rust is a woke mind virus,
rustaceans are all queer furries
and you rather stay apart.
You say that C can be coded safely
that one just has to get gud
still I know you just don't care.
Beta Channel
I started out the talk by encouraging people who use Rustup to try the beta channel. Compared to stable, one gets six weeks of performance improvements early, and despite thirty-five point releases since 1.0.0, most of those didn't fix issues that many people actually ran into.
Even if one wants to wait until the first point release is likely to be out, the median point release appeared roughly two weeks after the point-zero release it was based on. Besides, if more people test the beta channel, its quality is also likely to improve. It's a win-win situation.
Cargo
Cargo has a number of tricks up its sleeve that not everyone knows (so if you already do, feel free to skip ahead). E.g. there are a number of shortcuts:
$ cargo b           # build
$ cargo c           # check
$ cargo d           # doc
$ cargo d --open    # opens docs in browser
$ cargo t           # test
$ cargo r           # run
$ cargo rm $CRATE   # remove
Besides, if one has Rust programs in the examples/ subfolder, one can run them using cargo r --example <name>.

I also noted that cargo can strip release binaries (but doesn't by default), if you add the following to your project's Cargo.toml:

[profile.release]
strip = true
Cargo: Configuration
Cargo not only looks for the Cargo.toml manifests, it also has its own project- or user-wide configuration:

- project-wide: .cargo/config.toml
- user-wide, UNIX-like (Linux, macOS, etc.): ~/.cargo/config.toml
- user-wide, Windows: %USERPROFILE%\.cargo\config.toml
The user configuration can be helpful to…
Add more shortcuts:
[alias]
c = "clippy"
do = "doc --open"
ex = "run --example"
rr = "run --release"
bl = "bless"
s = "semver-checks"
Have Rust compile code for the CPU in your computer (which lets the compiler use all its bells and whistles that normally are off limits in case you give that executable to a friend):
[build]
rustflags = ["-C", "target-cpu=native"]
Have Rust compile your code into a zero-install, relocatable static binary:

[build]
rustflags = ["-C", "target-feature=+crt-static"]
Use a shared target folder for all your Rust projects (this is very useful if you have multiple Rust projects with some overlap in dependencies, because build artifacts will be shared across projects, so they only need to be compiled once, conserving both compile time and disk space):

[build]
target = "/home/<user>/.cargo/target"
Configure active lints for your project(s):
[lints.rust]
missing_docs = "deny"
unsafe_code = "forbid"

[lints.clippy]
dbg_macro = "warn"
There are sections for both Rust and Clippy. Speaking of which:
Clippy
This section covers a few allow-by-default lints to try: missing_panics_doc, missing_errors_doc, missing_safety_doc.

If you have a function that looks like this:

pub unsafe fn unsafe_panicky_result(foo: Foo) -> Result<Bar, Error> {
    match unsafe { frobnicate(&foo) } {
        Foo::Amajig(bar) => Ok(bar),
        Foo::Fighters(_) => panic!("at the concert"),
        Foo::FieFoFum => Err(Error::GiantApproaching),
    }
}
The lints will require # Errors, # Panics and # Safety sections in the function docs, respectively:

/// # Errors
/// This function returns a `GiantApproaching` error on detecting giant noises
///
/// # Panics
/// The function might panic when called at a Foo Fighters concert
///
/// # Safety
/// Callers must uphold [`frobnicate`]'s invariants
There's also an unnecessary_safety_doc lint that warns on # Safety sections in docs of safe functions (which is likely a remnant of an unsafe function being made safe without removing the section from the docs, which might mislead users):

/// # Safety
///
/// This function is actually completely safe
pub fn actually_safe_fn() { todo!() }
The multiple_crate_versions lint will look at your dependencies and see if you have multiple versions of a dependency there. For example, if you have the following dependency tree:

- mycrate 0.1.0
  - rand 0.9.0
  - quickcheck 1.0.0
    - rand 0.8.0
The rand crate will be compiled twice. Of course, that's totally OK in many cases (especially if you know that your dependencies will catch up soon-ish), but if you have bigger dependencies, your compile time may suffer. Worse, you may end up with incompatible types, as a type from one version of a crate isn't necessarily compatible with the same type from another version.

So if you have long compile times, or face error messages where a type seems to be not equal to itself, this lint may help you improve things.
The non_std_lazy_statics lint will help you to update your code if you still have a dependency on lazy_static or once_cell for functionality that has been pulled into std. For example:

// old school lazy statics
lazy_static! {
    static ref FOO: Foo = Foo::new();
}

static BAR: once_cell::sync::Lazy<Foo> = once_cell::sync::Lazy::new(Foo::new);

// now in the standard library
static BAZ: std::sync::LazyLock<Foo> = std::sync::LazyLock::new(Foo::new);
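Here is a runnable sketch of the std::sync::LazyLock replacement; the static's type and values are invented for illustration.

```rust
use std::sync::LazyLock;

// Initialized at most once, on first access; later accesses reuse the value.
static SQUARES: LazyLock<Vec<u64>> =
    LazyLock::new(|| (0..10u64).map(|i| i * i).collect());

fn main() {
    assert_eq!(SQUARES[3], 9);
    assert_eq!(SQUARES.len(), 10);
}
```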
The ref_option and ref_option_ref lints should help you avoid using references to options as function arguments. Since an Option<&T> is the same size as an &Option<T>, it's a good idea to use the former to avoid the double reference.

fn foo(opt_bar: &Option<Bar>) { todo!() }
fn bar(foo: &Foo) -> &Option<&Bar> { todo!() }

// use instead
fn foo(opt_bar: Option<&Bar>) { todo!() }
fn bar(foo: &Foo) -> Option<&Bar> { todo!() }
The same_name_method lint helps you avoid ambiguities which would later require a turbofish to resolve.

struct I;

impl I {
    fn into_iter(self) -> Iter { Iter }
}

impl IntoIterator for I {
    fn into_iter(self) -> Iter { Iter }
    // ...
}
The fn_params_excessive_bools lint will warn if you use multiple bools as arguments in your methods. Those can easily be confused, leading to logic errors.

fn frobnicate(is_foo: bool, is_bar: bool) { ... }

// use types to avoid confusion
enum Fooish {
    Foo,
    NotFoo,
}
Clippy looks for a clippy.toml configuration file you may want to use in your project:

# for non-library or unstable API projects
avoid-breaking-exported-api = false

# let's allow even fewer bools
max-fn-params-bools = 2

# allow certain things in tests
# (if you activate the restriction lints)
allow-dbg-in-tests = true
allow-expect-in-tests = true
allow-indexing-slicing-in-tests = true
allow-panic-in-tests = true
allow-unwrap-in-tests = true
allow-print-in-tests = true
allow-useless-vec-in-tests = true
Cargo-Semver-Checks
If you have a library, please use cargo semver-checks before cutting a release.
$ cargo semver-checks
    Building optional v0.5.0 (current)
       Built [   1.586s] (current)
     Parsing optional v0.5.0 (current)
      Parsed [   0.004s] (current)
    Building optional v0.5.0 (baseline)
       Built [   0.306s] (baseline)
     Parsing optional v0.5.0 (baseline)
      Parsed [   0.003s] (baseline)
    Checking optional v0.5.0 -> v0.5.0 (no change)
     Checked [   0.005s] 148 checks: 148 pass, 0 skip
     Summary no semver update required
    Finished [  10.641s] optional
Your users will be glad you did.
Cargo test
First, doctests are fast now (apart from compile_fail ones), so if you avoided them to keep your turnaround time low, you may want to reconsider.

Also, if you have a binary crate, you can still use #[test] by converting your crate to a mixed crate. Put this in your Cargo.toml:

[lib]
name = "my_lib"
path = "src/lib.rs"

[[bin]]
name = "my_bin"
path = "src/main.rs"
Now you can test all items you have in lib.rs and any and all modules reachable from there.

Insta
Insta is a crate to do snapshot tests. That means it will use the debug representation or a serialization in JSON or YAML to create a "snapshot" once, then complain if the snapshot has changed. This removes the need to come up with known good values, since your tests will create them for you.
#[test]
fn snapshot_test() {
    insta::assert_debug_snapshot!(my_function());
}
Insta has a few tricks up its sleeve to deal with uncertainty arising from nondeterminism. You can redact the output to e.g. mask randomly chosen IDs:
#[test]
fn redacted_snapshot_test() {
    insta::assert_json_snapshot!(
        my_function(),
        { ".id" => "[id]" }
    );
}
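Anticipating the function-based replacement described in the following paragraph, here is a standalone sketch of an ID-normalizing function (the names are invented and the insta wiring is omitted):

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// Maps each distinct random ID to a stable sequence number, so equal IDs
// stay equal across a snapshot while their randomness is erased.
fn normalize_id(seen: &Mutex<HashMap<String, usize>>, id: &str) -> String {
    let mut seen = seen.lock().unwrap();
    let next = seen.len();
    let n = *seen.entry(id.to_string()).or_insert(next);
    format!("[id-{n}]")
}

fn main() {
    let seen = Mutex::new(HashMap::new());
    assert_eq!(normalize_id(&seen, "a81f-3c"), "[id-0]");
    assert_eq!(normalize_id(&seen, "99d0-17"), "[id-1]");
    // The same random ID keeps its sequence number.
    assert_eq!(normalize_id(&seen, "a81f-3c"), "[id-0]");
}
```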
The replacement can also be a function. I have used this with a Mutex<HashMap<..>> in the past to replace random IDs with sequence numbers, to ensure that equal IDs stay equal while ignoring their randomness.

Cargo Mutants
Mutation testing is a cool technique where you change your code to check your tests. It will apply certain changes (for example replacing a + with a - or returning a default value instead of the function result) to your code and see if tests fail. Those changes are called mutations (or sometimes mutants), and if they don't fail any tests, they are deemed "alive".

I wrote a bit about that technique in the past and even wrote a tool to do that as a proc macro. Unfortunately, it used specialization and as such was nightly-only, so nowadays I recommend cargo-mutants. A typical run might look like this:

$ cargo mutants
Found 309 mutants to test
ok       Unmutated baseline in 3.0s build + 2.1s test
 INFO Auto-set test timeout to 20s
MISSED   src/lib.rs:1448:9: replace <impl Deserialize for Optioned<T>>::deserialize -> Result<Optioned<T>, D::Error> with Ok(Optioned::from_iter([Default::default()])) in 0.3s build + 2.1s test
MISSED   src/lib.rs:1425:9: replace <impl Hash for Optioned<T>>::hash with () in 0.3s build + 2.1s test
MISSED   src/lib.rs:1202:9: replace <impl OptEq for u64>::opt_eq -> bool with false in 0.3s build + 2.1s test
TIMEOUT  src/lib.rs:972:9: replace <impl From for Option<bool>>::from -> Option<bool> with Some(false) in 0.4s build + 20.0s test
MISSED   src/lib.rs:1139:9: replace <impl Noned for isize>::get_none -> isize with 0 in 0.4s build + 2.3s test
MISSED   src/lib.rs:1228:14: replace == with != in <impl OptEq for i64>::opt_eq in 0.3s build + 2.1s test
MISSED   src/lib.rs:1218:9: replace <impl OptEq for i16>::opt_eq -> bool with false in 0.3s build + 2.1s test
MISSED   src/lib.rs:1248:9: replace <impl OptEq for f64>::opt_eq -> bool with true in 0.4s build + 2.1s test
MISSED   src/lib.rs:1239:9: replace <impl OptEq for f32>::opt_eq -> bool with false in 0.4s build + 2.1s test
...
309 mutants tested in 9m 26s: 69 missed, 122 caught, 112 unviable, 6 timeouts
Unlike code coverage, mutation testing not only finds which code is run by your tests, but which code is actually tested against changes - at least as far as they can be automatically applied.
Mutation testing can also tell you which tests cover which possible mutations, so you can sometimes remove redundant tests, making your test suite leaner and faster.
rust-analyzer
I just gave a few settings that may improve your experience:
# need to install the rust-src component with rustup
rust-analyzer.rustc.source = "discover"

# on auto-import, prefer importing from `prelude`
rust-analyzer.imports.preferPrelude = true

# don't look at references from tests
rust-analyzer.references.excludeTests = true
cargo sweep
If you are like me, you can get a very large target/ folder. cargo sweep will remove outdated build artifacts:

$ cargo sweep --time 14   # remove build artifacts older than 2 weeks
$ cargo sweep --installed # remove build artifacts from old rustcs
Pro tip: add a cron job (for example every Friday at 10 AM):
0 10 * * fri sh -c "rustup update && cargo sweep --installed"
cargo wizard
This is a subcommand that will give you a TUI to configure your project, giving you a suitable Cargo.toml etc.
cargo pgo
Profile-guided optimization is a great technique to eke out that last bit of performance without needing any code changes. I didn't go into detail on it because Aliaksandr Zaitsau did a whole talk on it and I wanted to avoid the duplication.
cargo component
This tool will allow you to run your code locally under a WASM runtime.
- Run your code in wasm32-wasip1 (or later)
- The typical subcommands (test, run, etc.) work as usual
- Can use a target runner:

[target.wasm32-wasip1]
runner = ["wasmtime", "--dir=."]
The argument is used to allow accessing the current directory (because by default the runtime will disallow all file access). You can of course also use different directories there.
bacon
compiles and runs tests on changes
great to have in a sidebar terminal
'nuff said.
Language: Pattern matching
Rust patterns are super powerful. You can
- destructure tuples and slices
- match integer and char ranges
- or-combine patterns with the pipe symbol, even within other patterns (note that the bindings need to have the same types). You can even use a pipe at the start of your pattern to get a nice vertical line in your code (see below)
- use guard clauses within patterns (pattern if guard(pattern) => arm)

match (foo, bar) {
    (1, [a, b, ..]) => todo!(),
    (2..=4, x) if predicate(x) => frobnicate(x),
    (5..8, _) => todo!(),
    _ => (),
}
if let Some(1 | 23) | None = x { todo!() }
match foo {
    | Foo::Bar
    | Foo::Baz(Baz::Blorp | Baz::Blapp)
    | Foo::Boing(_)
    | Foo::Blammo(..) => todo!(),
    _ => (),
}
matches!(foo, Foo::Bar)
Also, patterns may appear in surprising places: arguments in function signatures are patterns, too, and so are closure arguments:

fn frobnicate(Bar { baz, blorp }: Bar) {
    let closure = |Blorp(flip, flop)| blorp(flip, flop);
}
What's more, patterns can be used in let and in plain assignments:

let (a, mut b, mut x) = (c, d, z);
let Some((e, f)) = foo else {
    return;
};
(b, x) = (e, f);
As you can see, with plain let and assignment, you need an irrefutable pattern (one that must always match by definition); otherwise you can use let-else.
.Language: Annotations
Use #[expect(..)] instead of #[allow(..)], because it will warn if the code in question is no longer linted (either because the code or clippy changed), whereas the #[allow(..)] would just linger.

#[expect(clippy::collapsible_if)]
fn foo(b: bool, c: u8) {
    if b {
        if c < 25 {
            todo!();
        }
    }
}
Add #[must_use] judiciously on library APIs to help your users avoid mistakes. There's even a pedantic clippy::must_use_candidates lint that you can auto-apply to help you do it. You can also annotate types that should always be used when returned from functions.

#[must_use]
fn we_care_for_the_result() -> Foo { todo!() }

#[must_use]
enum MyResult<T> {
    Ok(T),
    Err(crate::Error),
    SomethingElse,
}

we_care_for_the_result(); // Err: unused_must_use
returns_my_result(); // Err: unused_must_use
Traits sometimes need special handling. Tell your users what to do:

#[diagnostic::on_unimplemented(
    message = "Don't `impl Fooable<{T}>` directly, `#[derive(Bar)]` on `{Self}` instead",
    label = "This is the {Self}",
    note = "additional context"
)]
trait Fooable<T> { .. }
Sometimes, you want internals to stay out of the compiler's error messages:

#[diagnostic::do_not_recommend]
impl Fooable for FooInner { .. }
Library: Box::leak

For &'static, once-initialized things that don't need to be dropped:

let config: &'static Configuration = Box::leak(create_config());
main_entry_point(config);
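Here is a minimal runnable version of this pattern, with a stand-in Configuration type (the names and fields are illustrative):

```rust
struct Configuration {
    verbose: bool,
}

fn create_config() -> Box<Configuration> {
    Box::new(Configuration { verbose: true })
}

fn main() {
    // Box::leak gives up ownership and hands back a &'static mut,
    // which coerces to &'static here. The allocation is never freed,
    // which is fine for a once-created, program-lifetime config.
    let config: &'static Configuration = Box::leak(create_config());
    assert!(config.verbose);
}
```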
The End?
That's all I could fit in my talk, so thanks for reading this far.
-
š Baby Steps Rust turns 10 rss
Today is the 10th anniversary of Rust's 1.0 release. Pretty wild. As part of RustWeek there was a fantastic celebration and I had the honor of giving some remarks, both as a long-time project member but also as representing Amazon as a sponsor. I decided to post those remarks here on the blog.
"It's really quite amazing to see how far Rust has come. If I can take a moment to put on my sponsor hat, I've been at Amazon since 2021 now and I have to say, it's been really cool to see the impact that Rust is having there up close and personal.
"At this point, if you use an AWS service, you are almost certainly using something built in Rust. And how many of you watch videos on PrimeVideo? You're watching videos on a Rust client, compiled to WebAssembly, and shipped to your device.
"And of course it's not just Amazon, it seems like all the time I'm finding out about this or that surprising place that Rust is being used. Just yesterday I really enjoyed hearing about how Rust was being used to build out the software for tabulating votes in the Netherlands elections. Love it.
"On Tuesday, Matthias Endler and I did this live podcast recording. He asked me a question that has been rattling in my brain ever since, which was, 'What was it like to work with Graydon?'
"For those who don't know, Graydon Hoare is of course Rust's legendary founder. He was also the creator of Monotone, which, along with systems like Git and Mercurial, was one of the crop of distributed source control systems that flowered in the early 2000s. So defintely someone who has had an impact over the years.
"Anyway, I was thinking that, of all the things Graydon did, by far the most impactful one is that he articulated the right visions. And really, that's the most important thing you can ask of a leader, that they set the right north star. For Rust, of course, I mean first and foremost the goal of creating 'a systems programming language that won't eat your laundry'.
"The specifics of Rust have changed a LOT over the years, but the GOAL has stayed exactly the same. We wanted to replicate that productive, awesome feeling you get when using a language like Ocaml - but be able to build things like web browsers and kernels. 'Yes, we can have nice things', is how I often think of it. I like that saying also because I think it captures something else about Rust, which is trying to defy the 'common wisdom' about what the tradeoffs have to be.
"But there's another North Star that I'm grateful to Graydon for. From the beginning, he recognized the importance of building the right culture around the language, one committed to 'providing a friendly, safe and welcoming environment for all, regardless of level of experience, gender identity and expression, disability, nationality, or other similar characteristic', one where being 'kind and courteous' was prioritized, and one that recognized 'there is seldom a right answer' - that 'people have differences of opinion' and that 'every design or implementation choice carries a trade-off'.
"Some of you will probably have recognized that all of these phrases are taken straight from Rust's Code of Conduct which, to my knowledge, was written by Graydon. I've always liked it because it covers not only treating people in a respectful way - something which really ought to be table stakes for any group, in my opinion - but also things more specific to a software project, like the recognition of design trade-offs.
"Anyway, so thanks Graydon, for giving Rust a solid set of north stars to live up to. Not to mention for the
fn
keyword. Raise your glass!"For myself, a big part of what drew me to Rust was the chance to work in a truly open-source fashion. I had done a bit of open source contribution - I wrote an extension to the ASM bytecode library, I worked some on PyPy, a really cool Python compiler - and I loved that feeling of collaboration.
"I think at this point I've come to see both the pros and cons of open source - and I can say for certain that Rust would never be the language it is if it had been built in a closed source fashion. Our North Star may not have changed but oh my gosh the path we took to get there has changed a LOT. So many of the great ideas in Rust came not from the core team but from users hitting limits, or from one-off suggestions on IRC or Discord or Zulip or whatever chat forum we were using at that particular time.
"I wanted to sit down and try to cite a bunch of examples of influential people but I quickly found the list was getting ridiculously long - do we go all the way back, like the way Brian Anderson built out the
#[test]
infrastructure as a kind of quick hack, but one that lasts to this day? Do we cite folks like Sophia Turner and Esteban Kuber's work on error messages? Or do we look at the many people stretching the definition of what Rust is today ⦠the reality is, once you start, you just can't stop."So instead I want to share what I consider to be an amusing story, one that is very Rust somehow. Some of you may have heard that in 2024 the ACM, the major academic organization for computer science, awarded their SIGPLAN Software Award to Rust. A big honor, to be sure. But it caused us a bit of a problem - what names should be on there? One of the organizers emailed me, Graydon, and a few other long-time contributors to ask us our opinion. And what do you think happened? Of course, we couldn't decide. We kept coming up with different sets of people, some of them absurdly large - like thousands of names - others absurdly short, like none at all. Eventually we kicked it over to the Rust Leadership Council to decide. Thankfully they came up with a decent list somehow.
"In any case, I just felt that was the most Rust of all problems: having great success but not being able to decide who should take credit. The reality is there is no perfect list - every single person who got named on that award richly deserves it, but so do a bunch of people who aren't on the list. That's why the list ends with All Rust Contributors, Past and Present - and so a big shout out to everyone involved, covering the compiler, the tooling, cargo, rustfmt, clippy, core libraries, and of course organizational work. On that note, hats off to Mara, Eric Jonkers, and the RustNL team that put on this great event. You all are what makes Rust what it is.
"Speaking for myself, I think Rust's penchant to re-imagine itself, while staying true to that original north star, is the thing I love the most. 'Stability without stagnation' is our most important value. The way I see it, as soon as a language stops evolving, it starts to die. Myself, I look forward to Rust getting to a ripe old age, interoperating with its newer siblings and its older aunts and uncles, part of the 'cool kids club' of widely used programming languages for years to come. And hey, maybe we'll be the cool older relative some day, the one who works in a bank but, when you talk to them, you find out they were a rock-and-roll star back in the day.
"But I get ahead of myself. Before Rust can get there, I still think we've some work to do. And on that note I want to say one other thing - for those of us who work on Rust itself, we spend a lot of time looking at the things that are wrong - the bugs that haven't been fixed, the parts of Rust that feel unergonomic and awkward, the RFC threads that seem to just keep going and going, whatever it is. Sometimes it feels like that's ALL Rust is - a stream of problems and things not working right.
"I've found there's really only one antidote, which is getting out and talking to Rust users - and conferences are one of the best ways to do that. That's when you realize that Rust really is something special. So I do want to take a moment to thank all of you Rust users who are here today. It's really awesome to see the things you all are building with Rust and to remember that, in the end, this is what it's all about: empowering people to build, and rebuild, the foundational software we use every day. Or just to 'hack without fear', as Felix Klock legendarily put it.
"So yeah, to hacking!"
-
- May 14, 2025
-
š pixelspark/sushitrain v1.10.39 release
No content.
-
š @trailofbits@infosec.exchange Passkeys are the most important security technology of the past 10 years. You mastodon
Passkeys are the most important security technology of the past 10 years. You should know how they work.
Read the blog:
https://blog.trailofbits.com/2025/05/14/the-cryptography-behind-passkeys/
-
š @HexRaysSA@infosec.exchange We're looking forward to mastodon
We're looking forward to @offensive_con this week! We're proud to be partnering with @blackhoodie to bring a 1-day free training for women, by women.
If you're there, we'd love to meet up! Catch us in the hallway for a casual chat, or book something more formal via the link below.
Either way, we want to hear from you - tell us everything you love about IDA, and all the things you don't, and new features you'd like to have.
Book a meeting: https://meetings-eu1.hubspot.com/justine-benjamin/offensivecon-2025
-
š matklad Scalar Select Anti-Pattern rss
Scalar Select Anti-Pattern May 14, 2025
I've written a number of stateful services starting with an event loop at the core:
    async for event in events_incoming:
        await process(event)
I had to refactor this loop later, every time. For an example, see the direct cause of this article, this TigerBeetle PR. Let me write the refactoring down, so that I get it right from the get-go next time!
[Scalar Select](https://matklad.github.io/2025/05/14/scalar-select-aniti-pattern.html#Scalar-Select)
Let's say I am implementing an LSP server for some programming language. There are going to be three main sources of events:
- file modifications from user typing code in or git operations,
- requests from the client ("give me completions!"),
- output from compilation jobs running in the background with error messages and such.
The "obvious" event loop setup for this case looks like this:
    events_incoming: Stream[Event] = merge(
        events_file_system,
        events_compiler,
        events_lsp,
    )
    async for event in events_incoming:
        await process(event)
Here, `merge` is an operator that takes a number of event streams and merges them into one. This is a `loop { select! { ... } }` written with higher-order functions.
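Since `merge` isn't a Python builtin, here is a minimal asyncio sketch of what such an operator could look like, assuming the inputs are plain async iterators (the pump-into-a-shared-queue design and the sentinel are illustrative choices, not the post's implementation):

```python
import asyncio
from typing import AsyncIterator, TypeVar

T = TypeVar("T")

async def merge(*streams: AsyncIterator[T]) -> AsyncIterator[T]:
    # Pump every input stream into one shared queue; one sentinel per
    # stream tells us when all inputs are exhausted.
    queue: asyncio.Queue = asyncio.Queue()
    DONE = object()

    async def pump(stream: AsyncIterator[T]) -> None:
        async for item in stream:
            await queue.put(item)
        await queue.put(DONE)

    tasks = [asyncio.create_task(pump(s)) for s in streams]
    remaining = len(tasks)
    while remaining:
        item = await queue.get()
        if item is DONE:
            remaining -= 1
        else:
            yield item
```

Whichever stream produces next wins, which is exactly the "arbitrary pick" behaviour the rest of the post is about.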
[Key Observation](https://matklad.github.io/2025/05/14/scalar-select-aniti-pattern.html#Key-Observation)
Crucially, event streams are external to the process and are driven by the outside IO. You don't really know or have control over when the user is typing!
And `process(event)` takes time. This means that when we've finished processing the current event, there might be several events "ready", already sitting in the address space of our process. Our "scalar select" will pick an arbitrary one, and that might create a lot of overhead.
[Implications](https://matklad.github.io/2025/05/14/scalar-select-aniti-pattern.html#Implications)
Here are some specific optimizations you can apply if you don't ignore the fact that multiple events are available at the same time after the delay induced by processing the previous event.
Prioritization
The most obvious one: we can pick the order in which to process events. For the LSP example, if you have a code completion request and a file modification request, you want to process the file modification first. The rule-of-thumb priority is writes over reads over accepts (of new clients).
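Once events arrive in batches, prioritization is just a stable sort over the batch. A toy sketch, where the `Kind` enum and the `(kind, payload)` tuple shape are made-up illustrations of the writes-over-reads-over-accepts rule:

```python
from enum import IntEnum

# Hypothetical event kinds for the LSP example; lower value means
# higher priority, following writes > reads > accepts.
class Kind(IntEnum):
    WRITE = 0   # file modifications
    READ = 1    # client requests, e.g. completions
    ACCEPT = 2  # new client connections

def prioritize(batch):
    # sorted() is stable, so arrival order is preserved within a kind.
    return sorted(batch, key=lambda event: event[0])
```

A completion request that arrived before an edit still gets processed after it, which is the point.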
Selective Backpressure
As an extreme form of prioritization, you can decide to not process a particular kind of request at all, exerting backpressure against a specific input, while allowing other inputs to proceed.
Elimination
Often, a request can be dropped completely depending on subsequent events. For example, if there's a file modification that completely replaces the file's text, all preceding changes to that file can be safely dropped.
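A toy sketch of this elimination rule, with made-up event shapes (a tuple starting with `"replace"` or `"edit"`, neither of which is from the post):

```python
# Hypothetical event shapes, for illustration only:
#   ("replace", path, text)      -- replaces the whole file's contents
#   ("edit", path, offset, text) -- an incremental insertion
def eliminate(batch):
    out = []
    for event in batch:
        if event[0] == "replace":
            # A full replacement makes every earlier pending change to
            # the same file irrelevant, so drop them before appending.
            out = [e for e in out if e[1] != event[1]]
        out.append(event)
    return out
```

Note this only works because the batch exposes the subsequent events; a scalar select would have already paid for processing the doomed edits.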
Coalescing
Finally, even if it is not possible to completely eliminate the request, often it is more efficient to process several requests at the same time. For example, if we have two incremental file modification events (like "insert `'hello'` at offset `92`"), it makes sense to union them into one large change first. An LSP server will kick off a job to compute diagnostics after applying modifications. But if we have two modifications, we want to apply them both before starting a single diagnostics job.
[Data Oriented All The Things](https://matklad.github.io/2025/05/14/scalar-select-aniti-pattern.html#Data-Oriented-All-The-Things)
Once you see the problem (the hard part), the solution is as expected: always be batching, forget the singulars, push the `for`s down, become multitude!
In other words, we want to change our scalar select that gives us a single event at a time into a batched one, which gives all the events already received. Under low load, we'll be getting singleton batches. But as the load increases, the batch size will grow, increasing our load sublinearly!
So, something like this:
    events_incoming: Stream[Event] = merge(
        events_file_system,
        events_compiler,
        events_lsp,
    )
    events_incoming_batched: Stream[List[Event]] =
        batch_stream(events_incoming)
    async for event_batch in events_incoming_batched:
        batch: List[Event] = coalesce(event_batch)
        for event in batch:
            await process(event)
The secret sauce is the `batch_stream` function, which waits until at least one event is available, but pulls all available events:
    async def batch_stream(
        stream: Stream[T],
    ) -> Stream[List[T]]:
        while True:
            event: T = await stream.next()
            batch: List[T] = [event]
            while event := stream.next_non_blocking():
                batch.append(event)
            yield batch
Always be batching when messag*es* passing!
-