- June 26, 2025
-
@HexRaysSA@infosec.exchange Power users are taking IDA headless with idalib. mastodon
Power users are taking IDA headless with idalib.
Think large-scale diffing, custom UIs, and CI pipelines... all without launching a GUI. Get inspired: https://hex-rays.com/blog/4-powerful-applications-of-idalib-headless-ida-in-action
-
News Minimalist Scientists begin creating synthetic human DNA + 7 more stories rss
In the last 2 days ChatGPT read 22477 top news stories. After removing previously covered events, there are 8 articles with a significance score over 5.9.
[6.7] Scientists begin creating synthetic human DNA — bbc.com (+3)
Scientists have launched the Synthetic Human Genome Project to artificially create human DNA. This controversial project is funded by an initial £10m grant from the Wellcome Trust.
The project aims to build sections of human DNA from scratch to better understand genes and develop treatments for diseases. Researchers hope to create disease-resistant cells for damaged organs.
Concerns exist that the technology could be misused to create designer babies or for warfare.
[6.1] Judge says Meta's AI training on books didn't violate copyrights — wired.com (+34)
A federal judge ruled Wednesday that Meta did not infringe on copyright by training its AI models on 13 authors' books, a significant win for the tech giant.
The judge granted Meta summary judgment, stating the authors failed to prove Meta's use of their books caused financial harm, a key element in copyright infringement cases. This decision follows a similar ruling earlier this week regarding Anthropic's AI training practices.
While a victory for Meta, the ruling is limited to the specific case and doesn't guarantee future legality; other authors could still sue. The case was one of the first of its kind, with many similar AI copyright lawsuits pending.
Highly covered news with significance over 5.5
[5.9] Webb telescope directly images exoplanet for the first time — theguardian.com (+19)
[5.7] Chinese satellite internet reaches 1Gbps, five times faster than Starlink — techradar.com (+2)
[5.7] Iran's parliament approves bill to suspend cooperation with IAEA — theguardian.com (+5)
[5.5] Google rolls out new Gemini model that can run on robots locally — techcrunch.com (+5)
[5.5] Study reveals 90% drop in heart attack deaths over 50 years — med.stanford.edu (+6)
[5.5] Baltic overfishing changed cod genetics, shrinking their size — phys.org (+2)
Thanks for reading!
— Vadim
You can set up and personalize your own newsletter like this with premium.
-
organicmaps/organicmaps 2025.06.26-3-android release
• New OSM data as of June 8
• Fixed OSM login
• Save planned routes as tracks
• Display plant nurseries, traffic barriers, studios, firepits, ladders, cranes, and love hotels
• Qingdao metro icons
• Show azimuth when tapping on a small arrow with distance
• Fixed crashes and white-on-white text on Android 5&6
• Fixed search for Lithuanian and other languages
• Fixed location arrow jumps when navigation is active
• Fixed Cancel Download button
• Improved translations… more at omaps.org/news
See the detailed announcement on our website when app updates are published in all stores.
You can get automatic app updates from GitHub using Obtainium.

sha256sum:
e2cb9d1818cfe21ad5b733936352db504df19f1e01cb3302ed7f4a37016bff8c OrganicMaps-25062603-web-release.apk
-
Anton Zhiyanov Go 1.25 interactive tour rss
Go 1.25 is scheduled for release in August, so it's a good time to explore what's new. The official release notes are pretty dry, so I prepared an interactive version with lots of examples showing what has changed and what the new behavior is.
Read on and see!
synctest • json/v2 • GOMAXPROCS • New GC • Anti-CSRF • WaitGroup.Go • FlightRecorder • os.Root • reflect.TypeAssert • T.Attr • slog.GroupAttrs • hash.Cloner
This article is based on the official release notes from The Go Authors, licensed under the BSD-3-Clause license. This is not an exhaustive list; see the official release notes for that.
I provide links to the proposals and commits for the features described. Check them out for motivation and implementation details.
Error handling is often skipped to keep things simple. Don't do this in production.
# Synthetic time for testing
Suppose we have a function that waits for a value from a channel for one minute, then times out:
```go
// Read reads a value from the input channel and returns it.
// Times out after 60 seconds.
func Read(in chan int) (int, error) {
    select {
    case v := <-in:
        return v, nil
    case <-time.After(60 * time.Second):
        return 0, errors.New("timeout")
    }
}
```
We use it like this:
```go
func main() {
    ch := make(chan int)
    go func() { ch <- 42 }()
    val, err := Read(ch)
    fmt.Printf("val=%v, err=%v\n", val, err)
}
```
val=42, err=<nil>
How do we test the timeout situation? Surely we don't want the test to actually wait 60 seconds. We could make the timeout a parameter (we probably should), but let's say we prefer not to.
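For the record, the parameterized variant would look something like this (my sketch, not from the article; `ReadTimeout` is a hypothetical name):

```go
// ReadTimeout reads a value from the input channel,
// timing out after the given duration.
func ReadTimeout(in chan int, timeout time.Duration) (int, error) {
    select {
    case v := <-in:
        return v, nil
    case <-time.After(timeout):
        return 0, errors.New("timeout")
    }
}
```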
The new `synctest` package to the rescue! The `synctest.Test` function executes an isolated "bubble". Within the bubble, `time` package functions use a fake clock, allowing our test to pass instantly:

```go
func TestReadTimeout(t *testing.T) {
    synctest.Test(t, func(t *testing.T) {
        ch := make(chan int)
        _, err := Read(ch)
        if err == nil {
            t.Fatal("expected timeout error, got nil")
        }
    })
}
```
PASS
The initial time in the bubble is midnight UTC 2000-01-01. Time advances when every goroutine in the bubble is blocked. In our example, when the only goroutine is blocked on `select` in `Read`, the bubble's clock advances 60 seconds, triggering the timeout case.

Keep in mind that the `t` passed to the inner function isn't exactly the usual `testing.T`. In particular, you should never call `T.Run`, `T.Parallel`, or `T.Deadline` on it:

```go
func TestSubtest(t *testing.T) {
    synctest.Test(t, func(t *testing.T) {
        t.Run("subtest", func(t *testing.T) {
            t.Log("ok")
        })
    })
}
```
panic: testing: t.Run called inside synctest bubble [recovered, repanicked]
So, no inner tests inside the bubble.
Another useful function is `synctest.Wait`. It waits for all goroutines in the bubble to block, then resumes execution:

```go
func TestWait(t *testing.T) {
    synctest.Test(t, func(t *testing.T) {
        var innerStarted bool
        done := make(chan struct{})

        go func() {
            innerStarted = true
            time.Sleep(time.Second)
            close(done)
        }()

        // Wait for the inner goroutine to block on time.Sleep.
        synctest.Wait()
        // innerStarted is guaranteed to be true here.
        fmt.Printf("inner started: %v\n", innerStarted)

        <-done
    })
}
```
inner started: true
Try commenting out the `Wait()` call and see how the `innerStarted` value changes.

The `testing/synctest` package was first introduced as experimental in version 1.24. It's now considered stable and ready to use. Note that the `Run` function, which was added in 1.24, is now deprecated. You should use `Test` instead.

Proposals: 67434, 73567 • Commits: 629735, 629856, 671961
# JSON v2
The second version of the `json` package is a big update, and it has a lot of breaking changes. That's why I wrote a separate post with a detailed review of what's changed and lots of interactive examples.

Here, I'll just show one of the most impressive features.
With `json/v2` you're no longer limited to just one way of marshaling a specific type. Now you can use custom marshalers and unmarshalers whenever you need, with the generic `MarshalToFunc` and `UnmarshalFromFunc` functions.

For example, we can marshal boolean values (`true`/`false`) and "boolean-like" strings (`on`/`off`) to ✅ or ❌ — all without creating a single custom type!

First we define a custom marshaler for bool values:

```go
// Marshals boolean values to ✅ or ❌.
boolMarshaler := json.MarshalToFunc(
    func(enc *jsontext.Encoder, val bool) error {
        if val {
            return enc.WriteToken(jsontext.String("✅"))
        }
        return enc.WriteToken(jsontext.String("❌"))
    },
)
```
Then we define a custom marshaler for bool-like strings:
```go
// Marshals boolean-like strings to ✅ or ❌.
strMarshaler := json.MarshalToFunc(
    func(enc *jsontext.Encoder, val string) error {
        if val == "on" || val == "true" {
            return enc.WriteToken(jsontext.String("✅"))
        }
        if val == "off" || val == "false" {
            return enc.WriteToken(jsontext.String("❌"))
        }
        // SkipFunc is a special type of error that tells Go to skip
        // the current marshaler and move on to the next one. In our case,
        // the next one will be the default marshaler for strings.
        return json.SkipFunc
    },
)
```
Finally, we combine marshalers with `JoinMarshalers` and pass them to the marshaling function using the `WithMarshalers` option:

```go
// Combine custom marshalers with JoinMarshalers.
marshalers := json.JoinMarshalers(boolMarshaler, strMarshaler)

// Marshal some values.
vals := []any{true, "off", "hello"}
data, err := json.Marshal(vals, json.WithMarshalers(marshalers))
fmt.Println(string(data), err)
```
["✅","❌","hello"] <nil>
Isn't that cool?
There are plenty of other goodies, like support for I/O readers and writers, nested objects inlining, a plethora of options, and a huge performance boost. So, again, I encourage you to check out the post dedicated to v2 changes.
# Container-aware GOMAXPROCS
The `GOMAXPROCS` runtime setting controls the maximum number of operating system threads the Go scheduler can use to execute goroutines concurrently. Since Go 1.5, it defaults to the value of `runtime.NumCPU`, which is the number of logical CPUs on the machine (strictly speaking, this is either the total number of logical CPUs or the number allowed by the CPU affinity mask, whichever is lower).

For example, on my 8-core laptop, the default value of `GOMAXPROCS` is also 8:

```go
maxProcs := runtime.GOMAXPROCS(0) // returns the current value
fmt.Println("NumCPU:", runtime.NumCPU())
fmt.Println("GOMAXPROCS:", maxProcs)
```
NumCPU: 8
GOMAXPROCS: 8
Go programs often run in containers, like those managed by Docker or Kubernetes. These systems let you limit the CPU resources for a container using a Linux feature called cgroups.
A cgroup (control group) in Linux lets you group processes together and control how much CPU, memory, and network I/O they can use by setting limits and priorities.
For example, here's how you can limit a Docker container to use only four CPUs:
```sh
docker run --cpus=4 golang:1.24-alpine go run /app/nproc.go
```
Before version 1.25, the Go runtime didn't consider the CPU quota when setting the `GOMAXPROCS` value. No matter how you limited CPU resources, `GOMAXPROCS` was always set to the number of logical CPUs on the host machine:

```sh
docker run --cpus=4 golang:1.24-alpine go run /app/nproc.go
```
NumCPU: 8
GOMAXPROCS: 8
Now the Go runtime respects the CPU quota:

```sh
docker run --cpus=4 golang:1.25rc1-alpine go run /app/nproc.go
```
NumCPU: 8
GOMAXPROCS: 4
The default value of `GOMAXPROCS` is now set to either the number of logical CPUs or the CPU limit enforced by cgroup settings for the process, whichever is lower.

Fractional CPU limits are rounded up:

```sh
docker run --cpus=2.3 golang:1.25rc1-alpine go run /app/nproc.go
```
NumCPU: 8
GOMAXPROCS: 3
On a machine with multiple CPUs, the minimum default value for `GOMAXPROCS` is 2, even if the CPU limit is set lower:

```sh
docker run --cpus=1 golang:1.25rc1-alpine go run /app/nproc.go
```
NumCPU: 8
GOMAXPROCS: 2
The Go runtime automatically updates `GOMAXPROCS` if the CPU limit changes. The current implementation updates up to once per second (less if the application is idle).

Note on CPU limits
Cgroups actually offer not just one, but two ways to limit CPU resources:
- CPU quota — the maximum CPU time the cgroup may use within some period window.
- CPU shares — relative CPU priorities given to the kernel scheduler.
Docker's `--cpus` and `--cpu-period`/`--cpu-quota` set the quota, while `--cpu-shares` sets the shares.

Kubernetes' CPU limit sets the quota, while CPU request sets the shares.
Go's runtime `GOMAXPROCS` only takes the CPU quota into account, not the shares.
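To see the difference yourself (my check, not from the article; the expected output assumes the same 8-CPU host as above), setting only shares should leave the default at the host CPU count:

```sh
docker run --cpu-shares=512 golang:1.25rc1-alpine go run /app/nproc.go
```
NumCPU: 8
GOMAXPROCS: 8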
You can set `GOMAXPROCS` manually using the `runtime.GOMAXPROCS` function. In this case, the runtime will use the value you provide and won't try to change it:

```go
runtime.GOMAXPROCS(4)
fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
```
GOMAXPROCS: 4
You can also undo any manual changes you made — whether by setting the `GOMAXPROCS` environment variable or calling `runtime.GOMAXPROCS()` — and return to the default value chosen by the runtime. To do this, use the new `runtime.SetDefaultGOMAXPROCS` function:

```sh
GOMAXPROCS=2 go1.25rc1 run nproc.go
```
```go
// Using the environment variable.
fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))

// Using the manual setting.
runtime.GOMAXPROCS(4)
fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))

// Back to the default value.
runtime.SetDefaultGOMAXPROCS()
fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
```
GOMAXPROCS: 2
GOMAXPROCS: 4
GOMAXPROCS: 8
To provide backward compatibility, the new `GOMAXPROCS` behavior only takes effect if your program uses Go version 1.25 or higher in your `go.mod`. You can also turn it off manually using these GODEBUG settings:

- `containermaxprocs=0` to ignore the cgroups CPU quota.
- `updatemaxprocs=0` to prevent `GOMAXPROCS` from updating automatically.
Proposals: 73193 • Commits: 668638, 670497, 672277, 677037
# Green Tea garbage collector
Some of us Go folks used to joke about Java and its many garbage collectors, ~~each worse than the other~~. Now the joke's on us — there's a new experimental garbage collector coming to Go.

Green Tea is a garbage collection algorithm optimized for programs that create lots of small objects and run on modern computers with many CPU cores.
The current GC scans memory in a way that jumps around a lot, which is slow because it causes many delays waiting for memory. The problem gets even worse on high-performance systems with many cores and non-uniform memory architectures, where each CPU or group of CPUs has its own "local" RAM.
Green Tea takes a different approach. Instead of scanning individual small objects, it scans memory in much larger, contiguous blocks — spans. Each span contains many small objects of the same size. By working with bigger blocks, the GC can scan more efficiently and make better use of the CPU's memory cache.
Benchmark results vary, but the Go team expects a 10β40% reduction in garbage collection overhead in real-world programs that heavily use the GC.
I ran an informal test by doing 1,000,000 reads and writes to Redka (my Redis clone written in Go), and observed similar GC pause times with both the old and new GC algorithms. But Redka probably isn't the best example here because it relies heavily on SQLite, and the Go part is pretty minimal.
The new garbage collector is experimental and can be enabled by setting `GOEXPERIMENT=greenteagc` at build time. The design and implementation of the GC may change in future releases. For more information or to give feedback, see the proposal.

# CSRF protection
The new `http.CrossOriginProtection` type implements protection against cross-site request forgery (CSRF) by rejecting non-safe cross-origin browser requests.

It detects cross-origin requests in these ways:
- By checking the Sec-Fetch-Site header, if it's present.
- By comparing the hostname in the Origin header to the one in the Host header.
Here's an example where we enable `CrossOriginProtection` and explicitly allow some extra origins:

```go
// Register some handlers.
mux := http.NewServeMux()
mux.HandleFunc("GET /get", func(w http.ResponseWriter, req *http.Request) {
    io.WriteString(w, "ok\n")
})
mux.HandleFunc("POST /post", func(w http.ResponseWriter, req *http.Request) {
    io.WriteString(w, "ok\n")
})

// Configure protection against CSRF attacks.
antiCSRF := http.NewCrossOriginProtection()
antiCSRF.AddTrustedOrigin("https://example.com")
antiCSRF.AddTrustedOrigin("https://*.example.com")

// Add CSRF protection to all handlers.
srv := http.Server{
    Addr:    ":8080",
    Handler: antiCSRF.Handler(mux),
}
log.Fatal(srv.ListenAndServe())
```
Now, if the browser sends a request from the same domain the server is using, the server will allow it:

```sh
curl --data "ok" -H "sec-fetch-site:same-origin" localhost:8080/post
```
ok
If the browser sends a cross-origin request, the server will reject it:

```sh
curl --data "ok" -H "sec-fetch-site:cross-site" localhost:8080/post
```
cross-origin request detected from Sec-Fetch-Site header
If the Origin header doesn't match the Host header, the server will reject the request:

```sh
curl --data "ok" \
  -H "origin:https://evil.com" \
  -H "host:antonz.org" \
  localhost:8080/post
```
cross-origin request detected, and/or browser is out of date: Sec-Fetch-Site is missing, and Origin does not match Host
If the request comes from a trusted origin, the server will allow it:

```sh
curl --data "ok" \
  -H "origin:https://example.com" \
  -H "host:antonz.org" \
  localhost:8080/post
```
ok
The server will always allow GET, HEAD, and OPTIONS methods because they are safe:

```sh
curl -H "origin:https://evil.com" localhost:8080/get
```
ok
The server will always allow requests without Sec-Fetch-Site or Origin headers (by design):

```sh
curl --data "ok" localhost:8080/post
```
ok
Proposals: 73626 • Commits: 674936, 680396
# Go wait group, go!
Everyone knows how to run a goroutine with a wait group:

```go
var wg sync.WaitGroup

wg.Add(1)
go func() {
    defer wg.Done()
    fmt.Println("go is awesome")
}()

wg.Add(1)
go func() {
    defer wg.Done()
    fmt.Println("cats are cute")
}()

wg.Wait()
fmt.Println("done")
```
cats are cute
go is awesome
done
The new `WaitGroup.Go` method automatically increments the wait group counter, runs a function in a goroutine, and decrements the counter when it's done. This means we can rewrite the example above without using `wg.Add()` and `wg.Done()`:

```go
var wg sync.WaitGroup

wg.Go(func() {
    fmt.Println("go is awesome")
})

wg.Go(func() {
    fmt.Println("cats are cute")
})

wg.Wait()
fmt.Println("done")
```
cats are cute
go is awesome
done
The implementation is just what you'd expect:

```go
// https://github.com/golang/go/blob/master/src/sync/waitgroup.go
func (wg *WaitGroup) Go(f func()) {
    wg.Add(1)
    go func() {
        defer wg.Done()
        f()
    }()
}
```
It's curious that it took the Go team 13 years to add a simple "Add+Done" wrapper. But hey, better late than never!
Proposals: 63796 • Commits: 662635
# Flight recording
Flight recording is a tracing technique that collects execution data, such as function calls and memory allocations, within a sliding window that's limited by size or duration. It helps to record traces of interesting program behavior, even if you don't know in advance when it will happen.
The new `trace.FlightRecorder` type implements a flight recorder in Go. It tracks a moving window over the execution trace produced by the runtime, always containing the most recent trace data.

Here's an example of how you might use it.
First, configure the sliding window:

```go
// Configure the flight recorder to keep
// at least 5 seconds of trace data,
// with a maximum buffer size of 3MB.
// Both of these are hints, not strict limits.
cfg := trace.FlightRecorderConfig{
    MinAge:   5 * time.Second,
    MaxBytes: 3 << 20, // 3MB
}
```
Then create the recorder and start it:

```go
// Create and start the flight recorder.
rec := trace.NewFlightRecorder(cfg)
rec.Start()
defer rec.Stop()
```
Continue with the application code as usual:

```go
// Simulate some workload.
done := make(chan struct{})
go func() {
    defer close(done)
    const n = 1 << 20
    var s []int
    for range n {
        s = append(s, rand.IntN(n))
    }
    fmt.Printf("done filling slice of %d elements\n", len(s))
}()
<-done
```
Finally, save the trace snapshot to a file when an important event occurs:

```go
// Save the trace snapshot to a file.
file, _ := os.Create("/tmp/trace.out")
defer file.Close()
n, _ := rec.WriteTo(file)
fmt.Printf("wrote %dB to trace file\n", n)
```
done filling slice of 1048576 elements
wrote 8441B to trace file
Use the `go tool` to view the trace in the browser:

```sh
go1.25rc1 tool trace /tmp/trace.out
```
Proposals: 63185 • Commits: 673116
# More Root methods
The `os.Root` type, which limits filesystem operations to a specific directory, now supports several new methods similar to functions already found in the `os` package.

`Chmod` changes the mode of a file:

```go
root, _ := os.OpenRoot("data")
root.Chmod("01.txt", 0600)
finfo, _ := root.Stat("01.txt")
fmt.Println(finfo.Mode().Perm())
```
-rw-------
`Chown` changes the numeric user ID (uid) and group ID (gid) of a file:

```go
root, _ := os.OpenRoot("data")
root.Chown("01.txt", 1000, 1000)
finfo, _ := root.Stat("01.txt")
stat := finfo.Sys().(*syscall.Stat_t)
fmt.Printf("uid=%d, gid=%d\n", stat.Uid, stat.Gid)
```
uid=1000, gid=1000
`Chtimes` changes the access and modification times of a file:

```go
root, _ := os.OpenRoot("data")
mtime := time.Date(2020, 1, 1, 0, 0, 0, 0, time.UTC)
atime := time.Now()
root.Chtimes("01.txt", atime, mtime)
finfo, _ := root.Stat("01.txt")
fmt.Println(finfo.ModTime())
```
2020-01-01 00:00:00 +0000 UTC
`Link` creates a hard link to a file:

```go
root, _ := os.OpenRoot("data")
root.Link("01.txt", "hardlink.txt")
finfo, _ := root.Stat("hardlink.txt")
fmt.Println(finfo.Name())
```
hardlink.txt
`MkdirAll` creates a new directory and any parent directories that don't already exist:

```go
const dname = "path/to/secret"
root, _ := os.OpenRoot("data")
root.MkdirAll(dname, 0750)
finfo, _ := root.Stat(dname)
fmt.Println(dname, finfo.Mode())
```
path/to/secret drwxr-x---
`RemoveAll` removes a file or a directory and any children that it contains:

```go
root, _ := os.OpenRoot("data")
root.RemoveAll("01.txt")
finfo, err := root.Stat("01.txt")
fmt.Println(finfo, err)
```
<nil> statat 01.txt: no such file or directory
`Rename` renames (moves) a file or a directory:

```go
const oldname = "01.txt"
const newname = "go.txt"
root, _ := os.OpenRoot("data")
root.Rename(oldname, newname)
_, err := root.Stat(oldname)
fmt.Println(err)
finfo, _ := root.Stat(newname)
fmt.Println(finfo.Name())
```
statat 01.txt: no such file or directory
go.txt
`Symlink` creates a symbolic link to a file. `Readlink` returns the destination of the symbolic link:

```go
const lname = "symlink.txt"
root, _ := os.OpenRoot("data")
root.Symlink("01.txt", lname)
lpath, _ := root.Readlink(lname)
fmt.Println(lname, "->", lpath)
```
symlink.txt -> 01.txt
`WriteFile` writes data to a file, creating it if necessary. `ReadFile` reads the file and returns its contents:

```go
const fname = "go.txt"
root, _ := os.OpenRoot("data")
root.WriteFile(fname, []byte("go is awesome"), 0644)
content, _ := root.ReadFile(fname)
fmt.Printf("%s: %s\n", fname, content)
```
go.txt: go is awesome
Since there are now plenty of `os.Root` methods, you probably don't need the file-related `os` functions most of the time. This can make working with files much safer.

Speaking of file systems, the ones returned by `os.DirFS()` (a file system rooted at the given directory) and `os.Root.FS()` (a file system for the tree of files in the root) both implement the new `fs.ReadLinkFS` interface. It has two methods — `ReadLink` and `Lstat`.

`ReadLink` returns the destination of the symbolic link:

```go
const lname = "symlink.txt"
root, _ := os.OpenRoot("data")
root.Symlink("01.txt", lname)

fsys := root.FS().(fs.ReadLinkFS)
lpath, _ := fsys.ReadLink(lname)
fmt.Println(lname, "->", lpath)
```
symlink.txt -> 01.txt
I have to say, the inconsistent naming between `os.Root.Readlink` and `fs.ReadLinkFS.ReadLink` is quite surprising. Maybe it's not too late to fix?

`Lstat` returns the information about a file or a symbolic link:

```go
fsys := os.DirFS("data").(fs.ReadLinkFS)
finfo, _ := fsys.Lstat("01.txt")

fmt.Printf("name: %s\n", finfo.Name())
fmt.Printf("size: %dB\n", finfo.Size())
fmt.Printf("mode: %s\n", finfo.Mode())
fmt.Printf("mtime: %s\n", finfo.ModTime().Format(time.DateOnly))
```
name: 01.txt
size: 11B
mode: -rw-r--r--
mtime: 2025-06-22
That's it for the `os` package!

Proposals: 49580, 67002, 73126 • Commits: 645718, 648295, 649515, 649536, 658995, 659416, 659757, 660635, 661595, 674116, 674315, 676135
# Reflective type assertion
To convert a `reflect.Value` back to a specific type, you typically use the `Value.Interface()` method combined with a type assertion:

```go
alice := &Person{"Alice", 25}

// Given a reflection Value...
aliceVal := reflect.ValueOf(alice).Elem()

// ...convert it back to the Person type.
person, _ := aliceVal.Interface().(Person)

fmt.Printf("Name: %s, Age: %d\n", person.Name, person.Age)
```
Name: Alice, Age: 25
Now you can use the new generic `reflect.TypeAssert` function instead:

```go
alice := &Person{"Alice", 25}

// Given a reflection Value...
aliceVal := reflect.ValueOf(alice).Elem()

// ...convert it back to the Person type.
person, _ := reflect.TypeAssert[Person](aliceVal)

fmt.Printf("Name: %s, Age: %d\n", person.Name, person.Age)
```
Name: Alice, Age: 25
It's more idiomatic and avoids unnecessary memory allocations, since the value is never boxed in an interface.
Proposals: 62121 • Commits: 648056
# Test attributes and friends
With the new `T.Attr` method, you can add extra test information, like a link to an issue, a test description, or anything else you need to analyze the test results:

```go
func TestAttrs(t *testing.T) {
    t.Attr("issue", "demo-1234")
    t.Attr("description", "Testing for the impossible")

    if 21*2 != 42 {
        t.Fatal("What in the world happened to math?")
    }
}
```
=== RUN   TestAttrs
=== ATTR  TestAttrs issue demo-1234
=== ATTR  TestAttrs description Testing for the impossible
--- PASS: TestAttrs (0.00s)
Attributes can be especially useful in JSON format if you send the test output to a CI or other system for automatic processing:

```sh
go1.25rc1 test -json -run=.
```
```json
...
{
    "Time":"2025-06-25T20:34:16.831401+00:00",
    "Action":"attr",
    "Package":"sandbox",
    "Test":"TestAttrs",
    "Key":"issue",
    "Value":"demo-1234"
}
...
{
    "Time":"2025-06-25T20:34:16.831415+00:00",
    "Action":"attr",
    "Package":"sandbox",
    "Test":"TestAttrs",
    "Key":"description",
    "Value":"Testing for the impossible"
}
...
```
The output is formatted to make it easier to read.
The same `Attr` method is also available on `testing.B` and `testing.F`.

Proposals: 43936 • Commits: 662437
The new `T.Output` method lets you access the output stream (`io.Writer`) used by the test. This can be helpful if you want to send your application log to the test log stream, making it easier to read or analyze automatically:

```go
func TestLog(t *testing.T) {
    t.Log("test message 1")
    t.Log("test message 2")
    appLog := slog.New(slog.NewTextHandler(t.Output(), nil))
    appLog.Info("app message")
}
```
=== RUN   TestLog
    main_test.go:12: test message 1
    main_test.go:13: test message 2
    time=2025-06-25T16:14:34.085Z level=INFO msg="app message"
--- PASS: TestLog (0.00s)
The same `Output` method is also available on `testing.B` and `testing.F`.

Proposals: 59928 • Commits: 672395, 677875
Last but not least, the `testing.AllocsPerRun` function will now panic if parallel tests are running.

Compare 1.24 behavior:

```go
// go 1.24
func TestAllocs(t *testing.T) {
    t.Parallel()
    allocs := testing.AllocsPerRun(100, func() {
        var s []int
        // Do some allocations.
        for i := range 1024 {
            s = append(s, i)
        }
    })
    t.Log("Allocations per run:", allocs)
}
```
=== RUN   TestAllocs
=== PAUSE TestAllocs
=== CONT  TestAllocs
    main_test.go:21: Allocations per run: 12
--- PASS: TestAllocs (0.00s)
To 1.25:

```go
// go 1.25
func TestAllocs(t *testing.T) {
    t.Parallel()
    allocs := testing.AllocsPerRun(100, func() {
        var s []int
        // Do some allocations.
        for i := range 1024 {
            s = append(s, i)
        }
    })
    t.Log("Allocations per run:", allocs)
}
```
=== RUN   TestAllocs
=== PAUSE TestAllocs
=== CONT  TestAllocs
--- FAIL: TestAllocs (0.00s)
panic: testing: AllocsPerRun called during parallel test [recovered, repanicked]
The thing is, the result of `AllocsPerRun` is inherently flaky if other tests are running in parallel. That's why there's a new panicking behavior — it should help catch these kinds of bugs.

Proposals: 70464 • Commits: 630137
# Grouped attributes for logging
With structured logging, you often group related attributes under a single key:

```go
logger.Info("deposit",
    slog.Bool("ok", true),
    slog.Group("amount",
        slog.Int("value", 1000),
        slog.String("currency", "USD"),
    ),
)
```
msg=deposit ok=true amount.value=1000 amount.currency=USD
It works just fine — unless you want to gather the attributes first:

```go
attrs := []slog.Attr{
    slog.Int("value", 1000),
    slog.String("currency", "USD"),
}

logger.Info("deposit",
    slog.Bool("ok", true),
    slog.Group("amount", attrs...),
)
```
cannot use attrs (variable of type []slog.Attr) as []any value in argument to slog.Group (exit status 1)
`slog.Group` expects a slice of `any` values, so it doesn't accept a slice of `slog.Attr`s.

The new `slog.GroupAttrs` function fixes this issue by creating a group from the given `slog.Attr`s:

```go
attrs := []slog.Attr{
    slog.Int("value", 1000),
    slog.String("currency", "USD"),
}

logger.Info("deposit",
    slog.Bool("ok", true),
    slog.GroupAttrs("amount", attrs...),
)
```
msg=deposit ok=true amount.value=1000 amount.currency=USD
Not a big deal, but can be quite handy.
Proposals: 66365 • Commits: 672915
# Hash cloner
The new `hash.Cloner` interface defines a hash function that can return a copy of its current state:

```go
// https://github.com/golang/go/blob/master/src/hash/hash.go
type Cloner interface {
    Hash
    Clone() (Cloner, error)
}
```
All standard library `hash.Hash` implementations now provide the `Clone` function, including MD5, SHA-1, SHA-3, FNV-1, CRC-64, and others:

```go
h1 := sha3.New256()
h1.Write([]byte("hello"))

clone, _ := h1.Clone()
h2 := clone.(*sha3.SHA3)

// h2 has the same state as h1, so it will produce
// the same hash after writing the same data.
h1.Write([]byte("world"))
h2.Write([]byte("world"))

fmt.Printf("h1: %x\n", h1.Sum(nil))
fmt.Printf("h2: %x\n", h2.Sum(nil))
fmt.Printf("h1 == h2: %t\n", reflect.DeepEqual(h1, h2))
```
h1: 92dad9443e4dd6d70a7f11872101ebff87e21798e4fbb26fa4bf590eb440e71b
h2: 92dad9443e4dd6d70a7f11872101ebff87e21798e4fbb26fa4bf590eb440e71b
h1 == h2: true
Proposals: 69521 • Commits: 675197
# Final thoughts
Go 1.25 finalizes support for testing concurrent code, introduces a major experimental JSON package, and improves the runtime with a new GOMAXPROCS design and garbage collector. It also adds a flight recorder, modern CSRF protection, a long-awaited wait group shortcut, and several other improvements.
All in all, a great release!
P.S. To catch up on other Go releases, check out the Go features by version list or explore the interactive tours for Go 1.24 and 1.23.
P.P.S. Want to learn more about Go? Check out my interactive book on concurrency
-
Rust Blog Announcing Rust 1.88.0 rss
The Rust team is happy to announce a new version of Rust, 1.88.0. Rust is a programming language empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via `rustup`, you can get 1.88.0 with:

```sh
$ rustup update stable
```
If you don't have it already, you can get `rustup` from the appropriate page on our website, and check out the detailed release notes for 1.88.0.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (`rustup default beta`) or the nightly channel (`rustup default nightly`). Please report any bugs you might come across!

What's in 1.88.0 stable
Let chains
This feature allows `&&`-chaining `let` statements inside `if` and `while` conditions, even intermingling with boolean expressions, so there is less distinction between `if`/`if let` and `while`/`while let`. The patterns inside the `let` sub-expressions can be irrefutable or refutable, and bindings are usable in later parts of the chain as well as the body.

For example, this snippet combines multiple conditions which would have required nesting `if let` and `if` blocks before:

```rust
if let Channel::Stable(v) = release_info()
    && let Semver { major, minor, .. } = v
    && major == 1
    && minor == 88
{
    println!("`let_chains` was stabilized in this version");
}
```
Let chains are only available in the Rust 2024 edition, as this feature depends on the `if let` temporary scope change for more consistent drop order.

Earlier efforts tried to work with all editions, but some difficult edge cases threatened the integrity of the implementation. 2024 made it feasible, so please upgrade your crate's edition if you'd like to use this feature!
Naked functions
Rust now supports writing naked functions with no compiler-generated epilogue and prologue, allowing full control over the generated assembly for a particular function. This is a more ergonomic alternative to defining functions in a `global_asm!` block. A naked function is marked with the `#[unsafe(naked)]` attribute, and its body consists of a single `naked_asm!` call.

For example:

```rust
#[unsafe(naked)]
pub unsafe extern "sysv64" fn wrapping_add(a: u64, b: u64) -> u64 {
    // Equivalent to `a.wrapping_add(b)`.
    core::arch::naked_asm!(
        "lea rax, [rdi + rsi]",
        "ret"
    );
}
```
The handwritten assembly block defines the entire function body: unlike non-naked functions, the compiler does not add any special handling for arguments or return values. Naked functions are used in low-level settings like Rust's `compiler-builtins`, operating systems, and embedded applications.

Look for a more detailed post on this soon!
Boolean configuration
The `cfg` predicate language now supports boolean literals, `true` and `false`, acting as a configuration that is always enabled or disabled, respectively. This works in Rust conditional compilation with `cfg` and `cfg_attr` attributes and the built-in `cfg!` macro, and also in Cargo `[target]` tables in both configuration and manifests.

Previously, empty predicate lists could be used for unconditional configuration, like `cfg(all())` for enabled and `cfg(any())` for disabled, but this meaning is rather implicit and easy to get backwards. `cfg(true)` and `cfg(false)` offer a more direct way to say what you mean.

See RFC 3695 for more background!
Cargo automatic cache cleaning
Starting in 1.88.0, Cargo will automatically run garbage collection on the cache in its home directory!
When building, Cargo downloads and caches crates needed as dependencies. Historically, these downloaded files would never be cleaned up, leading to an unbounded amount of disk usage in Cargo's home directory. In this version, Cargo introduces a garbage collection mechanism to automatically clean up old files (e.g. `.crate` files). Cargo will remove files downloaded from the network if not accessed in 3 months, and files obtained from the local system if not accessed in 1 month. Note that this automatic garbage collection will not take place if running offline (using `--offline` or `--frozen`).

Cargo 1.78 and newer track the access information needed for this garbage collection. This was introduced well before the actual cleanup that's starting now, in order to reduce cache churn for those that still use prior versions. If you regularly use versions of Cargo even older than 1.78, in addition to running current versions of Cargo, and you expect to have some crates accessed exclusively by the older versions of Cargo and don't want to re-download those crates every ~3 months, you may wish to set `cache.auto-clean-frequency = "never"` in the Cargo configuration, as described in the docs.
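In TOML form, that opt-out looks like this (based on the setting named above; the path is the usual per-user Cargo config location):

```toml
# ~/.cargo/config.toml
[cache]
auto-clean-frequency = "never"
```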
For more information, see the original unstable announcement of this feature. Some parts of that design remain unstable, like the `gc` subcommand tracked in cargo#13060, so there's still more to look forward to!

Stabilized APIs
Cell::update
impl Default for *const T
impl Default for *mut T
mod ffi::c_str
HashMap::extract_if
HashSet::extract_if
hint::select_unpredictable
proc_macro::Span::line
proc_macro::Span::column
proc_macro::Span::start
proc_macro::Span::end
proc_macro::Span::file
proc_macro::Span::local_file
<[T]>::as_chunks
<[T]>::as_rchunks
<[T]>::as_chunks_unchecked
<[T]>::as_chunks_mut
<[T]>::as_rchunks_mut
<[T]>::as_chunks_unchecked_mut
These previously stable APIs are now stable in const contexts:
NonNull<T>::replace
<*mut T>::replace
std::ptr::swap_nonoverlapping
Cell::replace
Cell::get
Cell::get_mut
Cell::from_mut
Cell::as_slice_of_cells
Other changes
The `i686-pc-windows-gnu` target has been demoted to Tier 2, as mentioned in an earlier post. This won't have any immediate effect for users, since both the compiler and standard library tools will still be distributed by `rustup` for this target. However, with less testing than it had at Tier 1, it has more chance of accumulating bugs in the future.

Check out everything that changed in Rust, Cargo, and Clippy.
Contributors to 1.88.0
Many people came together to create Rust 1.88.0. We couldn't have done it without all of you. Thanks!
-
matklad RSS Server Side Reader rss
RSS Server Side Reader Jun 26, 2025
I like the idea of RSS, but none of the RSS readers stuck with me, until I implemented one of my own, using a somewhat unusual technique. There's at least one other person using this approach now, so let's write this down.
About RSS
Let me start with a quick rundown of RSS, as the topic can be somewhat confusing. I am by no means an expert; my perspective is amateur.
The purpose of RSS is to allow blog authors to inform the readers when a new post comes out. It is, first and foremost, a notification mechanism. The way it works is that a blog publishes a machine readable list of recent blog entries. An RSS reader fetches this list, persists the list in the local state, and periodically polls the original site for changes. If a new article is published, the reader notices it on the next poll and notifies the user.
RSS is an alternative to Twitter- and HackerNews-likes for discovering interesting articles. Rather than relying on word of mouth popularity or algorithmic feeds, you curate your own list of favorite authors, and use a feed reader in lieu of compulsively checking personal websites for updates directly.
There are several specific standards that implement the general feed idea. The original one is RSS. It is a bad, ambiguous, and overly complicated standard. Don't use it. Instead, use Atom, a much clearer and simpler standard. Whenever "RSS" is discussed (as in the present article), Atom is usually implied. See Atom vs. RSS for a more specific discussion on the differences.
While simpler, Atom is not simple. A big source of accidental complexity is that an Atom feed is an XML document that needs to embed HTML. HTML being almost like XML, it is easy to mess up escaping. The actually good, simple standard is JSON Feed. However, it appears to be completely unmaintained as of 2025. This is very unfortunate. I hope someone takes over the maintenance from the original creators.
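To make the Atom escaping trap concrete, here is a hypothetical entry (my sketch, not from the post): the HTML body is XML-escaped, so the HTML entity `&amp;` must appear double-escaped as `&amp;amp;`, which is easy to get wrong in either direction:

```xml
<entry>
  <title>Hello &amp; welcome</title>
  <content type="html">&lt;p&gt;Hello &amp;amp; welcome&lt;/p&gt;</content>
</entry>
```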
About Feed Readers
As I've mentioned, while I like the ideas behind RSS, none of the existing RSS readers worked for me. They try to do more than I need. A classical RSS reader fetches the full content of the articles, saves it for offline reading, and renders the content using an embedded web-browser. I don't need this. I prefer reading the articles on the author's website, using my normal browser (and, occasionally, its reader mode). The only thing I need is notifications.
What Didn't Work: Client Side Reader
My first attempt at my own RSS reader was to create a web page that stored the state in the browser's local storage. This idea was foiled by CORS. In general, if client-side JavaScript does a `fetch`, it can only fetch resources from the domain the page itself is hosted on. But feeds are hosted on other domains.

What Did Work: SSR
I have a blog. You are reading it. I now build my personalized feed as a part of this blog's build process. It is hosted at https://matklad.github.io/blogroll.html
This list is stateless: for each feed I follow, I display the latest three posts, newer posts on top. I don't maintain read/unread state. If I don't remember whether I read the article or not, I might as well re-read! I can access this list from any device.
While it is primarily for me, the list is publicly available, and might be interesting for some readers of my blog. Hopefully, it also helps to page-rank the blogs I follow!
The source of truth is blogroll.txt. It is a simple list of links, with one link per line (see the sample below). Originally, I tried using OPML, but it is far too complicated for what I need here, and is actively inconvenient to modify by hand.
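For illustration, the file is just feed URLs, one per line (these entries are made up, not copied from the real blogroll.txt):

```
https://jvns.ca/atom.xml
https://blog.rust-lang.org/feed.xml
```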
Here's the entire code to fetch the blogroll, using this library:

```ts
// deno-lint-ignore-file no-explicit-any
import { parseFeed } from "@rss";

export interface FeedEntry {
  title: string;
  url: string;
  date: Date;
}

export async function blogroll(): Promise<FeedEntry[]> {
  const urls = (await Deno.readTextFile("content/blogroll.txt"))
    .split("\n").filter((line) => line.trim().length > 0);
  const all_entries = (await Promise.all(urls.map(blogroll_feed))).flat();
  all_entries.sort((a, b) => b.date.getTime() - a.date.getTime());
  return all_entries;
}

async function blogroll_feed(url: string): Promise<FeedEntry[]> {
  let feed;
  try {
    const response = await fetch(url);
    const xml = await response.text();
    feed = await parseFeed(xml);
  } catch (error) {
    console.error({ url, error });
    return [];
  }
  return feed.entries.map((entry: any) => {
    return {
      title: entry.title!.value!,
      // Prefer the HTML link; fall back to the first link.
      url: (entry.links.find((it: any) =>
        it.type == "text/html" || it.href!.endsWith(".html")
      ) ?? entry.links[0])!.href!,
      date: (entry.published ?? entry.updated)!,
    };
  }).slice(0, 3);
}
```
And this is how the data is converted to HTML during the build process:

```tsx
export function BlogRoll({ posts }: { posts: FeedEntry[] }) {
  function domain(url: string): string {
    return new URL(url).host;
  }

  const list_items = posts.map((post) => (
    <li>
      <h2>
        <span class="meta">
          <Time date={post.date} />, {domain(post.url)}
        </span>
        <a href={post.url}>{post.title}</a>
      </h2>
    </li>
  ));

  return (
    <Base>
      <ul class="post-list">{list_items}</ul>
    </Base>
  );
}
```
GitHub Actions re-builds the blogroll every midnight:

```yaml
name: CI
on:
  push:
    branches:
      - master
  schedule:
    - cron: "0 0 * * *" # Daily at midnight.
jobs:
  CI:
    runs-on: ubuntu-latest
    permissions:
      pages: write
      id-token: write
    steps:
      - uses: actions/checkout@v4
      - uses: denoland/setup-deno@v2
        with:
          deno-version: v2.x
      - run: deno task build --blogroll
      - uses: actions/upload-pages-artifact@v3
        with:
          path: ./out/www
      - uses: actions/deploy-pages@v4
```
Links
Tangentially related, another pattern is to maintain a list of all-time favorite links:
-
Console.dev newsletter Automatisch rss
Description: Self-hosted Zapier alternative.
What we like: Connect steps into workflows. Many integrations e.g. Airtable, Discord, GitHub, GCal, OpenAI, YouTube. Entirely self-hosted (partially open source). Triggers flow into one or more actions. Will run periodically (every 15 mins) or whenever a trigger receives an event. Easily create your own integrations in JS.
What we dislike: Mixed licensing with enterprise version. Not considered stable v1 yet.
-
Console.dev newsletter Glance rss
Description: Personal dashboard.
What we like: Self-host a dashboard with customizable widgets e.g. RSS feeds, Subreddit, weather, server stats, sparklines, call APIs. Everything is set up via one or many config files. Themeable. Shipped as a single (Go) binary and can run as a simple container.
What we dislike: The page loads once and widgets don't refresh dynamically: no requests are made in the background, so you need to refresh the page.
-
Julia Evans New zine: The Secret Rules of the Terminal rss
Hello! After many months of writing deep dive blog posts about the terminal, on Tuesday I released a new zine called "The Secret Rules of the Terminal"!
You can get it for $12 here: https://wizardzines.com/zines/terminal, or get a 15-pack of all my zines here.
Here's the cover:
the table of contents
Here's the table of contents:
why the terminal?
I've been using the terminal every day for 20 years but even though I'm very confident in the terminal, I've always had a bit of an uneasy feeling about it. Usually things work fine, but sometimes something goes wrong and it just feels like investigating it is impossible, or at least like it would open up a huge can of worms.
So I started trying to write down a list of weird problems I've run into in the terminal and I realized that the terminal has a lot of tiny inconsistencies like:
- sometimes you can use the arrow keys to move around, but sometimes pressing the arrow keys just prints `^[[D`
- sometimes you can use the mouse to select text, but sometimes you can't
- sometimes your commands get saved to a history when you run them, and sometimes they don't
- some shells let you use the up arrow to see the previous command, and some don't
If you use the terminal daily for 10 or 20 years, even if you don't understand exactly why these things happen, you'll probably build an intuition for them.
But having an intuition for them isn't the same as understanding why they happen. When writing this zine I actually had to do a lot of work to figure out exactly what was happening in the terminal to be able to talk about how to reason about it.
the rules aren't written down anywhere
It turns out that the "rules" for how the terminal works (how do you edit a command you type in? how do you quit a program? how do you fix your colours?) are extremely hard to fully understand, because "the terminal" is actually made of many different pieces of software (your terminal emulator, your operating system, your shell, the core utilities like `grep`, and every other random terminal program you've installed) which are written by different people with different ideas about how things should work.

So I wanted to write something that would explain:
- how the 4 pieces of the terminal (your shell, terminal emulator, programs, and TTY driver) fit together to make everything work
- some of the core conventions for how you can expect things in your terminal to work
- lots of tips and tricks for how to use terminal programs
this zine explains the most useful parts of terminal internals
Terminal internals are a mess. A lot of it is just the way it is because someone made a decision in the 80s and now it's impossible to change, and honestly I don't think learning everything about terminal internals is worth it.
But some parts are not that hard to understand and can really make your experience in the terminal better, like:
- if you understand what your shell is responsible for, you can configure your shell (or use a different one!) to access your history more easily, get great tab completion, and so much more
- if you understand escape codes, it's much less scary when `cat`ing a binary to stdout messes up your terminal: you can just type `reset` and move on
- if you understand how colour works, you can get rid of bad colour contrast in your terminal so you can actually read the text
I learned a surprising amount writing this zine
When I wrote How Git Works, I thought I knew how Git worked, and I was right. But the terminal is different. Even though I feel totally confident in the terminal and even though I've used it every day for 20 years, I had a lot of misunderstandings about how the terminal works and (unless you're the author of `tmux` or something) I think there's a good chance you do too.

A few things I learned that are actually useful to me:
- I understand the structure of the terminal better and so I feel more confident debugging weird terminal stuff that happens to me (I was even able to suggest a small improvement to fish!). Identifying exactly which piece of software is causing a weird thing to happen in my terminal still isn't easy but I'm a lot better at it now.
- you can write a shell script to copy to your clipboard over SSH (see the sketch after this list)
- how `reset` works under the hood (it does the equivalent of `stty sane; sleep 1; tput reset`) — basically I learned that I don't ever need to worry about remembering `stty sane` or `tput reset` and I can just run `reset` instead
- how to look at the invisible escape codes that a program is printing out (run `unbuffer program > out; less out`)
- why the builtin REPLs on my Mac like `sqlite3` are so annoying to use (they use `libedit` instead of `readline`)
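For the clipboard-over-SSH trick, a minimal sketch (mine, not from the zine) uses the OSC 52 escape sequence; whether it works depends on your terminal emulator's OSC 52 support:

```sh
#!/bin/sh
# clip.sh: copy stdin to the local clipboard, even from a remote
# SSH session, by emitting the OSC 52 escape sequence.
# Usage: echo hello | ./clip.sh
printf '\033]52;c;%s\a' "$(base64 | tr -d '\n')"
```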
blog posts I wrote along the way
As usual these days I wrote a bunch of blog posts about various side quests:
- How to add a directory to your PATH
- "rules" that terminal problems follow
- why pipes sometimes get "stuck": buffering
- some terminal frustrations
- ASCII control characters in my terminal on "what's the deal with Ctrl+A, Ctrl+B, Ctrl+C, etc?"
- entering text in the terminal is complicated
- what's involved in getting a "modern" terminal setup?
- reasons to use your shell's job control
- standards for ANSI escape codes, which is really me trying to figure out if I think the `terminfo` database is serving us well today
people who helped with this zine
A long time ago I used to write zines mostly by myself but with every project I get more and more help. I met with Marie Claire LeBlanc Flanagan every weekday from September to June to work on this one.
The cover is by Vladimir Kašiković, Lesley Trites did copy editing, Simon Tatham (who wrote PuTTY) did technical review, our Operations Manager Lee did the transcription as well as a million other things, and Jesse Luehrs (who is one of the very few people I know who actually understands the terminal's cursed inner workings) had so many incredibly helpful conversations with me about what is going on in the terminal.
get the zine
Here are some links to get the zine again:
As always, you can get either a PDF version to print at home or a print version shipped to your house. The only caveat is print orders will ship in August - I need to wait for orders to come in to get an idea of how many I should print before sending it to the printer.
-
- June 25, 2025
-
google/adk-python v1.5.0 release
1.5.0
(2025-06-25)
Features
- Add a new option `eval_storage_uri` in adk web & adk eval to specify a GCS bucket to store eval data (fa025d7)
- Add implementation of VertexAiMemoryBankService and support in FastAPI endpoint (abc89d2)
- Add rouge_score library to ADK eval dependencies, and implement RougeEvaluator that computes ROUGE-1 for the "response_match_score" metric (9597a44)
- Add usage span attributes to telemetry (#356) (ea69c90)
- Add Vertex Express mode compatibility for VertexAiSessionService (00cc8cd)
Bug Fixes
- Include current turn context when include_contents='none' (9e473e0)
- Make LiteLLM streaming truly asynchronous (bd67e84)
- Make raw_auth_credential and exchanged_auth_credential optional given their default value is None (acbdca0)
- Minor typo fix in the agent instruction (ef3c745)
- Typo fix in sample agent instruction (ef3c745)
- Update contributing links (a1e1441)
- Use starred tuple unpacking on GCS artifact blob names (3b1d9a8)
Chore
- Do not send api request when session does not have events (88a4402)
- Leverage official uv action for install (09f1269)
- Update google-genai package and related deps to latest (ed7a21e)
- Add credential service backed by session state (29cd183)
- Clarify the behavior of Event.invocation_id (f033e40)
- Send user message to the agent that returned a corresponding function call if user message is a function response (7c670f6)
- Add request converter to convert a2a request to ADK request (fb13963)
- Support allow_origins in cloud_run deployment (2fd8feb)
-
Luke Muehlhauser Useful prompts for getting music recommendations from AI models rss
Today's best AI "large language models" are especially good at tasks where hallucinations and other failures are low-stakes, such as making music recommendations. Here is a continuously updated doc of music search prompts I've found helpful; a few examples are below:
Which compositions by [composer] are most popular and/or highly regarded today? Please list 20 of their compositions in a table, along with the evidence that they are popular and/or highly regarded, and direct Spotify links to complete recordings where available (but it's fine to list many compositions which aren't available on Spotify).
**
What are 20 of the most celebrated covers or re-interpretations of compositions by [musical artist]? Please list each one in a table along with the performer(s), the release year of the cover/re-interpretation, a few notes about what is changed from the original, and a direct link to the piece on Spotify (where available).
**
Who are the top 20 most highly respected composers of [genre]? Please list them in a table, and for each composer please list 4 of their most celebrated works/albums.
**
Who are the top 20 contemporary classical composers who (a) mainly write notated music for performance by classically trained musicians, but who (b) frequently draw inspiration from popular electronic music genres in their compositions (e.g. breakbeat, drum and bass, dubstep, electro, EDM, grime, IDM, techno, or trance)? Please list them in a table with four columns: Composer, Example 1, Example 2, Example 3. Each of the three examples should be a composition by that composer which draws inspiration from popular electronic music genres, along with a short description of how, exactly, it draws inspiration from popular electronic music genres. Where possible, please link each listed composition title to a recording of the piece on Spotify (but it's fine to list many compositions which aren't available on Spotify).
Please don't include any of the following composers, because I'm already familiar with them: [list of composers]
**
What are 20 of the most creatively unique and ambitious [genre] albums/compositions after 2000 that are also relatively "tonal" and "accessible" to the general listener? Please list each album/composition in a table along with its composer, release year, a paragraph describing what is so creatively ambitious about it, and a direct link to a complete recording of the piece on Spotify (where available). Please also limit yourself to one album/composition per musical artist.
-
@trailofbits@infosec.exchange Did you know the biggest cause of crypto hacks in 2024 goes entirely unnoticed mastodon
Did you know the biggest cause of crypto hacks in 2024 goes entirely unnoticed by most security audits? This attack vector was responsible for 43% of the crypto stolen in 2024, yet it isn't eligible as a finding in audit contests or most audit engagements.
Answer: Private key compromise.
In this post, you'll learn how to make protocols resilient to private key leaks using our 4-level framework: https://blog.trailofbits.com/2025/06/25/maturing-your-smart-contracts-beyond-private-key-risk/
-
astral-sh/uv 0.7.15 release
Release Notes
Enhancements
- Consistently use `Ordering::Relaxed` for standalone atomic use cases (#14190)
- Warn on ambiguous relative paths for `--index` (#14152)
- Skip GitHub fast path when rate-limited (#13033)
- Preserve newlines in `schema.json` descriptions (#13693)
Bug fixes
- Add check for using minor version link when creating a venv on Windows (#14252)
- Strip query parameters when parsing source URL (#14224)
Documentation
- Add a link to PyPI FAQ to clarify what per-project token is (#14242)
Preview features
- Allow symlinks in the build backend (#14212)
Install uv 0.7.15
Install prebuilt binaries via shell script:

```sh
curl --proto '=https' --tlsv1.2 -LsSf https://github.com/astral-sh/uv/releases/download/0.7.15/uv-installer.sh | sh
```
Install prebuilt binaries via powershell script:

```powershell
powershell -ExecutionPolicy Bypass -c "irm https://github.com/astral-sh/uv/releases/download/0.7.15/uv-installer.ps1 | iex"
```
Download uv 0.7.15
File | Platform | Checksum
---|---|---
uv-aarch64-apple-darwin.tar.gz | Apple Silicon macOS | checksum
uv-x86_64-apple-darwin.tar.gz | Intel macOS | checksum
uv-aarch64-pc-windows-msvc.zip | ARM64 Windows | checksum
uv-i686-pc-windows-msvc.zip | x86 Windows | checksum
uv-x86_64-pc-windows-msvc.zip | x64 Windows | checksum
uv-aarch64-unknown-linux-gnu.tar.gz | ARM64 Linux | checksum
uv-i686-unknown-linux-gnu.tar.gz | x86 Linux | checksum
uv-powerpc64-unknown-linux-gnu.tar.gz | PPC64 Linux | checksum
uv-powerpc64le-unknown-linux-gnu.tar.gz | PPC64LE Linux | checksum
uv-riscv64gc-unknown-linux-gnu.tar.gz | RISCV Linux | checksum
uv-s390x-unknown-linux-gnu.tar.gz | S390x Linux | checksum
uv-x86_64-unknown-linux-gnu.tar.gz | x64 Linux | checksum
uv-armv7-unknown-linux-gnueabihf.tar.gz | ARMv7 Linux | checksum
uv-aarch64-unknown-linux-musl.tar.gz | ARM64 MUSL Linux | checksum
uv-i686-unknown-linux-musl.tar.gz | x86 MUSL Linux | checksum
uv-x86_64-unknown-linux-musl.tar.gz | x64 MUSL Linux | checksum
uv-arm-unknown-linux-musleabihf.tar.gz | ARMv6 MUSL Linux (Hardfloat) | checksum
uv-armv7-unknown-linux-musleabihf.tar.gz | ARMv7 MUSL Linux | checksum
-
- June 24, 2025
-
@HexRaysSA@infosec.exchange Check out this in-depth video of mastodon
Check out this in-depth video of @nmatt0 reversing the firmware decryption mechanism used in a Hanwha security camera with IDA Pro. Bonus: He's also written an accompanying blog post packed with code samples, screenshots, and more!
https://hex-rays.com/blog/reversing-hanwha-security-cameras-a-deep-dive-by-matt-brown
-
@binaryninja@infosec.exchange Long time, no stream! Join Jordan and several other Binary Ninjas to see the mastodon
Long time, no stream! Join Jordan and several other Binary Ninjas to see the next batch of features coming to stable release 5.1!
Going live in 1 hr today, June 24th at 5pm ET:
-
News Minimalist Canada and EU sign security agreement + 10 more stories rss
In the last 2 days ChatGPT read 55460 top news stories. After removing previously covered events, there are 11 articles with a significance score over 5.9.
[5.9] Canada and EU sign security and defense agreement — english.elpais.com (+14)
The EU and Canada signed a security and defense agreement on Monday, strengthening their partnership. The agreement is a response to geopolitical uncertainty and strained relations with the United States.
The agreement allows Canadian companies to participate in EU arms procurement programs funded by a €150 billion joint fund.
Beyond defense, the EU and Canada plan to negotiate a digital and tech agreement. This agreement will cover AI innovation, high-performance computing, and strategic technology research, while also aiming to provide the EU with access to Canadian raw materials.
[5.6] Tesla launches driverless robotaxis in Austin, Texas — smh.com.au (+83)
Tesla's robotaxi service debuted in Austin, Texas, marking a significant step into autonomous driving after years of anticipation and hype from CEO Elon Musk.
The initial launch involves a small fleet of vehicles and hand-picked users, with rides costing $4.20. The service is limited to a specific area and operating hours, with a safety monitor present during early access.
This launch is crucial for Tesla, as the company faces flagging sales and investor pressure, with the future of the company heavily reliant on autonomous driving technology.
[5.6] UK may compel Google to change search rankings — reuters.com [$] (+10)
The UK's Competition and Markets Authority is considering tighter control over Google's search services by designating it with "strategic market status." This gives the regulator increased oversight over Google's operations in the UK.
The CMA aims to ensure fair ranking for businesses on Google search, improve user access to alternative search engines, and provide transparency for publishers. The designation, expected in October, marks the first such action since the CMA gained new powers this year.
Google expressed concern over the broad scope of the CMA's considerations. The company believes the proposed interventions lack sufficient evidence and could negatively impact both businesses and consumers in Britain.
Highly covered news with significance over 5.5
[6.5] Israel and Iran agree to a ceasefire — cnbc.com (+2940)
[6.4] UK pledges 5% of GDP for security — bbc.com (+165)
[6.1] New test predicts chemotherapy resistance, aims to reduce side effects — nzherald.co.nz (+11)
[5.9] US judge stops Trump's Harvard international student ban — theguardian.com (+18)
[5.5] US House bans WhatsApp on government devices — ft.com [$] (+21)
[5.5] US Supreme Court lets Trump deport migrants to third countries — apnews.com (+52)
[6.0] ex-OpenAI CTO's startup Thinking Machines Lab raises $2 billion at $10 billion valuation — thehindu.com (+3)
[5.6] Scientists use bacteria to turn plastic waste into paracetamol — theguardian.com (+5)
Thanks for reading!
— Vadim
You can create your own personal newsletter like this with premium.
-