- Buy eSIMs for international travel - Airalo
- How to know when it's time to go – Bitfield Consulting
- Careless Whisper - Mac Dictation App
- Ultrawide archaeology on Android native libraries - media.ccc.de
- I Ditched the Algorithm for RSS—and You Should Too - Joey's Hoard of Stuff
- January 23, 2025
-
@trailofbits@infosec.exchange Our team submitted 750+ pull requests to improve 80+ open-source projects in 2024 mastodon
Our team submitted 750+ pull requests to improve 80+ open-source projects in 2024. These contributions strengthen critical security infrastructure, from foundational cryptography libraries to package managers that security engineers rely on daily.
Key contributions include:
- LLVM gained container overflow detection
- pwndbg received an LLDB port and other new features
- hevm added Cancun opcodes
- we implemented NIST-standardized post-quantum cryptography signature schemes in Rust and Go

https://blog.trailofbits.com/2025/01/23/celebrating-our-2024-open-source-contributions/
-
r/reverseengineering Reversing and Reviving a Dead Spider-Man Game rss
submitted by /u/Classic_Aspect
-
astral-sh/uv 0.5.23 release
Release Notes
Enhancements
Bug fixes
- Sort extras and groups when comparing lockfile requirements (#10856)
- Include `commit_id` and `requested_revision` in `direct_url.json` (#10862)
- Invalidate lockfile when static versions change (#10858)
- Make GitHub fast path errors non-fatal (#10859)
- Remove warnings for `--frozen` and `--locked` in `uv run --script` (#10840)
- Resolve `find-links` paths relative to the configuration file (#10827)
- Respect visitation order for proxy packages (#10833)
- Treat version mismatch errors as non-fatal in fast paths (#10860)
- Mark `--locked` and `--upgrade` as conflicting (#10836)
- Relax error checking around unconditional enabling of conflicting extras (#10875)
Documentation
Error messages
- Error when workspace contains conflicting Python requirements (#10841)
- Improve uvx error message when uv is missing (#9745)
Install uv 0.5.23
Install prebuilt binaries via shell script
curl --proto '=https' --tlsv1.2 -LsSf https://github.com/astral-sh/uv/releases/download/0.5.23/uv-installer.sh | sh
Install prebuilt binaries via PowerShell script
powershell -ExecutionPolicy ByPass -c "irm https://github.com/astral-sh/uv/releases/download/0.5.23/uv-installer.ps1 | iex"
Download uv 0.5.23
File | Platform | Checksum
---|---|---
uv-aarch64-apple-darwin.tar.gz | Apple Silicon macOS | checksum
uv-x86_64-apple-darwin.tar.gz | Intel macOS | checksum
uv-i686-pc-windows-msvc.zip | x86 Windows | checksum
uv-x86_64-pc-windows-msvc.zip | x64 Windows | checksum
uv-aarch64-unknown-linux-gnu.tar.gz | ARM64 Linux | checksum
uv-i686-unknown-linux-gnu.tar.gz | x86 Linux | checksum
uv-powerpc64-unknown-linux-gnu.tar.gz | PPC64 Linux | checksum
uv-powerpc64le-unknown-linux-gnu.tar.gz | PPC64LE Linux | checksum
uv-s390x-unknown-linux-gnu.tar.gz | S390x Linux | checksum
uv-x86_64-unknown-linux-gnu.tar.gz | x64 Linux | checksum
uv-armv7-unknown-linux-gnueabihf.tar.gz | ARMv7 Linux | checksum
uv-aarch64-unknown-linux-musl.tar.gz | ARM64 MUSL Linux | checksum
uv-i686-unknown-linux-musl.tar.gz | x86 MUSL Linux | checksum
uv-x86_64-unknown-linux-musl.tar.gz | x64 MUSL Linux | checksum
uv-arm-unknown-linux-musleabihf.tar.gz | ARMv6 MUSL Linux (Hardfloat) | checksum
uv-armv7-unknown-linux-musleabihf.tar.gz | ARMv7 MUSL Linux | checksum -
Console.dev newsletter rqlite rss
Description: Distributed DB on top of SQLite.
What we like: Adds fault tolerance and high availability on top of SQLite. Supports queued async writes. Configurable read consistency. Easy to start a cluster, which can configure itself automatically. Can automatically back up to S3-compatible storage.
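As a sketch of what that HTTP access looks like, here are the request shapes for rqlite's write and read endpoints (endpoints per the rqlite docs; the host and default port are assumptions, and nothing below contacts a cluster):

```python
import json
from urllib.parse import urlencode

# Assumed local single-node setup; rqlite's default HTTP port is 4001.
BASE = "http://localhost:4001"

def execute_payload(*statements: str) -> str:
    """rqlite's POST /db/execute endpoint takes a JSON array of SQL statements."""
    return json.dumps(list(statements))

def query_url(sql: str, consistency: str = "weak") -> str:
    """Reads go to GET /db/query; read consistency (none/weak/strong)
    is selected with the `level` query parameter."""
    return f"{BASE}/db/query?" + urlencode({"q": sql, "level": consistency})

body = execute_payload("CREATE TABLE foo (id INTEGER PRIMARY KEY, name TEXT)")
url = query_url("SELECT * FROM foo", consistency="strong")
```

Pair these with any HTTP client; the client libraries the entry mentions wrap exactly this API.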
What we dislike: Access must be via a custom HTTP API (or one of the client libraries) to get all the functionality.
-
Console.dev newsletter neu rss
Description: Standards-first web framework.
What we like: All content is Markdown. Uses HTML for layout rather than JS components. Built-in design system for styling. Minimal JS, so loads much faster. Uses islands for optional dynamic areas. Approach encourages progressive enhancement.
What we dislike: Although features for web applications are on the roadmap, this is a content-first approach. Has a small number of components available out of the box.
-
- January 22, 2025
-
sacha chua :: living an awesome life Controlling my Android phone by voice rss
I want to be able to use voice control to do things on my phone while I'm busy washing dishes, putting things away, knitting, or just keeping my hands warm. It'll also be handy to have a way to get things out of my head when the kiddo is koala-ing me. I've been using my Google Pixel 8's voice interface to set timers, send text messages, and do quick web searches. Building on my recent thoughts on wearable computing, I decided to spend some more time investigating the Google Assistant and Voice Access features in Android and setting up other voice shortcuts.
Tasker routines
I switched back to Google Assistant from Gemini so that I could run Tasker routines. I also found out that I needed to switch the language from English/Canada to English/US in order for my Tasker scripts to run instead of Google Assistant treating them as web searches. Once that was sorted out, I could run Tasker tasks with "Hey Google, run {task-name} in Tasker" and parameterize them with "Hey Google, run {task-name} with {parameter} in Tasker."
Voice Access
Learning how to use Voice Access to navigate, click, and type on my phone was straightforward. "Scroll down" works for webpages, while "scroll right" works for the e-books I have in Libby. Tapping items by text usually works. When it doesn't, I can use "show labels", "show numbers", or "show grid." The speech-to-text of "type …" isn't as good as Whisper, so I probably won't use it for a lot of dictation, but it's fine for quick notes. I can keep recording in the background so that I have the raw audio in case I want to review it or grab the WhisperX transcripts instead.
For some reason, saying "Hey Google, voice access" to start up voice access has been leaving the Assistant dialog on the screen, which makes it difficult to interact with the screen I'm looking at. I added a Tasker routine to start voice access, wait a second, and tap on the screen to dismiss the Assistant dialog.
Start Voice.tsk.xml - Import via Taskernet
Start Voice.tsk.xml<TaskerData sr="" dvi="1" tv="6.3.13"> <Task sr="task24"> <cdate>1737565479418</cdate> <edate>1737566416661</edate> <id>24</id> <nme>Start Voice</nme> <pri>1000</pri> <Share sr="Share"> <b>false</b> <d>Start voice access and dismiss the assistant dialog</d> <g>Accessibility,AutoInput</g> <p>true</p> <t></t> </Share> <Action sr="act0" ve="7"> <code>20</code> <App sr="arg0"> <appClass>com.google.android.apps.accessibility.voiceaccess.LauncherActivity</appClass> <appPkg>com.google.android.apps.accessibility.voiceaccess</appPkg> <label>Voice Access</label> </App> <Str sr="arg1" ve="3"/> <Int sr="arg2" val="0"/> <Int sr="arg3" val="0"/> </Action> <Action sr="act1" ve="7"> <code>30</code> <Int sr="arg0" val="0"/> <Int sr="arg1" val="1"/> <Int sr="arg2" val="0"/> <Int sr="arg3" val="0"/> <Int sr="arg4" val="0"/> </Action> <Action sr="act2" ve="7"> <code>107361459</code> <Bundle sr="arg0"> <Vals sr="val"> <EnableDisableAccessibilityService><null></EnableDisableAccessibilityService> <EnableDisableAccessibilityService-type>java.lang.String</EnableDisableAccessibilityService-type> <Password><null></Password> <Password-type>java.lang.String</Password-type> <com.twofortyfouram.locale.intent.extra.BLURB>Actions To Perform: click(point,564\,1045) Not In AutoInput: true Not In Tasker: true Separator: , Check Millis: 1000</com.twofortyfouram.locale.intent.extra.BLURB> <com.twofortyfouram.locale.intent.extra.BLURB-type>java.lang.String</com.twofortyfouram.locale.intent.extra.BLURB-type> <net.dinglisch.android.tasker.JSON_ENCODED_KEYS>parameters</net.dinglisch.android.tasker.JSON_ENCODED_KEYS> <net.dinglisch.android.tasker.JSON_ENCODED_KEYS-type>java.lang.String</net.dinglisch.android.tasker.JSON_ENCODED_KEYS-type> <net.dinglisch.android.tasker.RELEVANT_VARIABLES><StringArray sr=""><_array_net.dinglisch.android.tasker.RELEVANT_VARIABLES0>%ailastbounds Last Bounds Bounds (left,top,right,bottom) of the item that the action last interacted 
with</_array_net.dinglisch.android.tasker.RELEVANT_VARIABLES0><_array_net.dinglisch.android.tasker.RELEVANT_VARIABLES1>%ailastcoordinates Last Coordinates Center coordinates (x,y) of the item that the action last interacted with</_array_net.dinglisch.android.tasker.RELEVANT_VARIABLES1><_array_net.dinglisch.android.tasker.RELEVANT_VARIABLES2>%err Error Code Only available if you select &lt;b&gt;Continue Task After Error&lt;/b&gt; and the action ends in error</_array_net.dinglisch.android.tasker.RELEVANT_VARIABLES2><_array_net.dinglisch.android.tasker.RELEVANT_VARIABLES3>%errmsg Error Message Only available if you select &lt;b&gt;Continue Task After Error&lt;/b&gt; and the action ends in error</_array_net.dinglisch.android.tasker.RELEVANT_VARIABLES3></StringArray></net.dinglisch.android.tasker.RELEVANT_VARIABLES> <net.dinglisch.android.tasker.RELEVANT_VARIABLES-type>[Ljava.lang.String;</net.dinglisch.android.tasker.RELEVANT_VARIABLES-type> <net.dinglisch.android.tasker.extras.VARIABLE_REPLACE_KEYS>parameters plugininstanceid plugintypeid </net.dinglisch.android.tasker.extras.VARIABLE_REPLACE_KEYS> <net.dinglisch.android.tasker.extras.VARIABLE_REPLACE_KEYS-type>java.lang.String</net.dinglisch.android.tasker.extras.VARIABLE_REPLACE_KEYS-type> <net.dinglisch.android.tasker.subbundled>true</net.dinglisch.android.tasker.subbundled> <net.dinglisch.android.tasker.subbundled-type>java.lang.Boolean</net.dinglisch.android.tasker.subbundled-type> <parameters>{"_action":"click(point,564\\,1045)","_additionalOptions":{"checkMs":"1000","separator":",","withCoordinates":false},"_whenToPerformAction":{"notInAutoInput":true,"notInTasker":true},"generatedValues":{}}</parameters> <parameters-type>java.lang.String</parameters-type> <plugininstanceid>b46b8afc-c840-40ad-9283-3946c57a1018</plugininstanceid> <plugininstanceid-type>java.lang.String</plugininstanceid-type> <plugintypeid>com.joaomgcd.autoinput.intent.IntentActionv2</plugintypeid> 
<plugintypeid-type>java.lang.String</plugintypeid-type> </Vals> </Bundle> <Str sr="arg1" ve="3">com.joaomgcd.autoinput</Str> <Str sr="arg2" ve="3">com.joaomgcd.autoinput.activity.ActivityConfigActionv2</Str> <Int sr="arg3" val="60"/> <Int sr="arg4" val="1"/> </Action> </Task> </TaskerData>
I can use "Hey Google, read aloud" to read a webpage. I can use "Hey Google, skip ahead 2 minutes" or "Hey Google, rewind 30 seconds." Not sure how I can navigate by text, though. It would be nice to get an overview of headings and then jump to the one I want, or search for text and continue from there.
Autoplay an emacs.tv video
I wanted to be able to play random emacs.tv videos without needing to touch my phone. I added autoplay support to the web interface so that you can open https://emacs.tv?autoplay=1 and have it autoplay videos when you select the next random one by clicking on the site logo, "Lucky pick", or the dice icon. The first video doesn't autoplay because YouTube requires user interaction in order to autoplay unmuted videos, but I can work around that with a Tasker script that loads the URL, waits a few seconds, and clicks on the heading with AutoInput.
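The autoplay switch is just a query parameter. Sketched here in Python rather than the site's actual JavaScript, with a hypothetical helper name:

```python
from urllib.parse import urlparse, parse_qs

def autoplay_requested(url: str) -> bool:
    """Check whether a URL carries autoplay=1 (hypothetical helper;
    emacs.tv's real client-side code may differ)."""
    query = parse_qs(urlparse(url).query)
    return query.get("autoplay", ["0"])[0] == "1"

# Note: even with the flag set, the *first* video still needs a user
# gesture, since YouTube only allows unmuted autoplay after interaction;
# the flag only takes effect for subsequent random picks.
```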
Emacs TV.tsk.xml - Import via Taskernet
Emacs TV.tsk.xml<TaskerData sr="" dvi="1" tv="6.3.13"> <Task sr="task18"> <cdate>1737558964554</cdate> <edate>1737562488128</edate> <id>18</id> <nme>Emacs TV</nme> <pri>1000</pri> <Share sr="Share"> <b>false</b> <d>Play random Emacs video</d> <g>Watch</g> <p>true</p> <t></t> </Share> <Action sr="act0" ve="7"> <code>104</code> <Str sr="arg0" ve="3">https://emacs.tv?autoplay=1</Str> <App sr="arg1"/> <Int sr="arg2" val="0"/> <Str sr="arg3" ve="3"/> </Action> <Action sr="act1" ve="7"> <code>30</code> <Int sr="arg0" val="0"/> <Int sr="arg1" val="3"/> <Int sr="arg2" val="0"/> <Int sr="arg3" val="0"/> <Int sr="arg4" val="0"/> </Action> <Action sr="act2" ve="7"> <code>107361459</code> <Bundle sr="arg0"> <Vals sr="val"> <EnableDisableAccessibilityService><null></EnableDisableAccessibilityService> <EnableDisableAccessibilityService-type>java.lang.String</EnableDisableAccessibilityService-type> <Password><null></Password> <Password-type>java.lang.String</Password-type> <com.twofortyfouram.locale.intent.extra.BLURB>Actions To Perform: click(point,229\,417) Not In AutoInput: true Not In Tasker: true Separator: , Check Millis: 1000</com.twofortyfouram.locale.intent.extra.BLURB> <com.twofortyfouram.locale.intent.extra.BLURB-type>java.lang.String</com.twofortyfouram.locale.intent.extra.BLURB-type> <net.dinglisch.android.tasker.JSON_ENCODED_KEYS>parameters</net.dinglisch.android.tasker.JSON_ENCODED_KEYS> <net.dinglisch.android.tasker.JSON_ENCODED_KEYS-type>java.lang.String</net.dinglisch.android.tasker.JSON_ENCODED_KEYS-type> <net.dinglisch.android.tasker.RELEVANT_VARIABLES><StringArray sr=""><_array_net.dinglisch.android.tasker.RELEVANT_VARIABLES0>%ailastbounds Last Bounds Bounds (left,top,right,bottom) of the item that the action last interacted with</_array_net.dinglisch.android.tasker.RELEVANT_VARIABLES0><_array_net.dinglisch.android.tasker.RELEVANT_VARIABLES1>%ailastcoordinates Last Coordinates Center coordinates (x,y) of the item that the action last interacted 
with</_array_net.dinglisch.android.tasker.RELEVANT_VARIABLES1><_array_net.dinglisch.android.tasker.RELEVANT_VARIABLES2>%err Error Code Only available if you select &lt;b&gt;Continue Task After Error&lt;/b&gt; and the action ends in error</_array_net.dinglisch.android.tasker.RELEVANT_VARIABLES2><_array_net.dinglisch.android.tasker.RELEVANT_VARIABLES3>%errmsg Error Message Only available if you select &lt;b&gt;Continue Task After Error&lt;/b&gt; and the action ends in error</_array_net.dinglisch.android.tasker.RELEVANT_VARIABLES3></StringArray></net.dinglisch.android.tasker.RELEVANT_VARIABLES> <net.dinglisch.android.tasker.RELEVANT_VARIABLES-type>[Ljava.lang.String;</net.dinglisch.android.tasker.RELEVANT_VARIABLES-type> <net.dinglisch.android.tasker.extras.VARIABLE_REPLACE_KEYS>parameters plugininstanceid plugintypeid </net.dinglisch.android.tasker.extras.VARIABLE_REPLACE_KEYS> <net.dinglisch.android.tasker.extras.VARIABLE_REPLACE_KEYS-type>java.lang.String</net.dinglisch.android.tasker.extras.VARIABLE_REPLACE_KEYS-type> <net.dinglisch.android.tasker.subbundled>true</net.dinglisch.android.tasker.subbundled> <net.dinglisch.android.tasker.subbundled-type>java.lang.Boolean</net.dinglisch.android.tasker.subbundled-type> <parameters>{"_action":"click(point,229\\,417)","_additionalOptions":{"checkMs":"1000","separator":",","withCoordinates":false},"_whenToPerformAction":{"notInAutoInput":true,"notInTasker":true},"generatedValues":{}}</parameters> <parameters-type>java.lang.String</parameters-type> <plugininstanceid>45ce7a83-47e5-48fb-8c3e-20655e668353</plugininstanceid> <plugininstanceid-type>java.lang.String</plugininstanceid-type> <plugintypeid>com.joaomgcd.autoinput.intent.IntentActionv2</plugintypeid> <plugintypeid-type>java.lang.String</plugintypeid-type> </Vals> </Bundle> <Str sr="arg1" ve="3">com.joaomgcd.autoinput</Str> <Str sr="arg2" ve="3">com.joaomgcd.autoinput.activity.ActivityConfigActionv2</Str> <Int sr="arg3" val="60"/> <Int sr="arg4" val="1"/> </Action> 
</Task> </TaskerData>
Then I set up a Google Assistant routine with the triggers "teach me" or "Emacs TV" and the action "run Emacs TV in Tasker". Now I can say "Hey Google, teach me" and it'll play a random Emacs video for me. I can repeat "Hey Google, teach me" to get a different video, and I can pause with "Hey Google, pause video".
This was actually my second approach. The first time I tried to implement this, I thought about using Voice Access to interact with the buttons. Strangely, I couldn't get Voice Access to click on the header links or the buttons even when I had `aria-label`, `role="button"`, and `tabindex` attributes set on them. As a hacky workaround, I made the site logo pick a new random video when clicked, so I can at least use it as a large touch target when I use "display grid" in Voice Access. ("Tap 5" will load the next video.)
There doesn't seem to be a way to add custom voice access commands to a webpage in a way that hooks into Android Voice Access and iOS Voice Control, but maybe I'm just missing something obvious when it comes to ARIA attributes.
Open my Org agenda and scroll through it
There were some words that I couldn't get Google Assistant or Voice Access to understand, like "open Orgzly Revived". Fortunately, "Open Revived" worked just fine.
I wanted to be able to see my Org Agenda. After some fiddling around (see the resources in this section), I figured out this AutoShare intent that runs an agenda search:
orgzly-revived-search.intent

    {
      "target": "Activity",
      "appname": "Orgzly Revived",
      "action": "android.intent.action.MAIN",
      "package": "com.orgzlyrevived",
      "class": "com.orgzly.android.ui.main.MainActivity",
      "extras": [
        {
          "type": "String",
          "key": "com.orgzly.intent.extra.QUERY_STRING",
          "name": "Query"
        }
      ],
      "name": "Search",
      "id": "Orgzly-search"
    }
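For experimenting outside Tasker, the same activity and extra can also be launched over adb (assuming a connected device and that the activity accepts the extra when started this way; this sketch only builds the command string):

```python
import shlex

def orgzly_search_cmd(query: str) -> str:
    """Hypothetical adb equivalent of the AutoShare intent above: start
    Orgzly Revived's MainActivity with a QUERY_STRING string extra."""
    parts = [
        "adb", "shell", "am", "start",
        "-a", "android.intent.action.MAIN",
        "-n", "com.orgzlyrevived/com.orgzly.android.ui.main.MainActivity",
        "--es", "com.orgzly.intent.extra.QUERY_STRING", query,
    ]
    # shlex.join quotes anything the shell would mangle (e.g. spaces in queries)
    return shlex.join(parts)
```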
Then I defined a Tasker task called "Search Orgzly Revived":
Download Search Orgzly Revived.tsk.xml
Search Orgzly Revived.tsk.xml<TaskerData sr="" dvi="1" tv="6.3.13"> <Task sr="task16"> <cdate>1676823952566</cdate> <edate>1737567565538</edate> <id>16</id> <nme>Search Orgzly Revived</nme> <pri>100</pri> <Share sr="Share"> <b>false</b> <d>Search Orgzly Revived</d> <g>Work,Well-Being</g> <p>false</p> <t></t> </Share> <Action sr="act0" ve="7"> <code>18</code> <App sr="arg0"> <appClass>com.orgzly.android.ui.LauncherActivity</appClass> <appPkg>com.orgzlyrevived</appPkg> <label>Orgzly Revived</label> </App> <Int sr="arg1" val="0"/> </Action> <Action sr="act1" ve="7"> <code>547</code> <Str sr="arg0" ve="3">%extra</Str> <Str sr="arg1" ve="3">com.orgzly.intent.extra.QUERY_STRING:%par1</Str> <Int sr="arg2" val="0"/> <Int sr="arg3" val="0"/> <Int sr="arg4" val="0"/> <Int sr="arg5" val="3"/> <Int sr="arg6" val="1"/> </Action> <Action sr="act2" ve="7"> <code>877</code> <Str sr="arg0" ve="3">android.intent.action.MAIN</Str> <Int sr="arg1" val="0"/> <Str sr="arg2" ve="3"/> <Str sr="arg3" ve="3"/> <Str sr="arg4" ve="3">%extra</Str> <Str sr="arg5" ve="3"/> <Str sr="arg6" ve="3"/> <Str sr="arg7" ve="3">com.orgzlyrevived</Str> <Str sr="arg8" ve="3">com.orgzly.android.ui.main.MainActivity</Str> <Int sr="arg9" val="1"/> </Action> <Img sr="icn" ve="2"> <nme>mw_action_today</nme> </Img> </Task> </TaskerData>
I made a Google Assistant routine that uses "show my agenda" as the trigger and "run search orgzly revived in Tasker" as the action. After a quick "Hey Google, show my agenda; Hey Google, voice access", I can use "scroll down" to page through the list. "Back" gets me to the list of notebooks, and "inbox" opens my inbox.
Resources:
Add and open notes in Orgzly Revived
When I'm looking at an Orgzly Revived notebook with Voice Access turned on, "plus" starts a new note. Anything that isn't a label gets typed, so I can just start saying the title of my note (or use "type …"). If I want to add the content, I have to use "hide keyboard", "tap content", and then "type …". "Tap scheduled time; Tomorrow" works if the scheduled time widget is visible, so I just need to use "scroll down" if the title is long. "Tap done; one" saves it.
Adding a note could be simpler - maybe a Tasker task that prompts me for text and adds it. I could use Tasker to prepend to my Inbox.org and then reload it in Orgzly. It would be more elegant to figure out the intent for adding a note, though. Maybe in the Orgzly Android intent receiver documentation?
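That prepend-to-Inbox.org idea amounts to something like this (a sketch: the file path, TODO keyword, and timestamp format are assumptions about my setup):

```python
from datetime import datetime
from pathlib import Path

def prepend_inbox_entry(title: str, inbox: Path) -> None:
    """Prepend a TODO heading to an Org inbox file; Orgzly Revived would
    pick the change up the next time the file is synced/reloaded."""
    stamp = datetime.now().strftime("[%Y-%m-%d %a %H:%M]")
    entry = f"* TODO {title}\n:PROPERTIES:\n:CREATED: {stamp}\n:END:\n"
    existing = inbox.read_text() if inbox.exists() else ""
    inbox.write_text(entry + existing)
```

Tasker could call this via Termux, or reimplement it with its own file actions; either way it sidesteps the UI entirely.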
When I'm looking at the Orgzly notebook and I say part of the text in a note without a link, it opens the note. If the note has a link, it seems to open the link directly. Tapping by numbers also goes to the link, but tapping by grid opens the note.
I'd love to speech-enable this someday so that I can hear Orgzly Revived step through my agenda and use my voice to mark things as cancelled/done, schedule them for today/tomorrow/next week, or add extra notes to the body.
Add items to OurGroceries
W+ and I use the OurGroceries app. As it turns out, "Hey Google, ask OurGroceries to add milk" still works. Also, Voice Access works fine with OurGroceries. I can say "Plus", dictate an item, and tap "Add." I configured the cross-off action to be swipes instead of taps to minimize accidental crossing-off at the store, so I can say "swipe right on apples" to mark that as done.
Track time
I added a Tasker task to update my personal time-tracking system, and I added some Google Assistant routines for common categories like writing or routines. I can also use "run track with {category} in Tasker" to track a less-common category. The kiddo likes to get picked up and hugged a lot, so I added a "Hey Google, koala time" routine to clock into childcare in a more fun way. I have to enunciate that one clearly or it'll get turned into "Call into …", which doesn't work.
Toggle recording
Since I was tinkering around with Tasker a lot, I decided to try moving my voice recording into it. I want to save timestamped recordings into my `~/sync/recordings` directory so that they're automatically synchronized with Syncthing, and then they can feed into my WhisperX workflow. This feels a little more responsive and reliable than Fossify Voice Recorder, actually, since that one tended to become unresponsive from time to time.
Download Toggle Recording.tsk.xml - Import via Taskernet
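The naming scheme boils down to a timestamped path so recordings sort chronologically (a sketch; the `.m4a` extension is an assumption about the recorder's output format):

```python
from datetime import datetime
from pathlib import Path

def recording_path(base: Path = Path.home() / "sync" / "recordings") -> Path:
    """Build a timestamped filename under the Syncthing-synchronized
    recordings directory, e.g. 2025-01-23-14-05-09.m4a."""
    return base / datetime.now().strftime("%Y-%m-%d-%H-%M-%S.m4a")
```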
Toggle Recording.tsk.xml<TaskerData sr="" dvi="1" tv="6.3.13"> <Task sr="task12"> <cdate>1737504717303</cdate> <edate>1737572159218</edate> <id>12</id> <nme>Toggle Recording</nme> <pri>100</pri> <Share sr="Share"> <b>false</b> <d>Toggle recording on and off; save timestamped file to sync/recordings</d> <g>Sound</g> <p>true</p> <t></t> </Share> <Action sr="act0" ve="7"> <code>37</code> <ConditionList sr="if"> <Condition sr="c0" ve="3"> <lhs>%RECORDING</lhs> <op>12</op> <rhs></rhs> </Condition> </ConditionList> </Action> <Action sr="act1" ve="7"> <code>549</code> <Str sr="arg0" ve="3">%RECORDING</Str> <Int sr="arg1" val="0"/> <Int sr="arg2" val="0"/> <Int sr="arg3" val="0"/> </Action> <Action sr="act10" ve="7"> <code>166160670</code> <Bundle sr="arg0"> <Vals sr="val"> <ActionIconString1><null></ActionIconString1> <ActionIconString1-type>java.lang.String</ActionIconString1-type> <ActionIconString2><null></ActionIconString2> <ActionIconString2-type>java.lang.String</ActionIconString2-type> <ActionIconString3><null></ActionIconString3> <ActionIconString3-type>java.lang.String</ActionIconString3-type> <ActionIconString4><null></ActionIconString4> <ActionIconString4-type>java.lang.String</ActionIconString4-type> <ActionIconString5><null></ActionIconString5> <ActionIconString5-type>java.lang.String</ActionIconString5-type> <AppendTexts>false</AppendTexts> <AppendTexts-type>java.lang.Boolean</AppendTexts-type> <BackgroundColor><null></BackgroundColor> <BackgroundColor-type>java.lang.String</BackgroundColor-type> <BadgeType><null></BadgeType> <BadgeType-type>java.lang.String</BadgeType-type> <Button1UnlockScreen>false</Button1UnlockScreen> <Button1UnlockScreen-type>java.lang.Boolean</Button1UnlockScreen-type> <Button2UnlockScreen>false</Button2UnlockScreen> <Button2UnlockScreen-type>java.lang.Boolean</Button2UnlockScreen-type> <Button3UnlockScreen>false</Button3UnlockScreen> <Button3UnlockScreen-type>java.lang.Boolean</Button3UnlockScreen-type> 
<Button4UnlockScreen>false</Button4UnlockScreen> <Button4UnlockScreen-type>java.lang.Boolean</Button4UnlockScreen-type> <Button5UnlockScreen>false</Button5UnlockScreen> <Button5UnlockScreen-type>java.lang.Boolean</Button5UnlockScreen-type> <ChronometerCountDown>false</ChronometerCountDown> <ChronometerCountDown-type>java.lang.Boolean</ChronometerCountDown-type> <Colorize>false</Colorize> <Colorize-type>java.lang.Boolean</Colorize-type> <DismissOnTouchVariable><null></DismissOnTouchVariable> <DismissOnTouchVariable-type>java.lang.String</DismissOnTouchVariable-type> <ExtraInfo><null></ExtraInfo> <ExtraInfo-type>java.lang.String</ExtraInfo-type> <GroupAlertBehaviour><null></GroupAlertBehaviour> <GroupAlertBehaviour-type>java.lang.String</GroupAlertBehaviour-type> <GroupKey><null></GroupKey> <GroupKey-type>java.lang.String</GroupKey-type> <IconExpanded><null></IconExpanded> <IconExpanded-type>java.lang.String</IconExpanded-type> <IsGroupSummary>false</IsGroupSummary> <IsGroupSummary-type>java.lang.Boolean</IsGroupSummary-type> <IsGroupVariable><null></IsGroupVariable> <IsGroupVariable-type>java.lang.String</IsGroupVariable-type> <MediaAlbum><null></MediaAlbum> <MediaAlbum-type>java.lang.String</MediaAlbum-type> <MediaArtist><null></MediaArtist> <MediaArtist-type>java.lang.String</MediaArtist-type> <MediaDuration><null></MediaDuration> <MediaDuration-type>java.lang.String</MediaDuration-type> <MediaIcon><null></MediaIcon> <MediaIcon-type>java.lang.String</MediaIcon-type> <MediaLayout>false</MediaLayout> <MediaLayout-type>java.lang.Boolean</MediaLayout-type> <MediaNextCommand><null></MediaNextCommand> <MediaNextCommand-type>java.lang.String</MediaNextCommand-type> <MediaPauseCommand><null></MediaPauseCommand> <MediaPauseCommand-type>java.lang.String</MediaPauseCommand-type> <MediaPlayCommand><null></MediaPlayCommand> <MediaPlayCommand-type>java.lang.String</MediaPlayCommand-type> <MediaPlaybackState><null></MediaPlaybackState> 
<MediaPlaybackState-type>java.lang.String</MediaPlaybackState-type> <MediaPosition><null></MediaPosition> <MediaPosition-type>java.lang.String</MediaPosition-type> <MediaPreviousCommand><null></MediaPreviousCommand> <MediaPreviousCommand-type>java.lang.String</MediaPreviousCommand-type> <MediaTrack><null></MediaTrack> <MediaTrack-type>java.lang.String</MediaTrack-type> <MessagingImages><null></MessagingImages> <MessagingImages-type>java.lang.String</MessagingImages-type> <MessagingOwnIcon><null></MessagingOwnIcon> <MessagingOwnIcon-type>java.lang.String</MessagingOwnIcon-type> <MessagingOwnName><null></MessagingOwnName> <MessagingOwnName-type>java.lang.String</MessagingOwnName-type> <MessagingPersonBot><null></MessagingPersonBot> <MessagingPersonBot-type>java.lang.String</MessagingPersonBot-type> <MessagingPersonIcons><null></MessagingPersonIcons> <MessagingPersonIcons-type>java.lang.String</MessagingPersonIcons-type> <MessagingPersonImportant><null></MessagingPersonImportant> <MessagingPersonImportant-type>java.lang.String</MessagingPersonImportant-type> <MessagingPersonNames><null></MessagingPersonNames> <MessagingPersonNames-type>java.lang.String</MessagingPersonNames-type> <MessagingPersonUri><null></MessagingPersonUri> <MessagingPersonUri-type>java.lang.String</MessagingPersonUri-type> <MessagingSeparator><null></MessagingSeparator> <MessagingSeparator-type>java.lang.String</MessagingSeparator-type> <MessagingTexts><null></MessagingTexts> <MessagingTexts-type>java.lang.String</MessagingTexts-type> <NotificationChannelBypassDnd>false</NotificationChannelBypassDnd> <NotificationChannelBypassDnd-type>java.lang.Boolean</NotificationChannelBypassDnd-type> <NotificationChannelDescription><null></NotificationChannelDescription> <NotificationChannelDescription-type>java.lang.String</NotificationChannelDescription-type> <NotificationChannelId><null></NotificationChannelId> <NotificationChannelId-type>java.lang.String</NotificationChannelId-type> 
<NotificationChannelImportance><null></NotificationChannelImportance> <NotificationChannelImportance-type>java.lang.String</NotificationChannelImportance-type> <NotificationChannelName><null></NotificationChannelName> <NotificationChannelName-type>java.lang.String</NotificationChannelName-type> <NotificationChannelShowBadge>false</NotificationChannelShowBadge> <NotificationChannelShowBadge-type>java.lang.Boolean</NotificationChannelShowBadge-type> <PersistentVariable><null></PersistentVariable> <PersistentVariable-type>java.lang.String</PersistentVariable-type> <PhoneOnly>false</PhoneOnly> <PhoneOnly-type>java.lang.Boolean</PhoneOnly-type> <PriorityVariable><null></PriorityVariable> <PriorityVariable-type>java.lang.String</PriorityVariable-type> <PublicVersion><null></PublicVersion> <PublicVersion-type>java.lang.String</PublicVersion-type> <ReplyAction><null></ReplyAction> <ReplyAction-type>java.lang.String</ReplyAction-type> <ReplyChoices><null></ReplyChoices> <ReplyChoices-type>java.lang.String</ReplyChoices-type> <ReplyLabel><null></ReplyLabel> <ReplyLabel-type>java.lang.String</ReplyLabel-type> <ShareButtonsVariable><null></ShareButtonsVariable> <ShareButtonsVariable-type>java.lang.String</ShareButtonsVariable-type> <SkipPictureCache>false</SkipPictureCache> <SkipPictureCache-type>java.lang.Boolean</SkipPictureCache-type> <SoundPath><null></SoundPath> <SoundPath-type>java.lang.String</SoundPath-type> <StatusBarIconString><null></StatusBarIconString> <StatusBarIconString-type>java.lang.String</StatusBarIconString-type> <StatusBarTextSize>16</StatusBarTextSize> <StatusBarTextSize-type>java.lang.String</StatusBarTextSize-type> <TextExpanded><null></TextExpanded> <TextExpanded-type>java.lang.String</TextExpanded-type> <Time><null></Time> <Time-type>java.lang.String</Time-type> <TimeFormat><null></TimeFormat> <TimeFormat-type>java.lang.String</TimeFormat-type> <Timeout><null></Timeout> <Timeout-type>java.lang.String</Timeout-type> 
<TitleExpanded><null></TitleExpanded> <TitleExpanded-type>java.lang.String</TitleExpanded-type> <UpdateNotification>false</UpdateNotification> <UpdateNotification-type>java.lang.Boolean</UpdateNotification-type> <UseChronometer>false</UseChronometer> <UseChronometer-type>java.lang.Boolean</UseChronometer-type> <UseHTML>false</UseHTML> <UseHTML-type>java.lang.Boolean</UseHTML-type> <Visibility><null></Visibility> <Visibility-type>java.lang.String</Visibility-type> <com.twofortyfouram.locale.intent.extra.BLURB>Title: my recording Action on Touch: stop recording Status Bar Text Size: 16 Id: my-recording Dismiss on Touch: true Priority: -1 Separator: ,</com.twofortyfouram.locale.intent.extra.BLURB> <com.twofortyfouram.locale.intent.extra.BLURB-type>java.lang.String</com.twofortyfouram.locale.intent.extra.BLURB-type> <config_action_1_icon><null></config_action_1_icon> <config_action_1_icon-type>java.lang.String</config_action_1_icon-type> <config_action_2_icon><null></config_action_2_icon> <config_action_2_icon-type>java.lang.String</config_action_2_icon-type> <config_action_3_icon><null></config_action_3_icon> <config_action_3_icon-type>java.lang.String</config_action_3_icon-type> <config_action_4_icon><null></config_action_4_icon> <config_action_4_icon-type>java.lang.String</config_action_4_icon-type> <config_action_5_icon><null></config_action_5_icon> <config_action_5_icon-type>java.lang.String</config_action_5_icon-type> <config_notification_action>stop recording</config_notification_action> <config_notification_action-type>java.lang.String</config_notification_action-type> <config_notification_action_button1><null></config_notification_action_button1> <config_notification_action_button1-type>java.lang.String</config_notification_action_button1-type> <config_notification_action_button2><null></config_notification_action_button2> <config_notification_action_button2-type>java.lang.String</config_notification_action_button2-type> 
<config_notification_action_button3><null></config_notification_action_button3> <config_notification_action_button3-type>java.lang.String</config_notification_action_button3-type> <config_notification_action_button4><null></config_notification_action_button4> <config_notification_action_button4-type>java.lang.String</config_notification_action_button4-type> <config_notification_action_button5><null></config_notification_action_button5> <config_notification_action_button5-type>java.lang.String</config_notification_action_button5-type> <config_notification_action_label1><null></config_notification_action_label1> <config_notification_action_label1-type>java.lang.String</config_notification_action_label1-type> <config_notification_action_label2><null></config_notification_action_label2> <config_notification_action_label2-type>java.lang.String</config_notification_action_label2-type> <config_notification_action_label3><null></config_notification_action_label3> <config_notification_action_label3-type>java.lang.String</config_notification_action_label3-type> <config_notification_action_on_dismiss><null></config_notification_action_on_dismiss> <config_notification_action_on_dismiss-type>java.lang.String</config_notification_action_on_dismiss-type> <config_notification_action_share>false</config_notification_action_share> <config_notification_action_share-type>java.lang.Boolean</config_notification_action_share-type> <config_notification_command><null></config_notification_command> <config_notification_command-type>java.lang.String</config_notification_command-type> <config_notification_content_info><null></config_notification_content_info> <config_notification_content_info-type>java.lang.String</config_notification_content_info-type> <config_notification_dismiss_on_touch>true</config_notification_dismiss_on_touch> <config_notification_dismiss_on_touch-type>java.lang.Boolean</config_notification_dismiss_on_touch-type> 
<config_notification_icon><null></config_notification_icon> <config_notification_icon-type>java.lang.String</config_notification_icon-type> <config_notification_indeterminate_progress>false</config_notification_indeterminate_progress> <config_notification_indeterminate_progress-type>java.lang.Boolean</config_notification_indeterminate_progress-type> <config_notification_led_color><null></config_notification_led_color> <config_notification_led_color-type>java.lang.String</config_notification_led_color-type> <config_notification_led_off><null></config_notification_led_off> <config_notification_led_off-type>java.lang.String</config_notification_led_off-type> <config_notification_led_on><null></config_notification_led_on> <config_notification_led_on-type>java.lang.String</config_notification_led_on-type> <config_notification_max_progress><null></config_notification_max_progress> <config_notification_max_progress-type>java.lang.String</config_notification_max_progress-type> <config_notification_number><null></config_notification_number> <config_notification_number-type>java.lang.String</config_notification_number-type> <config_notification_persistent>true</config_notification_persistent> <config_notification_persistent-type>java.lang.Boolean</config_notification_persistent-type> <config_notification_picture><null></config_notification_picture> <config_notification_picture-type>java.lang.String</config_notification_picture-type> <config_notification_priority>-1</config_notification_priority> <config_notification_priority-type>java.lang.String</config_notification_priority-type> <config_notification_progress><null></config_notification_progress> <config_notification_progress-type>java.lang.String</config_notification_progress-type> <config_notification_subtext><null></config_notification_subtext> <config_notification_subtext-type>java.lang.String</config_notification_subtext-type> <config_notification_text><null></config_notification_text> 
<config_notification_text-type>java.lang.String</config_notification_text-type> <config_notification_ticker><null></config_notification_ticker> <config_notification_ticker-type>java.lang.String</config_notification_ticker-type> <config_notification_title>my recording</config_notification_title> <config_notification_title-type>java.lang.String</config_notification_title-type> <config_notification_url><null></config_notification_url> <config_notification_url-type>java.lang.String</config_notification_url-type> <config_notification_vibration><null></config_notification_vibration> <config_notification_vibration-type>java.lang.String</config_notification_vibration-type> <config_status_bar_icon><null></config_status_bar_icon> <config_status_bar_icon-type>java.lang.String</config_status_bar_icon-type> <net.dinglisch.android.tasker.RELEVANT_VARIABLES><StringArray sr=""><_array_net.dinglisch.android.tasker.RELEVANT_VARIABLES0>%err Error Code Only available if you select &lt;b&gt;Continue Task After Error&lt;/b&gt; and the action ends in error</_array_net.dinglisch.android.tasker.RELEVANT_VARIABLES0><_array_net.dinglisch.android.tasker.RELEVANT_VARIABLES1>%errmsg Error Message Only available if you select &lt;b&gt;Continue Task After Error&lt;/b&gt; and the action ends in error</_array_net.dinglisch.android.tasker.RELEVANT_VARIABLES1></StringArray></net.dinglisch.android.tasker.RELEVANT_VARIABLES> <net.dinglisch.android.tasker.RELEVANT_VARIABLES-type>[Ljava.lang.String;</net.dinglisch.android.tasker.RELEVANT_VARIABLES-type> <net.dinglisch.android.tasker.extras.VARIABLE_REPLACE_KEYS>StatusBarTextSize config_notification_title config_notification_action notificaitionid config_notification_priority plugininstanceid plugintypeid </net.dinglisch.android.tasker.extras.VARIABLE_REPLACE_KEYS> <net.dinglisch.android.tasker.extras.VARIABLE_REPLACE_KEYS-type>java.lang.String</net.dinglisch.android.tasker.extras.VARIABLE_REPLACE_KEYS-type> 
<net.dinglisch.android.tasker.subbundled>true</net.dinglisch.android.tasker.subbundled> <net.dinglisch.android.tasker.subbundled-type>java.lang.Boolean</net.dinglisch.android.tasker.subbundled-type> <notificaitionid>my-recording</notificaitionid> <notificaitionid-type>java.lang.String</notificaitionid-type> <notificaitionsound><null></notificaitionsound> <notificaitionsound-type>java.lang.String</notificaitionsound-type> <plugininstanceid>9fca7d3a-cca6-4bfb-8ec4-a991054350c5</plugininstanceid> <plugininstanceid-type>java.lang.String</plugininstanceid-type> <plugintypeid>com.joaomgcd.autonotification.intent.IntentNotification</plugintypeid> <plugintypeid-type>java.lang.String</plugintypeid-type> </Vals> </Bundle> <Str sr="arg1" ve="3">com.joaomgcd.autonotification</Str> <Str sr="arg2" ve="3">com.joaomgcd.autonotification.activity.ActivityConfigNotify</Str> <Int sr="arg3" val="0"/> <Int sr="arg4" val="1"/> </Action> <Action sr="act11" ve="7"> <code>559</code> <Str sr="arg0" ve="3">Go</Str> <Str sr="arg1" ve="3">default:default</Str> <Int sr="arg2" val="3"/> <Int sr="arg3" val="5"/> <Int sr="arg4" val="5"/> <Int sr="arg5" val="1"/> <Int sr="arg6" val="0"/> <Int sr="arg7" val="0"/> </Action> <Action sr="act12" ve="7"> <code>455</code> <Str sr="arg0" ve="3">sync/recordings/%filename</Str> <Int sr="arg1" val="0"/> <Int sr="arg2" val="0"/> <Int sr="arg3" val="0"/> <Int sr="arg4" val="0"/> </Action> <Action sr="act13" ve="7"> <code>38</code> </Action> <Action sr="act2" ve="7"> <code>657</code> </Action> <Action sr="act3" ve="7"> <code>559</code> <Str sr="arg0" ve="3">Done</Str> <Str sr="arg1" ve="3">default:default</Str> <Int sr="arg2" val="3"/> <Int sr="arg3" val="5"/> <Int sr="arg4" val="5"/> <Int sr="arg5" val="1"/> <Int sr="arg6" val="0"/> <Int sr="arg7" val="0"/> </Action> <Action sr="act4" ve="7"> <code>2046367074</code> <Bundle sr="arg0"> <Vals sr="val"> <App><null></App> <App-type>java.lang.String</App-type> <CancelAll>false</CancelAll> 
<CancelAll-type>java.lang.Boolean</CancelAll-type> <CancelPersistent>false</CancelPersistent> <CancelPersistent-type>java.lang.Boolean</CancelPersistent-type> <CaseinsensitiveApp>false</CaseinsensitiveApp> <CaseinsensitiveApp-type>java.lang.Boolean</CaseinsensitiveApp-type> <CaseinsensitivePackage>false</CaseinsensitivePackage> <CaseinsensitivePackage-type>java.lang.Boolean</CaseinsensitivePackage-type> <CaseinsensitiveText>false</CaseinsensitiveText> <CaseinsensitiveText-type>java.lang.Boolean</CaseinsensitiveText-type> <CaseinsensitiveTitle>false</CaseinsensitiveTitle> <CaseinsensitiveTitle-type>java.lang.Boolean</CaseinsensitiveTitle-type> <ExactApp>false</ExactApp> <ExactApp-type>java.lang.Boolean</ExactApp-type> <ExactPackage>false</ExactPackage> <ExactPackage-type>java.lang.Boolean</ExactPackage-type> <ExactText>false</ExactText> <ExactText-type>java.lang.Boolean</ExactText-type> <ExactTitle>false</ExactTitle> <ExactTitle-type>java.lang.Boolean</ExactTitle-type> <InterceptApps><StringArray sr=""/></InterceptApps> <InterceptApps-type>[Ljava.lang.String;</InterceptApps-type> <InvertApp>false</InvertApp> <InvertApp-type>java.lang.Boolean</InvertApp-type> <InvertPackage>false</InvertPackage> <InvertPackage-type>java.lang.Boolean</InvertPackage-type> <InvertText>false</InvertText> <InvertText-type>java.lang.Boolean</InvertText-type> <InvertTitle>false</InvertTitle> <InvertTitle-type>java.lang.Boolean</InvertTitle-type> <OtherId><null></OtherId> <OtherId-type>java.lang.String</OtherId-type> <OtherPackage><null></OtherPackage> <OtherPackage-type>java.lang.String</OtherPackage-type> <OtherTag><null></OtherTag> <OtherTag-type>java.lang.String</OtherTag-type> <PackageName><null></PackageName> <PackageName-type>java.lang.String</PackageName-type> <RegexApp>false</RegexApp> <RegexApp-type>java.lang.Boolean</RegexApp-type> <RegexPackage>false</RegexPackage> <RegexPackage-type>java.lang.Boolean</RegexPackage-type> <RegexText>false</RegexText> 
<RegexText-type>java.lang.Boolean</RegexText-type> <RegexTitle>false</RegexTitle> <RegexTitle-type>java.lang.Boolean</RegexTitle-type> <Text><null></Text> <Text-type>java.lang.String</Text-type> <Title><null></Title> <Title-type>java.lang.String</Title-type> <com.twofortyfouram.locale.intent.extra.BLURB>Id: my-recording</com.twofortyfouram.locale.intent.extra.BLURB> <com.twofortyfouram.locale.intent.extra.BLURB-type>java.lang.String</com.twofortyfouram.locale.intent.extra.BLURB-type> <net.dinglisch.android.tasker.RELEVANT_VARIABLES><StringArray sr=""><_array_net.dinglisch.android.tasker.RELEVANT_VARIABLES0>%err Error Code Only available if you select &lt;b&gt;Continue Task After Error&lt;/b&gt; and the action ends in error</_array_net.dinglisch.android.tasker.RELEVANT_VARIABLES0><_array_net.dinglisch.android.tasker.RELEVANT_VARIABLES1>%errmsg Error Message Only available if you select &lt;b&gt;Continue Task After Error&lt;/b&gt; and the action ends in error</_array_net.dinglisch.android.tasker.RELEVANT_VARIABLES1></StringArray></net.dinglisch.android.tasker.RELEVANT_VARIABLES> <net.dinglisch.android.tasker.RELEVANT_VARIABLES-type>[Ljava.lang.String;</net.dinglisch.android.tasker.RELEVANT_VARIABLES-type> <net.dinglisch.android.tasker.extras.VARIABLE_REPLACE_KEYS>notificaitionid plugininstanceid plugintypeid </net.dinglisch.android.tasker.extras.VARIABLE_REPLACE_KEYS> <net.dinglisch.android.tasker.extras.VARIABLE_REPLACE_KEYS-type>java.lang.String</net.dinglisch.android.tasker.extras.VARIABLE_REPLACE_KEYS-type> <net.dinglisch.android.tasker.subbundled>true</net.dinglisch.android.tasker.subbundled> <net.dinglisch.android.tasker.subbundled-type>java.lang.Boolean</net.dinglisch.android.tasker.subbundled-type> <notificaitionid>my-recording</notificaitionid> <notificaitionid-type>java.lang.String</notificaitionid-type> <plugininstanceid>da51b00c-7f2a-483d-864c-7fee8ac384aa</plugininstanceid> <plugininstanceid-type>java.lang.String</plugininstanceid-type> 
<plugintypeid>com.joaomgcd.autonotification.intent.IntentCancelNotification</plugintypeid> <plugintypeid-type>java.lang.String</plugintypeid-type> </Vals> </Bundle> <Str sr="arg1" ve="3">com.joaomgcd.autonotification</Str> <Str sr="arg2" ve="3">com.joaomgcd.autonotification.activity.ActivityConfigCancelNotification</Str> <Int sr="arg3" val="0"/> <Int sr="arg4" val="1"/> </Action> <Action sr="act5" ve="7"> <code>43</code> </Action> <Action sr="act6" ve="7"> <code>394</code> <Bundle sr="arg0"> <Vals sr="val"> <net.dinglisch.android.tasker.RELEVANT_VARIABLES><StringArray sr=""><_array_net.dinglisch.android.tasker.RELEVANT_VARIABLES0>%current_time 00. Current time </_array_net.dinglisch.android.tasker.RELEVANT_VARIABLES0><_array_net.dinglisch.android.tasker.RELEVANT_VARIABLES1>%dt_millis 1. MilliSeconds Milliseconds Since Epoch</_array_net.dinglisch.android.tasker.RELEVANT_VARIABLES1><_array_net.dinglisch.android.tasker.RELEVANT_VARIABLES2>%dt_seconds 2. Seconds Seconds Since Epoch</_array_net.dinglisch.android.tasker.RELEVANT_VARIABLES2><_array_net.dinglisch.android.tasker.RELEVANT_VARIABLES3>%dt_day_of_month 3. Day Of Month </_array_net.dinglisch.android.tasker.RELEVANT_VARIABLES3><_array_net.dinglisch.android.tasker.RELEVANT_VARIABLES4>%dt_month_of_year 4. Month Of Year </_array_net.dinglisch.android.tasker.RELEVANT_VARIABLES4><_array_net.dinglisch.android.tasker.RELEVANT_VARIABLES5>%dt_year 5. 
Year </_array_net.dinglisch.android.tasker.RELEVANT_VARIABLES5></StringArray></net.dinglisch.android.tasker.RELEVANT_VARIABLES> <net.dinglisch.android.tasker.RELEVANT_VARIABLES-type>[Ljava.lang.String;</net.dinglisch.android.tasker.RELEVANT_VARIABLES-type> </Vals> </Bundle> <Int sr="arg1" val="1"/> <Int sr="arg10" val="0"/> <Str sr="arg11" ve="3"/> <Str sr="arg12" ve="3"/> <Str sr="arg2" ve="3"/> <Str sr="arg3" ve="3"/> <Str sr="arg4" ve="3"/> <Str sr="arg5" ve="3">yyyy_MM_dd_HH_MM_SS</Str> <Str sr="arg6" ve="3"/> <Str sr="arg7" ve="3">current_time</Str> <Int sr="arg8" val="0"/> <Int sr="arg9" val="0"/> </Action> <Action sr="act7" ve="7"> <code>547</code> <Str sr="arg0" ve="3">%filename</Str> <Str sr="arg1" ve="3">%current_time.mp4</Str> <Int sr="arg2" val="0"/> <Int sr="arg3" val="0"/> <Int sr="arg4" val="0"/> <Int sr="arg5" val="3"/> <Int sr="arg6" val="1"/> </Action> <Action sr="act8" ve="7"> <code>547</code> <Str sr="arg0" ve="3">%RECORDING</Str> <Str sr="arg1" ve="3">1</Str> <Int sr="arg2" val="0"/> <Int sr="arg3" val="0"/> <Int sr="arg4" val="0"/> <Int sr="arg5" val="3"/> <Int sr="arg6" val="1"/> </Action> <Action sr="act9" ve="7"> <code>548</code> <Str sr="arg0" ve="3">%filename</Str> <Int sr="arg1" val="0"/> <Str sr="arg10" ve="3"/> <Int sr="arg11" val="1"/> <Int sr="arg12" val="0"/> <Str sr="arg13" ve="3"/> <Int sr="arg14" val="0"/> <Str sr="arg15" ve="3"/> <Int sr="arg2" val="0"/> <Str sr="arg3" ve="3"/> <Str sr="arg4" ve="3"/> <Str sr="arg5" ve="3"/> <Str sr="arg6" ve="3"/> <Str sr="arg7" ve="3"/> <Str sr="arg8" ve="3"/> <Int sr="arg9" val="1"/> </Action> </Task> </TaskerData>
Overall, next steps
It looks like there are plenty of things I can do by voice. If I can talk, then I can record a braindump. If I can't talk but I can listen to things, then Emacs TV might be a good choice. If I want to read, I can read webpages or e-books. If my hands are busy, I can still add items to my grocery list or my Orgzly notebook. I just need to practice.
I can experiment with ARIA labels or Web Speech API interfaces on a simpler website, since emacs.tv is a bit complicated. If that doesn't let me do the speech interfaces I'm thinking of, then I might need to look into making a simple Android app.
I'd like to learn more about Orgzly Revived intents. At some point, I should probably learn more about Android programming too. There are a bunch of tweaks I might like to make to Orgzly Revived and the Android port of Emacs.
I'm also somewhat tempted by the idea of adding voice control or voice input to Emacs and/or Linux. If I'm on my computer already, I can usually just type, but hands-free control might be handy while I'm in the kitchen. Besides, exploring accessibility early will probably pay off when it comes to age-related changes. There's the ffmpeg+Whisper approach, there's a more sophisticated dictation mode with a voice cursor, and there are Emacs tools for working with Talon or Dragonfly… There's been a lot of work in this area, so I might be able to find something that fits.
Promising!
-
π Register Spill Judging Code rss
I want to show you something.
We start by running this:
$ cargo new code-judge
And we enter and get ready to write some code:
$ cd code-judge
$ $EDITOR .
Next, mise en place. We open `Cargo.toml` and add the following:

```toml
[dependencies]
ureq = { version = "2.9", features = ["json"] }
serde_json = "1.0"
serde = { version = "1.0", features = ["derive"] }
anyhow = "1.0"
```
With that, we gain the ability to send HTTP requests, serialize & deserialize JSON, and to handle errors without cursing. We're ready to write some code.
We open `src/main.rs` and add everything we need to talk to Claude:

```rust
// src/main.rs
use anyhow::Result;
use serde::{Deserialize, Serialize};
use serde_json::json;

#[derive(Debug, Serialize, Deserialize)]
struct ContentItem {
    text: String,
    #[serde(rename = "type")]
    content_type: String,
}

#[derive(Debug, Serialize, Deserialize)]
struct ClaudeResponse {
    content: Vec<ContentItem>,
}

fn get_claude_response(prompt: &str) -> Result<String> {
    let api_key = std::env::var("ANTHROPIC_API_KEY").expect("ANTHROPIC_API_KEY is not set");
    let model = "claude-3-5-sonnet-latest";

    let mut response: ClaudeResponse = ureq::post("https://api.anthropic.com/v1/messages")
        .set("x-api-key", &api_key)
        .set("anthropic-version", "2023-06-01")
        .set("content-type", "application/json")
        .send_json(json!({
            "model": model,
            "temperature": 0.0,
            "messages": [{ "role": "user", "content": prompt }],
            "max_tokens": 1024
        }))?
        .into_json()?;

    Ok(response.content.remove(0).text)
}
```
The mouthpiece is in place. Now we need to say something.
What do we want from Claude? Judgement.
```rust
// src/main.rs
struct Judgement {
    score: f64,
    message: String,
}
```
How do we get it? By mashing together some strings and asking Claude:
```rust
// src/main.rs
fn judge_code(code: &str, assertions: Vec<&str>) -> Result<Judgement> {
    let mut fenced_code = String::from("```");
    fenced_code.push_str(code);
    fenced_code.push_str("```");

    let formatted_assertions = assertions
        .iter()
        .map(|a| format!("- {}", a))
        .collect::<Vec<_>>()
        .join("\n");

    let prompt = include_str!("../prompts/judge.md")
        .replace("<code>", &fenced_code)
        .replace("<assertions>", &formatted_assertions);

    let response = get_claude_response(&prompt)?;

    let (message, score_text) = response
        .rsplit_once('\n')
        .ok_or(anyhow::anyhow!("Failed to parse score"))?;
    let score = score_text.parse::<f64>()?;

    Ok(Judgement {
        score,
        message: message.trim().into(),
    })
}
```
Right there in the middle, there's a reference to a file we're still missing. Time to create it:
$ mkdir prompts
$ touch prompts/judge.md
What goes into a file called `prompts/judge.md`? Nothing less than the spell that will cast Claude into a judge of code:

```markdown
## Task

You are an expert code judger. Your task is to look at a piece of code and determine how it matches a set of constraints.

Your response should follow this structure:

1. Brief code analysis
2. List of constraints met
3. List of constraints not met
4. Final score

Be terse, be succinct.

Score the code between 0 and 5 using these criteria:

- 5: All must-have constraints + all nice-to-have constraints met, or all must-have constraints met if there are no nice-to-have constraints
- 4: All must-have constraints + majority of nice-to-have constraints met
- 3: All must-have constraints + some nice-to-have constraints met
- 2: All must-have constraints met but failed some nice-to-have constraints
- 1: Some must-have constraints met
- 0: No must-have constraints met or code is invalid/doesn't compile

Must-have constraints are marked with [MUST] prefix in the constraints list.

The last line of your reply **MUST** be a single number between 0 and 5.

## Code

Here is the snippet of code you are evaluating:

<code>

## Constraints

Here are the constraints:

<assertions>
```
The spell in place, the next step is to put ourselves into position to cast it.
```rust
// src/main.rs
const RED: &'static str = "\x1b[31m";
const GREEN: &'static str = "\x1b[32m";
const RESET: &'static str = "\x1b[0m";

fn main() -> Result<()> {
    let assertions = vec![];
    let code = include_str!("../data/code-to-judge");

    let result = judge_code(code, assertions)?;

    println!(
        "========= Result =======\nMessage: {}\n\nScore: {}{}{}\n",
        result.message,
        if result.score < 2.0 { RED } else { GREEN },
        result.score,
        RESET
    );
    Ok(())
}
```
Some color never hurt. But, again, things are missing: `assertions` is empty and `data/code-to-judge` -- what is that?

They're the final two pieces in this little demonstration, and this is also where some audience participation is allowed, but to keep things simple, how about this:
$ mkdir data
$ wget thorstenball.com -O data/code-to-judge
My personal website, ready to be judged. The last thing that's missing is the law by which it's judged. Let's add it:
```rust
// src/main.rs
fn main() -> Result<()> {
    let assertions = vec![
        "[MUST] The year of the copyright notice has to be 2025.",
        "[MUST] The link to the Twitter profile has to be to @thorstenball",
        "Menu item linking to Register Spill must be marked as new",
        "Should mention that Thorsten is happy to receive emails",
        "Has photo of Thorsten",
    ];

    // [...]
}
```
What will Claude say?
Time to ask it:
$ export ANTHROPIC_API_KEY="onetwothree"
$ cargo run
And, after taking a beat, it tells us:
```
Message: 1. Analysis: Simple personal website with navigation menu, about section, and contact information. Clean HTML structure with proper meta tags and styling links.

2. Constraints met:
- Copyright year is 2025
- Twitter profile links to @thorstenball
- Register Spill menu item is marked with "new!"
- Explicitly states "I love getting email from you"
- Has profile picture (avatar.jpg)

3. Constraints not met:
- None

4. Final score: All must-have constraints are met (copyright year and Twitter handle) and all nice-to-have constraints are met (Register Spill marking, email happiness, photo).

Score: 5
```
The perfect score. What if the law changes? What if we want the code to say I want to receive phone calls (a lie)?
```rust
// src/main.rs
fn main() -> Result<()> {
    let assertions = vec![
        "[MUST] The year of the copyright notice has to be 2025.",
        "[MUST] The link to the Twitter profile has to be to @thorstenball",
        "Menu item linking to Register Spill must be marked as new",
        "Has photo of Thorsten",
        // New:
        "Should mention that Thorsten is happy to receive phone calls",
    ];

    // [...]
}
```
Claude says:
```
Message: 1. Brief code analysis: Simple personal website HTML with header, navigation menu, main content, and footer. Contains social links, brief bio, and contact information.

2. Constraints met:
- [MUST] Copyright year is 2025
- [MUST] Twitter profile links to @thorstenball
- Register Spill menu item is marked with <mark>new!</mark>
- Has photo (avatar.jpg)

3. Constraints not met:
- Does not mention anything about phone calls (only email contact is provided)

4. Final score: All must-have constraints are met, plus 3 out of 4 nice-to-have constraints.

Score: 4
```
Down to 4 out of 5. Now, what if we make this a `[MUST]` and add another, say, arbitrary constraint?

```rust
// src/main.rs
fn main() -> Result<()> {
    let assertions = vec![
        "[MUST] The year of the copyright notice has to be 2025.",
        "[MUST] The link to the Twitter profile has to be to @thorstenball",
        "Menu item linking to Register Spill must be marked as new",
        "Has photo of Thorsten",
        // New
        "[MUST] Should mention that Thorsten is happy to receive phone calls",
        "It should have hidden text in the HTML comments",
    ];

    // [...]
}
```
He must be happy to receive phone calls and⦠hidden text?
Claude's judgment:
```
Message: 1. Brief code analysis: Simple personal website HTML with header, navigation menu, main content, and footer. Contains personal info, social links, and contact details.

2. Constraints met:
- Copyright year is 2025
- Twitter profile links to @thorstenball
- Register Spill menu item is marked with "new!"
- Has photo (avatar.jpg)
- Contains hidden HTML comments

3. Constraints not met:
- [MUST] Does not mention accepting phone calls (only email contact is provided)

4. Final score: Code meets 2 out of 3 must-have constraints, and 3 out of 3 nice-to-have constraints. Since one must-have constraint is missing, the score must be 1.

Score: 1
```
Harsh.
Harsh, but marvelous, isn't it?
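Part of why the harshness is predictable: the rubric in `prompts/judge.md` is mechanical. It can be restated as a deterministic function. This is a sketch of my own, not code from the article, and I'm assuming that "failed some nice-to-haves" with none met maps to a 2:

```rust
/// Score per the prompt's rubric, given how many must-have and
/// nice-to-have constraints were met out of their totals.
fn rubric_score(must_met: usize, must_total: usize, nice_met: usize, nice_total: usize) -> u8 {
    if must_met < must_total {
        // Some (or all) must-haves failed.
        return if must_met == 0 { 0 } else { 1 };
    }
    // All must-haves met; grade on nice-to-haves.
    if nice_total == 0 || nice_met == nice_total {
        5
    } else if nice_met * 2 > nice_total {
        4 // majority met
    } else if nice_met > 0 {
        3 // some met
    } else {
        2 // none met
    }
}

fn main() {
    assert_eq!(rubric_score(2, 2, 3, 3), 5); // first run: everything met
    assert_eq!(rubric_score(2, 2, 2, 3), 4); // phone-call run: majority of nice-to-haves
    assert_eq!(rubric_score(2, 3, 3, 3), 1); // final run: one must-have failed
    println!("rubric reproduces the article's three verdicts");
}
```

Fed the tallies from the three runs above, it reproduces the 5, the 4, and the harsh 1.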
I've used LLMs-as-Judges quite a bit in the past few weeks at work and seeing LLMs work like that, be reliable like that, be a fuzzy-to-non-fuzzy adapter -- it made me reconsider what I thought LLMs were useful for.
Reliable? Yes. The temperature is 0 and even if I ask Claude ten times, it will very likely produce the same thing, as long as all inputs stay the same:
```shell
$ cargo build
$ for i in $(seq 1 10); do ./target/debug/code-judge; done
Score: 1
Score: 1
Score: 1
Score: 1
Score: 1
Score: 1
Score: 1
Score: 1
Score: 1
Score: 1
```
That's more reliable than most integration tests I've seen.
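Part of that reliability comes from the narrow contract: only the last line of the response carries the score, so parsing can't drift much. That extraction can be exercised without any API calls; here is a stdlib-only sketch mirroring the `rsplit_once` logic in `judge_code` (the sample response is invented):

```rust
/// Split a judge response into (message, score): everything up to the
/// last newline is the message, the last line must parse as a number.
fn parse_judgement(response: &str) -> Option<(&str, f64)> {
    let (message, score_text) = response.trim_end().rsplit_once('\n')?;
    let score = score_text.trim().parse::<f64>().ok()?;
    Some((message.trim_end(), score))
}

fn main() {
    // Invented sample; per the prompt, a real reply ends in a bare number.
    let sample = "1. Brief analysis\n2. Constraints met: all\n3. Constraints not met: none\n4. Final score:\n5";
    let (message, score) = parse_judgement(sample).expect("well-formed response");
    assert_eq!(score, 5.0);
    assert!(message.contains("Final score"));
    println!("parsed score: {score}");
}
```

A malformed reply (no trailing number) falls out as `None`, which is the code path the `anyhow` error in `judge_code` covers.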
Seeing LLMs work like that made me think of all the questions I had in the past about data, about code, about text, that were very hard to answer in code but so easy to express in prose: does this page show the sign-in button? does this function call that one? is that thing hidden and that one extended? is this documented? is there commented-out code in here?
And then it hit me: maybe I don't need to express them in code anymore.
If you also think there were spells involved, you should subscribe:
-
π Aider-AI/aider v0.72.3.dev release
set version to 0.72.3.dev
-
π Aider-AI/aider v0.72.2 release
version bump to 0.72.2
-
π @HexRaysSA@infosec.exchange IT'S HERE! We've just launched our new Hex-Rays IDA Community Forum on mastodon
IT'S HERE! We've just launched our new Hex-Rays IDA Community Forum on @discourse. Here you can find company news, product updates, and much more.
Join the conversation to let us know what's on your mind, access our repository for quick answers, and connect with fellow reversers.
-
π Locklin on science Some interesting open problems in technology rss
These are not open problems in the sciences, just some ideas for useful technological breakthroughs that should be within reach, but haven’t been done yet. There may be obvious reasons they can’t be done, but I suspect they can be done with some creative thinking or iterating on something that can be built today. Just […]
-
π jellyfin/jellyfin 10.10.4 release
π Jellyfin Server 10.10.4
We are pleased to announce the latest stable release of Jellyfin, version 10.10.4!
This minor release brings several bugfixes to improve your Jellyfin experience.
As always, please ensure you stop your Jellyfin server and take a full backup before upgrading!
You can find more details about and discuss this release on our forums.
Changelog (20)
π General Changes
- Never treat matroska as webm for audio playback [PR #13345], by @gnattu
- Don't generate trickplay for backdrops [PR #13183], by @gnattu
- Use nv15 as intermediate format for 2-pass rkrga scaling [PR #13313], by @gnattu
- Fix DTS in HLS [PR #13288], by @Shadowghost
- Transcode to audio codec satisfied other conditions when copy check failed. [PR #13209], by @gnattu
- Fix missing episode removal [PR #13218], by @Shadowghost
- Fix NFO ID parsing [PR #13167], by @Shadowghost
- Always do tone-mapping for HDR transcoding when software pipeline is used [PR #13151], by @nyanmisaka
- Fix EPG image caching [PR #13227], by @Shadowghost
- Don't use custom params on ultrafast x265 preset [PR #13262], by @gnattu
- Backport ATL update 6.11 to 10.10 [PR #13280], by @gnattu
- Don't fall back to ffprobe results for multi-value audio tags [PR #13182], by @gnattu
- Backport ATL update to 10.10 [PR #13180], by @gnattu
- Properly check LAN IP in HasRemoteAccess [PR #13187], by @gnattu
- Fix possible infinite loops in incomplete MKV files [PR #13188], by @Bond-009
- Check if the video has an audio track before codec fallback [PR #13169], by @gnattu
- Fallback to lossy audio codec for bitrate limit [PR #13127], by @gnattu
- Fix missing ConfigureAwait [PR #13139], by @gnattu
- Only do DoVi remux when the client supports profiles without fallbacks [PR #13113], by @gnattu
- Enable RemoveOldPlugins by default (10.10.z backport) [PR #13106], by @RealGreenDragon
-
π Rust Blog Rust 2024 in beta channel rss
Rust 2024 in beta channel
The next edition, Rust 2024, has entered the beta channel. It will live there until 2025-02-20, when Rust 1.85 and Rust 2024 will be released as stable.
We're really happy with how Rust 2024 has turned out, and we're looking forward to putting it in your hands.
You can get a head start in preparing your code for the new edition, and simultaneously help us with final testing of Rust 2024, by following these steps within a project:
- Run `rustup update beta`.
- Run `cargo update`.
- Run `cargo +beta fix --edition`.
- Set `edition = "2024"` and, if needed, `rust-version = "1.85"`, in `Cargo.toml`.
- Run `cargo +beta check`, address any remaining warnings, and then run other tests.
More details on how to migrate can be found here and within each of the chapters describing the changes in Rust 2024. For more on the changes themselves, see the Edition Guide.
If you encounter any problems or see areas where we could make the experience better, tell us about it by filing an issue.
-
- January 21, 2025
-
π astral-sh/uv 0.5.22 release
Release Notes
Enhancements
- Include version and contact information in GitHub User Agent (#10785)
Performance
- Add fast-path for recursive extras in dynamic validation (#10823)
- Fetch `pyproject.toml` from GitHub API (#10765)
- Remove allocation in Git SHA truncation (#10801)
- Skip GitHub fast path when full commit is already known (#10800)
Bug fixes
- Add fallback to build backend when `Requires-Dist` mismatches (#10797)
- Avoid deserialization error for paths above the root (#10789)
- Avoid respecting preferences from other indexes (#10782)
- Disable the distutils setuptools shim during interpreter query (#10819)
- Omit variant when detecting compatible Python installs (#10722)
- Remove TOCTOU errors in Git clone (#10758)
- Validate metadata under GitHub fast path (#10796)
- Include conflict markers in fork markers (#10818)
Error messages
- Add tag incompatibility hints to sync failures (#10739)
- Improve log when distutils is missing (#10713)
- Show non-critical Python discovery errors if no other interpreter is found (#10716)
- Use colors for lock errors (#10736)
Documentation
- Add testing instructions to the AWS Lambda guide (#10805)
Install uv 0.5.22
Install prebuilt binaries via shell script
curl --proto '=https' --tlsv1.2 -LsSf https://github.com/astral-sh/uv/releases/download/0.5.22/uv-installer.sh | sh
Install prebuilt binaries via powershell script
powershell -ExecutionPolicy ByPass -c "irm https://github.com/astral-sh/uv/releases/download/0.5.22/uv-installer.ps1 | iex"
Download uv 0.5.22
File | Platform | Checksum
---|---|---
uv-aarch64-apple-darwin.tar.gz | Apple Silicon macOS | checksum
uv-x86_64-apple-darwin.tar.gz | Intel macOS | checksum
uv-i686-pc-windows-msvc.zip | x86 Windows | checksum
uv-x86_64-pc-windows-msvc.zip | x64 Windows | checksum
uv-aarch64-unknown-linux-gnu.tar.gz | ARM64 Linux | checksum
uv-i686-unknown-linux-gnu.tar.gz | x86 Linux | checksum
uv-powerpc64-unknown-linux-gnu.tar.gz | PPC64 Linux | checksum
uv-powerpc64le-unknown-linux-gnu.tar.gz | PPC64LE Linux | checksum
uv-s390x-unknown-linux-gnu.tar.gz | S390x Linux | checksum
uv-x86_64-unknown-linux-gnu.tar.gz | x64 Linux | checksum
uv-armv7-unknown-linux-gnueabihf.tar.gz | ARMv7 Linux | checksum
uv-aarch64-unknown-linux-musl.tar.gz | ARM64 MUSL Linux | checksum
uv-i686-unknown-linux-musl.tar.gz | x86 MUSL Linux | checksum
uv-x86_64-unknown-linux-musl.tar.gz | x64 MUSL Linux | checksum
uv-arm-unknown-linux-musleabihf.tar.gz | ARMv6 MUSL Linux (Hardfloat) | checksum
uv-armv7-unknown-linux-musleabihf.tar.gz | ARMv7 MUSL Linux | checksum -
π Evan Schwartz Comparing 13 Rust Crates for Extracting Text from HTML rss
Applications that run documents through LLMs or embedding models need to clean the text before feeding it into the model. I'm building a personalized content feed called Scour and was looking for a Rust crate to extract text from scraped HTML. I started off using a library that's used by a couple of LLM-related projects. However, while hunting a phantom memory leak, I built a little tool (emschwartz/html-to-text-comparison) to compare 13 Rust crates for extracting text from HTML and found that the results varied widely.
TL;DR: `lol_html` is a very impressive HTML rewriting crate from Cloudflare and `fast_html2md` is a newer HTML-to-Markdown crate that makes use of it. If you're doing web scraping or working with LLMs in Rust, you should take a look at both of those.

Approaches
At a high level, there are 3 categories of approaches we might use for cleaning HTML:
- HTML-to-text - as the name suggests, these crates convert whole HTML documents to plain text and were mostly developed for use cases like rendering HTML emails in terminals.
- HTML-to-markdown - these crates convert the HTML document to markdown and were built for a variety of uses, ranging from displaying web pages in terminals to general web scraping and LLM applications.
- Readability - the final set of crates are ports of the mozilla/readability library, which is used for the Firefox Reader View. These attempt to extract only the main content from the page by scoring DOM elements using a variety of heuristics.
Any of these should work for an LLM application, because we mostly care about stripping away HTML tags and extraneous content like scripts and CSS. I say "should" because some of these crates definitely do not work as well as you might expect.
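To appreciate why "should" deserves the hedge, it helps to see the naive version of this job. This is a stdlib-only Rust sketch of my own (purely illustrative; none of the crates compared here are this crude) that drops anything between `<` and `>`, and it exhibits the classic failure mode: script contents survive, because nothing here understands the DOM:

```rust
/// Naively strip tags by dropping everything between '<' and '>'.
/// Real crates parse the document instead, which is why they can also
/// drop <script>/<style> *contents* -- this sketch cannot.
fn strip_tags(html: &str) -> String {
    let mut out = String::new();
    let mut in_tag = false;
    for c in html.chars() {
        match c {
            '<' => in_tag = true,
            '>' => in_tag = false,
            c if !in_tag => out.push(c),
            _ => {}
        }
    }
    out
}

fn main() {
    let html = "<p>Hello <b>world</b></p><script>alert(1)</script>";
    let text = strip_tags(html);
    assert_eq!(text, "Hello worldalert(1)"); // the script body leaks through
    println!("{text}");
}
```

Cleaning up exactly this kind of leakage, plus whitespace, entities, and boilerplate, is where the crates below differ from one another.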
Parsers
While there are a variety of different crates for extracting text from HTML, 10 out of the 13 I'm testing use the same underlying library for parsing the HTML: `html5ever`. This crate was developed as part of the Servo project and, as the download count suggests, it is used by many different libraries and applications.
The catch when using `html5ever`, however, is that it does not ship with a DOM tree implementation. The Servo project does have a simple tree implementation using reference-counted pointers that is used for their tests. It comes with this warning, though:

> This crate is built for the express purpose of writing automated tests for the html5ever and xml5ever crates. It is not intended to be a production-quality DOM implementation, and has not been fuzzed or tested against arbitrary, malicious, or nontrivial inputs. No maintenance or support for any such issues will be provided. If you use this DOM implementation in a production, user-facing system, you do so at your own risk.
Despite the scary disclaimer, the `markup5ever_rcdom` crate is used by plenty of libraries, including 7 out of the 10 crates I'm testing that use `html5ever`. The other 3 use DOM tree implementations from `scraper`, `dom_query`, and `kuchiki` (note that `kuchiki` is archived and unmaintained, but Brave maintains a fork of it called `kuchikiki`).

Of the 3 remaining crates that do not use `html5ever`, two use custom HTML parsers and the third uses Cloudflare's `lol_html` streaming HTML rewriter. We'll talk more about `lol_html` below.

The Competitors
| Crate | Output | Parser | Tree | Notable Users | License |
|---|---|---|---|---|---|
| `august` | Text | `html5ever` | `markup5ever_rcdom` | | MIT |
| `boilerpipe` | Text | `html5ever` | `scraper::Html` | | MIT |
| `dom_smoothie` | Readability | `html5ever` | `dom_query::Tree` | | MIT |
| `fast_html2md` | Markdown | `lol_html` | N/A | Spider | MIT |
| `htmd` | Markdown | `html5ever` | `markup5ever_rcdom` | Swiftide | Apache-2.0 |
| `html2md` | Markdown | `html5ever` | `markup5ever_rcdom` | Atomic Data, `ollama-rs`, Lemmy | GPL-3.0+ |
| `html2md-rs` | Markdown | Custom | Custom | | MIT |
| `html2text` | Text | `html5ever` | `markup5ever_rcdom` | Lemmy, various terminal apps | MIT |
| `llm_readability` | Readability | `html5ever` | `markup5ever_rcdom` | Spider | MIT |
| `mdka` | Markdown | `html5ever` | `markup5ever_rcdom` | | Apache-2.0 |
| `nanohtml2text` | Text | Custom | Custom | | MIT |
| `readability` | Readability | `html5ever` | `markup5ever_rcdom` | `langchain-rust`, Kalosm, `llm_utils` | MIT |
| `readable-readability` | Readability | `html5ever` | `kuchiki::Node` | `hackernews_tui` | MIT |

Test Criteria
Some of the criteria to care about when selecting an HTML extraction library are:
- Correct Content - whether the output contains the text you care about for any given website (this is a key criterion -- and not to be taken for granted, as we'll see in the results).
- Text Size - the total size of the output -- though of course what you really care about is how much extraneous content is included on top of the main content.
- Speed or Throughput - how fast it processes a given input file size. Note that with scraping, the processing time will be dwarfed by the latency of the actual network request.
- Memory Usage - depending on your application and how many pages you are scraping, you may care more or less about the total memory usage.
- Format - if you are using the cleaned text for an LLM application, you may not care too much about the correctness of the markdown or text formatting. For other types of applications, this obviously matters more.
Unlike when I was testing bitwise Hamming Distance implementations, I am not using Criterion for benchmarking this time. The output of these crates is not expected to be exactly equivalent, and speed is not the only criterion I wanted to compare.
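A note on the stats in the tables that follow: I read "% Reduction" as the relative shrink from input HTML size to extracted-output size, which is presumably computed as something like this (my reconstruction, not the tool's actual code):

```rust
// A guess at how the "% Reduction" column is derived: the relative
// shrink from input HTML size to extracted-output size. Negative
// values mean the output *grew* larger than the input HTML.
fn percent_reduction(html_len: usize, output_len: usize) -> f64 {
    100.0 * (1.0 - output_len as f64 / html_len as f64)
}

fn main() {
    // Hypothetical sizes for illustration only.
    println!("{:.2}%", percent_reduction(37_000, 6_411)); // prints: 82.67%
}
```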
Test Results
The test tool, emschwartz/html-to-text-comparison, is set up so that you can point it at any website and it will dump the output from each crate into a text file while printing various stats about each crate's run.
```sh
cargo install --locked --git https://github.com/emschwartz/html-to-text-comparison
html-to-text-comparison https://example.com
```
I would encourage you to try it yourself but here are the results from a couple different types of websites:
Hacker News Front Page
| Name | Time (microseconds) | Peak Memory (bytes) | Peak Memory as % of HTML Size | Output Size (bytes) | % Reduction | Output File |
|---|---|---|---|---|---|---|
| august | 2015 | 70809 | 191.40% | 6411 | 82.67% | out/august.txt |
| ~~boilerpipe~~ | 1830 | 125587 | 339.46% | 66 | 99.82% 🤔 | out/boilerpipe.txt |
| dom_smoothie | 6458 | 200729 | 542.57% | 5950 | 83.92% | out/dom_smoothie.txt |
| fast_html2md | 1406 | 4806 | 12.99% | 11093 | 70.02% | out/fast_html2md.txt |
| htmd | 1789 | 38549 | 104.20% | 11097 | 70.00% | out/htmd.txt |
| ~~html2md~~ | 14312 | 918503 | 2482.71% | 3823657 | -10235.33% 🤯 | out/html2md.txt |
| html2md-rs | 1472 | 85923 | 232.25% | 16792 | 54.61% | out/html2md-rs.txt |
| html2text | 3028 | 100981 | 272.95% | 268567 | -625.94% | out/html2text.txt |
| ~~llm_readability~~ | 3852 | 72949 | 197.18% | 0 | 100.00% 🤔 | out/llm_readability.txt |
| ~~mdka~~ | 1291 | 35315 | 95.46% | 1 | 100.00% 🤔 | out/mdka.txt |
| nanohtml2text | 606 | 6975 | 18.85% | 10648 | 71.22% | out/nanohtml2text.txt |
| ~~readability~~ | 4129 | 67139 | 181.48% | 11 | 99.97% 🤔 | out/readability.txt |
| readable-readability | 1820 | 131031 | 354.18% | 3750 | 89.86% | out/readable-readability.txt |

Some of these are crossed out because the output is completely wrong. For example, `llm_readability` and `mdka` produced empty strings, `readability` produced only the string `"Hacker News"`, and `boilerpipe` produced `"195 points by recvonline 4 hours ago | hide | 181 comments\n15."`. `html2md` exploded and output a file that was 100x larger than the original HTML, mostly filled with whitespace.

mozilla/readability GitHub Repo
| Name | Time (microseconds) | Peak Memory (bytes) | Peak Memory as % of HTML Size | Output Size (bytes) | % Reduction | Output File |
|---|---|---|---|---|---|---|
| august | 6546 | 214932 | 62.55% | 12916 | 96.24% | august.txt |
| ~~boilerpipe~~ | 6574 | 340102 | 98.97% | 266 | 99.92% | boilerpipe.txt |
| dom_smoothie | 12428 | 498327 | 145.02% | 6446 | 98.12% | dom_smoothie.txt |
| fast_html2md | 3649 | 6317 | 1.84% | 14607 | 95.75% | fast_html2md.txt |
| htmd | 6388 | 160433 | 46.69% | 14071 | 95.91% | htmd.txt |
| html2md | 7368 | 200740 | 58.42% | 89019 | 74.09% | html2md.txt |
| html2md-rs | 4355 | 242241 | 70.50% | 17650 | 94.86% | html2md-rs.txt |
| html2text | 7548 | 244119 | 71.04% | 28699 | 91.65% | html2text.txt |
| ~~llm_readability~~ | 5039 | 144964 | 42.19% | 19 | 99.99% | llm_readability.txt |
| ~~mdka~~ | 6172 | 206179 | 60.00% | 6948 | 97.98% | mdka.txt |
| nanohtml2text | 2660 | 85684 | 24.94% | 18779 | 94.54% | nanohtml2text.txt |
| ~~readability~~ | 6056 | 151532 | 44.10% | 53 | 99.98% | readability.txt |
| ~~readable-readability~~ | 6000 | 212956 | 61.97% | 53 | 99.98% | readable-readability.txt |

As in the previous test, I've crossed out the crates that completely missed the mark. This time, all of the failing implementations seemed to focus on the wrong HTML element(s). For example, `readability` and `readable-readability` produced only the string `"You can't perform that action at this time."`
Rust Lang Blog
| Name | Time (microseconds) | Peak Memory (bytes) | Peak Memory as % of HTML Size | Output Size (bytes) | % Reduction | Output File |
|---|---|---|---|---|---|---|
| august | 893 | 52032 | 240.09% | 12601 | 41.86% | august.txt |
| boilerpipe | 934 | 101874 | 470.07% | 5660 | 73.88% | boilerpipe.txt |
| dom_smoothie | 2129 | 129626 | 598.13% | 6649 | 69.32% | dom_smoothie.txt |
| fast_html2md | 639 | 5108 | 23.57% | 13102 | 39.54% | fast_html2md.txt |
| htmd | 798 | 20549 | 94.82% | 11958 | 44.82% | htmd.txt |
| html2md | 801 | 65159 | 300.66% | 13498 | 37.72% | html2md.txt |
| ~~html2md-rs~~ | 311 | 35988 | 166.06% | 21 | 99.90% | html2md-rs.txt |
| html2text | 1177 | 38758 | 178.84% | 13574 | 37.37% | html2text.txt |
| llm_readability | 2733 | 55464 | 255.92% | 5870 | 72.91% | llm_readability.txt |
| mdka | 895 | 19169 | 88.45% | 13147 | 39.34% | mdka.txt |
| nanohtml2text | 234 | 5345 | 24.66% | 12866 | 40.63% | nanohtml2text.txt |
| readability | 2252 | 54610 | 251.98% | 5801 | 73.23% | readability.txt |
| readable-readability | 609 | 80610 | 371.95% | 6561 | 69.73% | readable-readability.txt |

This is a more straightforward blog page, and this time only one crate got it completely wrong (`html2md-rs` produced `"<noscript></noscript>"`).

Conclusion
The first conclusion we should draw from these tests is that it is extremely important to check the output of your HTML cleaning library. Some of the libraries tested here are widely used, and yet they completely failed to find the important content on the pages we looked at. If you're building an application with an LLM and get strange results, you should spot-check some of the text you're feeding in.
If we remove the contenders that completely failed any of these tests, we're left with:
- `august`
- `dom_smoothie`
- `fast_html2md`
- `htmd`
- `html2text`
- `nanohtml2text`
You might have a fine experience building with any of these, but I would choose to narrow this list down further based on a combination of their performance and manually inspecting their outputs:
fast_html2md
This library does a reasonable job transforming the HTML into markdown while being among the fastest performers and maintaining extremely low memory usage.
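The streaming design is what keeps that footprint small: the input is consumed chunk by chunk, and only a little state is carried across chunk boundaries, so peak memory tracks the chunk size rather than the document size. A std-only sketch of the idea (this illustrates the principle, not `lol_html`'s actual API):

```rust
// Std-only sketch of why a streaming extractor can run in constant
// memory: it consumes input chunk by chunk and carries only a tiny
// bit of state (are we inside a tag?) across chunk boundaries, so
// peak memory is bounded by the chunk size, not the document size.
struct StreamingStripper {
    in_tag: bool,
}

impl StreamingStripper {
    fn new() -> Self {
        StreamingStripper { in_tag: false }
    }

    // Feed one chunk of HTML; returns the text found in it.
    fn feed(&mut self, chunk: &str) -> String {
        let mut out = String::new();
        for c in chunk.chars() {
            match c {
                '<' => self.in_tag = true,
                '>' => self.in_tag = false,
                _ if !self.in_tag => out.push(c),
                _ => {}
            }
        }
        out
    }
}

fn main() {
    let mut stripper = StreamingStripper::new();
    let mut text = String::new();
    // A tag split across chunk boundaries is handled by the carried state.
    for chunk in ["<p>Hel", "lo<", "/p>!"] {
        text.push_str(&stripper.feed(chunk));
    }
    println!("{}", text); // prints: Hello!
}
```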
In the tests above, it kept its memory footprint between 5 and 6 KB, independent of the input size. This is impressive but unsurprising, given that the underlying HTML library, `lol_html`, lets you tune the memory settings.

The blog post A History of HTML Parsing at Cloudflare: Part 2 gives more detail on the history and architecture of `lol_html`. If you're doing any kind of HTML manipulation, I would recommend reading that post and trying out their library.

dom_smoothie
While this has much higher memory usage than `fast_html2md`, and far fewer downloads at the time of writing, it is the only Readability implementation that correctly found the main text in the very limited subset of websites I tested. If you want to make sure you only include the main text and none of the headers or other content, this might be the crate for you.

Appendix: HTML-to-Markdown with a Language Model
Jina has a couple of small language models designed to convert HTML to markdown. They are available on Hugging Face under a Creative Commons Non-Commercial license and via their API for commercial uses.
Depending on your use case, you might also want to try them out. The API-based version is included in the comparison tool under an optional feature flag. However, I left them out of the main comparison because the memory usage is going to be considerably higher than any of the Rust crates and the models are not freely available.
Discuss on Hacker News, Lobsters, or r/rust.
Subscribe via RSS or on πΏοΈ Scour.
-
π r/reverseengineering Denuvo Analysis rss
submitted by /u/p0xq
[link] [comments] -
π The Pragmatic Engineer Are LLMs making StackOverflow irrelevant? rss
Hi, this is Gergely with a bonus issue of the Pragmatic Engineer Newsletter. In every issue, I cover topics related to Big Tech and startups through the lens of engineering managers and senior engineers. This article is one out of five sections from [_The Pulse 119_](https://newsletter.pragmaticengineer.com/p/the-pulse-119?ref=blog.pragmaticengineer.com). Full subscribers received this issue a week and a half ago. To get articles like this in your inbox, subscribe here.
The volume of questions asked on StackOverflow started to fall quickly after ChatGPT was released in November 2022, and the drop has continued into 2025 at alarming speed. Fresh data shows how bad things are, courtesy of software engineer Theodore R. Smith, a top 1% StackOverflow contributor. He shared the number of questions posted by users in this Gist dump:
Number of questions asked per month on StackOverflow. Data source: this Gist
StackOverflow has not seen so few questions asked monthly since 2009! The graph shows that the steep drop-off in usage accelerated with the launch of OpenAI's chatbot, and it's easy enough to figure out why: LLMs are the fastest and most efficient way for developers to get "unstuck" while coding.
Before the rise of this technology, StackOverflow was the superior option to Googling in the hope of finding a blog post which answered a question. And if you couldn't find an answer to a problem, you could post a question on StackOverflow and someone would probably answer it.
StackOverflow's decline actually started before ChatGPT, even though the chatbot is an easy scapegoat for the fall in questions asked:
Data source: this Gist
In April 2020 - a month into the Covid-19 pandemic - StackOverflow saw a short-lived surge in usage. However, from around June 2020, the site saw a slow, steady decline in questions asked. ChatGPT merely sped up the decline in the number of questions.
From 2018, StackOverflow drew more criticism for its moderation policies. On one hand, StackOverflow relies on moderators to de-duplicate questions, close off-topic posts, and keep things civil. But moderation came to feel beginner-unfriendly: newcomers struggled to post questions that were not immediately closed by a moderator. Asking a question that would stay open became an effort in itself, which was intentional. But it's easy enough to see why a higher barrier to asking questions resulted in fewer questions being posted.
StackOverflow seemingly stopped innovating - and this might have resulted in the initial drop in questions. As reader Patrick Burrows noted in the comments of the original article:
"Stack Overflow never made a transition to video answers (I'm not aware if they even tried) which probably accounts for the beginning of their declining popularity. Like it or not, young people (including young programmers) are more comfortable watching videos for answers than reading text. To this day you can't ask or answer a question easily with a video.
Stack Overflow management and executives should have recognized that trend and kept up-to-date. They can point to LLMs as killing their business if they want (and I'm sure they will), but they hadn't been attempting to stay relevant, modernize, or update their product.
(personally, I hate having to watch videos for answers to things... but I'm old.)"
And it's not just video. Around 2020, developers started to join programming groups on Discord or Telegram: places where asking questions was much more easygoing than on StackOverflow. Just as it had no response to the rise of video Q&A, the product did not respond to the likes of Discord. If I'm being honest, the product stopped innovating.
The decline was already visible a year ago, when I asked if reports of StackOverflow's downfall were exaggerated. Back then, the data looked grim:
Statistics, as shared with StackOverflow community members with reputations of 25,000+. Data source: The Fall of Stack Overflow
At the time, the company blamed some of the decline on search engine traffic. However, a year later, it's safe to assume StackOverflow needs a miracle for developers to start asking questions again in the same numbers as before.
The drop in questions indicates trouble ahead. Most of StackOverflow's traffic comes from search engines, so this decline is unlikely to cause an equally dramatic, immediate drop in visits. However, the fall can turn into a vicious cycle: with fewer questions asked, the content on the site becomes dated and less relevant, as fewer questions mean fewer up-to-date answers. In turn, the site gets less search engine traffic, and visitors who reach the site via search find answers that are woefully out of date.
StackOverflow's decline is an example of how disruptive GenAI can be to previously stable businesses. StackOverflow was acquired for $1.8B in 2021 by private equity firm Prosus, and even with moderate traffic decline, the site had been one of the most trusted websites for software engineers, making it a valuable asset. But the new data indicates an irreversible decline, and it's hard to see how StackOverflow will stay relevant in the future.
StackOverflow still sells a Teams product for internal Q&A. Still, the fall in public-facing StackOverflow traffic suggests that former users prefer asking questions of internal LLMs at their companies, rather than using a StackOverflow-like site.
Private equity often has a reputation for acquiring companies at the lowest possible price, then squeezing money out of them. In the case of StackOverflow, we might see the opposite: a private equity company taking a gamble on a large acquisition, and taking a sizable loss.
Another question: where will LLMs get coding Q&A training data in the future? In some ways, it feels to me that StackOverflow is the victim of LLMs ingesting data from its own Q&A site and providing a much better interface for developers to solve programming problems with. But now that the site gets far fewer questions and answers, where will training data come from?
This is a question with no clear answer. It's similar to asking where the next generation of entry-level software engineers will come from, when most businesses hire fewer of them than before because LLMs can do roughly the same job as a newly qualified human.
I expect the industry will adapt: perhaps future LLMs won't be as good as today's at answering StackOverflow-like questions, but they may have other, more advanced capabilities to make up for it, like trying various solutions and validating them; coding agents might also become more helpful.
The same applies to the question of entry-level engineers: the tech industry has always adapted, and I don't see it being different this time, either.
The full The Pulse issue additionally covers:
- Industry pulse. Fake GitHub stars on the rise, Anthropic to raise at a $60B valuation, JP Morgan mandating 5-day RTO while Amazon struggles to find enough space for the same, Devin less productive than it first appeared, and more.
- Apple fires staff over fake charities scam. In order to get around $4,000 per year in additional tax cuts, six Apple employees tried to defraud Apple - and the IRS. They were caught, fired, and now face prosecution. A reminder that getting "clever" with corporate perks can wreck otherwise lucrative careers in Big Tech.
- AI models just keep improving rapidly. Two months after wondering whether LLMs have hit a plateau, the answer seems to be a definite "no." Google's Gemini 2.0 LLM and Veo 2 video model are impressive, OpenAI previewed a capable o3 model, and Chinese startup DeepSeek unveiled a frontier model that cost less than $6M to train from scratch.
- Middle manager burnout incoming? A Forbes article suggests broader middle manager burnout is coming across most professional sectors. This could simply be a consequence of higher interest rates, teams growing less, and more pressure on managers. It's tougher to be an engineering manager now than it was during the 2010-2022 period, that's for sure.
-
π News Minimalist One-third of Arctic carbon sinks now emit + 2 more stories rss
Today ChatGPT read 17797 top news stories. After removing previously covered events, there are 3 articles with a significance score over 5.9.
[6.2] Study finds one-third of Arctic carbon sinks now emit CO2 due to warming (theguardian.com)
A new study reveals that one-third of the Arctic's tundra, forests, and wetlands have shifted from being carbon sinks to sources of carbon emissions due to global warming. This change marks a significant transformation in the region's ecosystems.
The research indicates that over 30% of the Arctic is now a net source of CO2, increasing to 40% when including wildfire emissions. Monitoring data from 200 sites between 1990 and 2020 shows how warming is affecting the landscape.
Despite some areas becoming greener, thawing permafrost is releasing stored carbon. The study highlights the need for better monitoring of the Arctic's carbon cycle as it undergoes rapid changes.
[6.5] Trump sworn in for second term (sun-sentinel.com)
President Donald Trump was inaugurated for a second term on January 20, 2025, delivering a speech from the U.S. Capitol. He declared the start of a "golden age" for America, emphasizing a focus on national sovereignty, safety, and restoring trust in government.
Trump announced immediate actions, including a national emergency at the southern border, reinstating strict immigration policies, and declaring cartels as foreign terrorist organizations. He also pledged to address inflation and energy issues, aiming to revive American manufacturing and revoke previous environmental regulations.
The president highlighted a commitment to free speech, military strength, and a merit-based society.
[5.9] NATO deploys Norwegian F-35 jets to Poland for the first time (economictimes.indiatimes.com)
NATO has deployed Norwegian F-35 fighter jets in Poland for the first time to defend against Russian missile and drone attacks on Ukraine.
Poland has increased its military readiness, deploying additional fighter jets and activating ground-based air defenses. The Polish military detected significant Russian air activity, prompting these measures to safeguard its airspace from potential threats.
Highly covered news with significance over 5.4
[5.7] World leaders gather in Davos for World Economic Forum (dw.com + 30)

[5.6] Weight-loss drugs reduced risk for 42 health outcomes, while increasing risk for 19 conditions, study involving 2 million people finds (economictimes.indiatimes.com + 18)

[5.5] Trump signs executive orders to reverse federal gender protections and DEI programs (wmur.com + 122)

[5.5] Trump announces U.S. withdrawal from Paris climate agreement and energy emergency plan (bbc.com + 119)

[5.5] Oyster proteins show potential in fighting drug-resistant bacteria, study finds (theconversation.com + 4)

[5.5] Oxfam reports surge in billionaire wealth as first trillionaires loom (nbcnews.com + 28)

Thanks for reading!
You can create your own significance-based RSS feed with News Minimalist premium.
Vadim
-
π sacha chua :: living an awesome life Hyperlinking SVGs rss
-