28 Dec 2023
My parents’ FRITZ!Box 7530 (FRITZ!OS version 7.57) voicebox had “filled up”, and before clearing all messages I was looking for a way to download all of them as a backup.
There is no really convenient way to do this in the web UI (there were close to 200 messages, some dating back more than 4 years), so I resorted to JavaScript to print a list of curl commands that download each message as a .wav file.
Messages are listed in a table with the following format:
<tr class="thead">
<th class="sortable"></th>
<th class="sortable sort_by_date">Date<span class="sort_no"> </span></th>
<th class="sortable">Name/Number<span class="sort_no"> </span></th>
<th class="sortable">Your Number<span class="sort_no"> </span></th>
<th class="sortable">Duration<span class="sort_no"> </span></th>
<th></th>
</tr>
<tr>
<td class="newicon" datalabel="28.12.23 16:00"> </td>
<td>28.12.23 16:00</td>
<td datalabel="Name/Number">Caller's Name</td>
<td datalabel="Your Number">1234</td>
<td datalabel="Dauer">< 1 Min</td>
<td class="btncolumn" datalabel="">
<button type="submit" class="icon fon_book" id="" name="" value="" title="" disabled=""><img src="/assets/icons/ic_fonbook_add.svg" alt=""></button>
<a class="download icon" href="/cgi-bin/luacgi_notimeout?sid=<session uuid>&script=%2Flua%2Fphoto.lua&myabfile=%2Fdata%2Ftam%2Frec%2F<recording id>">
<button type="submit" class="icon audio" id="play_1" name="play" value="1" title="Play message/Save">
<img src="/assets/icons/ic_triangle_right_blue.svg" alt="Play message/Save">
</button>
<audio preload="auto"></audio>
</a>
<button type="submit" class="icon delete" id="delete_1" name="delete" value="1" title="Delete"></button>
</td>
</tr>
...
We want to ignore the header row and, for every subsequent row, extract the date and the caller’s name or number (to build a meaningful file name for the to-be-downloaded .wav file) as well as the URL of the link. The link points to a script that emits the raw WAV audio as a binary stream.
Here is a quick & dirty piece of JavaScript to paste into the JavaScript console of Chrome’s Developer Tools (keyboard shortcut Ctrl-Shift-I); it generates a list of curl invocations that download each message with a useful file name:
console.log(Array.from(document.querySelectorAll('table#uiTamCalls tr:not(.thead)')).map(e => {
  // children[1] is the date cell, children[2] the caller's name/number;
  // using children (not childNodes) skips any whitespace text nodes.
  return `curl -s -o "${e.children[1].innerText} ${e.children[2].innerText}.wav" '${e.querySelector('a').href}'`;
}).join('\n'));
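Each line of the output looks roughly like this (the host name, session ID and recording ID are placeholders; the actual URL is taken from each row’s link):
curl -s -o "28.12.23 16:00 Caller's Name.wav" 'http://fritz.box/cgi-bin/luacgi_notimeout?sid=<session uuid>&script=%2Flua%2Fphoto.lua&myabfile=%2Fdata%2Ftam%2Frec%2F<recording id>'
Paste the generated commands into a shell (or save them to a script) to download all messages in one go.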
My initial attempt was to download the files via the browser by invoking .click() on each link; however, that turned out to work less well, as Chrome doesn’t really like downloading files over a non-secure connection, requires you to allow multiple simultaneous downloads via a popup box, and even if you do, will only download 10 files at a time.
YMMV if you have a different FRITZ!Box model or FRITZ!OS version; however, I’d expect a similar approach to still work.
01 Jan 2021
apt-key is deprecated; however, there is no replacement available, nor does
the man page document how to replace the commands apt-key provides. Here is
my attempt.
Note: gpg will by default create new keyrings in the (new) “GPG keybox
database version 1” format, whereas apt expects them in the (legacy) “PGP/GPG
key public ring (v4)” format. Specify the prefix gnupg-ring: for the keyring
file to make gpg use the legacy v4 format.
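Incidentally, the quoted format names above are what file(1) reports, so you can use it to check which format a given keyring file is in (the file names here are just examples):
$ file /etc/apt/trusted.gpg
/etc/apt/trusted.gpg: PGP/GPG key public ring (v4) ...
$ file new-keyring.kbx
new-keyring.kbx: GPG keybox database version 1 ...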
apt-key list
This command lists all keys stored in /etc/apt/trusted.gpg and in any .gpg
or .asc files in /etc/apt/trusted.gpg.d. The following loop achieves the
same:
for f in /etc/apt/trusted.gpg /etc/apt/trusted.gpg.d/*.{asc,gpg}; do
  gpg --list-keys --keyid-format short --no-default-keyring --keyring "$f"
done
apt-key adv
This command is used to download a key and store it in the “right” keyring.
apt-key adv merges all keyrings into one, downloads the new key(s) and then
merges the changes back. There is no need to replicate this setup.
Updating an expired key
If you’re updating an expired key, write it to the same keyring, replacing the
expired key. To find any keyrings containing an expired key, run the following:
for f in /etc/apt/trusted.gpg /etc/apt/trusted.gpg.d/*.{asc,gpg}; do
  # grep -q signals a match via its exit status, triggering the echo
  gpg --list-keys --no-default-keyring --keyring "$f" | grep -Fiq expired && echo "Expired key in $f"
done
Once you’ve identified the keyring and key ID, download the new key:
sudo gpg --recv-keys --no-default-keyring --keyring=gnupg-ring:/etc/apt/trusted.gpg.d/<FILENAME>.gpg --keyserver keys.gnupg.net <KEY_ID>
Downloading a new key
When downloading a new key, create a new keyring in /etc/apt/trusted.gpg.d.
Note that on recent versions of gpg, a new keyring would by default be
created in the “GPG keybox database version 1” format, which is incompatible
with apt; the gnupg-ring: prefix below forces the legacy format. Also give
the file a .gpg extension, since apt only picks up .gpg and .asc files in
that directory.
Choose a suitable filename for the new keyring and download the key:
sudo gpg --recv-keys --no-default-keyring --keyring=gnupg-ring:/etc/apt/trusted.gpg.d/<FILENAME>.gpg --keyserver keys.gnupg.net <KEY_ID>
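To double-check the key arrived in the expected keyring (reusing the placeholder file name from above):
gpg --list-keys --keyid-format short --no-default-keyring --keyring /etc/apt/trusted.gpg.d/<FILENAME>.gpg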
30 Dec 2020
When you interact with Google Support via chat, you can ask for a transcript of
the conversation to be sent to you. That transcript is in PDF format though. If
that’s not suitable for you or you forgot to request a transcript, there is a
way out.
Open the chat in its own pop-out window via the arrow icon on the right of
the blue header bar. Then open the Chrome developer console (Ctrl-Shift-J),
paste the following JavaScript code and hit enter to copy a transcript of the
conversation to your clipboard.
const copyToClipboard = str => {
  const el = document.createElement('textarea');
  el.value = str;
  document.body.appendChild(el);
  el.select();
  document.execCommand('copy');
  document.body.removeChild(el);
};
copyToClipboard(Array.from(document.querySelectorAll('.chatsupport_cbf_qb')).map(e => {
  // Message timestamp, taken from the element's "ts" attribute (epoch millis).
  const t = new Date(parseInt(e.getAttribute('ts'), 10)).toISOString();
  if (e.querySelector('.systemMessageUserWrapper')) {
    // Messages in a .systemMessageUserWrapper carry no sender name.
    return `[${t}] ${e.querySelector('.systemMessageUserWrapper').innerText}`;
  } else {
    // Regular messages: sender name from the aria-label, then the message text.
    const n = e.querySelector('.chatsupport_cbf_rb').getAttribute('aria-label');
    const m = e.querySelector('.chatsupport_cbf_ob').innerText;
    return `[${t}][${n}]: ${m}`;
  }
}).join('\n'));
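The resulting transcript has one line per message, looking something like this (timestamps, names and text are made up for illustration):
[2020-12-30T10:15:00.000Z] Chat session started
[2020-12-30T10:15:42.000Z][Support Agent]: Hi, how can I help you today?
[2020-12-30T10:16:05.000Z][Me]: I’d like a copy of my invoice, please.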
23 May 2020
Python packages are distributed via the Python Package Index (PyPI). The
Python Packaging Guide provides details for uploading your project to PyPI.
However, the packaging guide is missing instructions for uploading to the
PyPI test server. This is recommended as a trial run for a new version of
your package, since you can upload a given version of your project only
once. You can use the test server to e.g. verify your README renders
correctly.
Assuming you have finished all the work on the new release of your project,
written the release notes, increased the version number, tagged the release
and are ready to publish, follow these steps:
- If you haven’t yet, create accounts on PyPI and the PyPI test server.
Note that these accounts are entirely independent; however, you may want to
use the same username (but different passwords, of course).
- For security reasons it is strongly recommended to create an API token
instead of using your username and password when uploading a package to
PyPI. If you haven’t done so, create an API token on both PyPI and
the PyPI test server.
You can choose to restrict the token to a single package, which you
should definitely do if you use the API token e.g. in a CI/CD workflow.
For your personal use I suggest you leave the token unrestricted, since
there is no good workflow for switching between multiple API tokens.
Note that the API token will only be displayed once, when you create it,
so make sure you copy it. If you forget to do that, revoke the token and
create a new one.
- Create a .pypirc file in your home directory to store your API tokens
for authentication when uploading, with the following content:
[pypi]
username = __token__
password = pypi-AgEIcH...
[testpypi]
username = __token__
password = pypi-AgENdG...
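Since this file now contains secrets, it is a good idea (though not required by twine) to make it readable only by you:
chmod 600 ~/.pypirc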
- If you haven’t done so, install twine: pip install --upgrade twine.
- Create a source distribution and a wheel for your package:
python setup.py sdist bdist_wheel
- Run twine check on your package files and ensure they pass:
$ twine check dist/*
Checking dist/dokuwikixmlrpc-2020.5.23-py2.py3-none-any.whl: PASSED
Checking dist/dokuwikixmlrpc-2020.5.23.tar.gz: PASSED
This command will report any problems rendering your README.
- Upload your packages to the PyPI test server:
twine upload --repository testpypi dist/*
You should not be prompted for a username or password, since those are
configured in your .pypirc. When successful, this should print the URL of
your package on the test server:
Uploading distributions to https://test.pypi.org/legacy/
Uploading dokuwikixmlrpc-2020.5.23-py2.py3-none-any.whl
100%|████████████████████████████| 10.1k/10.1k [00:01<00:00, 5.23kB/s]
Uploading dokuwikixmlrpc-2020.5.23.tar.gz
100%|████████████████████████████| 9.63k/9.63k [00:01<00:00, 9.39kB/s]
View at:
https://test.pypi.org/project/dokuwikixmlrpc/2020.5.23/
Verify everything is as you expect, e.g. that there are no rendering errors.
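You can also do a test installation from the test server (package name as in
the example output above; the extra index URL lets pip pull dependencies from
the real PyPI, since they may not exist on the test server):
pip install --index-url https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple/ dokuwikixmlrpc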
- Upload your packages to PyPI, fo realz:
twine upload dist/*
You should not be prompted for a username or password, since those are
configured in your .pypirc. When successful, you should see output similar
to the above.
Congratulations! You’ve done it! Your package is now available to the world 🎉
22 Nov 2015
A very frosty November weekend marked the end of Parliament Week and the
fifth anniversary of the Accountability Hack, originally named UK Parliament
Hack, organised by Tracy Green from the Parliament Digital Service, Nick
Halliday from the National Audit Office and Terry Makewell from the
Office for National Statistics with very active support from the
RebelUncut crew.
Hackers and “armchair auditors” were invited to tackle four different
challenges using a diverse set of open data sources:
- NAO: Use spend data and any other data set to improve accountability.
- Parliament: Best use of linked data to improve accountability.
- ONS: Use the ONS OpenAPI to improve accountability.
- Wildcard: Use any three open data sets to improve accountability.
Many prospective participants were deterred by either the freezing cold or by
issues with public transport, so not that many heard Meg Hillier MP give the
introductory address. After that, ideas were thrown around and teams started
forming. I joined Natalia, Mina and Emma, a brilliant trio who were
working on a visualisation of Parliamentary Questions Answered and were
looking for some help with crunching the data and classifying the quality of
answers.
We first of all needed to pull all data from the Parliament’s Linked Data
API in JSON format. Downloading all 63,000 questions in batches of 500
(unfortunately the maximum batch size the API allows) by hand was of course
not an option, so I started by implementing a download script in Python.
Pulling down all questions took several hours due to the rather poor
performance of the API.
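The gist of that script, as a shell sketch; note this is hypothetical: the
endpoint URL is an assumption, the _pageSize/_page parameters follow the
Linked Data API’s conventions, and 126 pages of 500 cover the ~63,000
questions:
# Hypothetical sketch: page through the API, 500 questions at a time.
for page in $(seq 0 125); do
  curl -s "http://lda.data.parliament.uk/answeredquestions.json?_pageSize=500&_page=${page}" \
    -o "questions-page-${page}.json"
done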
In the evening, Kevin ran a very entertaining round of the MLH !LIGHT
challenge, where each contestant has 15 minutes to re-create a given website
(in our case it was the Bootstrap front page) using a very bare-bones
browser-based editor with no syntax highlighting or auto-completion. No
navigating away from the tab to bring up help, and you don’t get to see a
rendered preview of your creation until after you’ve submitted.
The overnight stay at the NAO was again quite comfortable and we could use a
shower in the morning. Sadly, my team from Saturday did not come back;
however, John Sandall, a good friend and brilliant data scientist, arrived in
time for breakfast. We discussed classifying answer quality with an N-gram
analysis using a list of previously identified phrases commonly used to defer
questions. An alternative would be training a text analysis model on the
entire text corpus based on a training set of manually classified answers.
Before getting to work on that we needed to transform the raw data into a
suitable form and identify which attributes were relevant for our analysis.
Halfway through doing that I realised the answer text was missing from the
data and found out it was due to passing a query parameter to the API
(_view=all), which included extra fields but left out the actual answer data.
By that point it was unrealistic to rerun the entire download in time for the
show & tell.
John did however still manage to run some statistical analysis on the data to
answer many interesting questions. Meanwhile I turned my download script into
a “proper” Python package using PyScaffold, uploaded the package to PyPI
and the documentation to Read the Docs - just in time for going up on
stage!
Quite a few extra spectators came along to attend the show & tell with a
rather impressive lineup of 16 projects! I was on stage twice: first to
present the results of our analysis of “Any Questions Answered?”, which had
revealed some interesting insights. Jim Shannon MP asked the most questions,
the Department of Health had to answer the most. The Foreign & Commonwealth
Office was the slowest to respond. Nick Clegg’s questions were ignored the
longest and the Prime Minister referred the highest proportion of questions.
With more time to build up a training set by categorising some questions
manually e.g. for quality or difficulty, we could have trained a Bayes
classifier for the entire corpus.
Later I went on stage again to present DDPy, a command line interface to
interact with the Parliament Linked Data API, which had evolved out of my
download script to pull the Parliamentary Questions Answered. I decided to
solve this once and for all, wrote a generic downloader in Python and put
it on PyPI. Now anyone can easily download any data set after a simple pip
install.
The judges apparently came away quite impressed with both our presentations,
since we received honourable mentions for “Best Analysis of Parliamentary
Data” for Any Questions Answered? and “Best Tool for the Community” for DDPy.
As if that wasn’t enough, I was quite touched to also be awarded a “Community
Spirit Prize”. It was (and continues to be) an honour and pleasure serving
the community!
Any Questions Answered?
DDPy - data.parliament.uk for Humans
Other resources