In my post about migrating from GitHub to Codeberg I was not able to find a suitable alternative for GitHub Actions. With GitHub Actions I built and published Docker images. Since the migration I have found a solution.
The solution is a Forgejo runner deployed to a private Kubernetes cluster. It is connected to Codeberg and runs my actions the same way as it did on GitHub. Let me walk you through the setup.
For the deployment of the Forgejo runner, I have created a Helm chart: https://kubernetes.build/forgejoRunner/README.html. If you are familiar with Kubernetes and Helm, the deployment is straightforward.
The Helm chart requires an instance token to register the runner. This token can be generated and copied from Codeberg. Open https://codeberg.org and click on your profile. Then navigate to Settings > Actions > Runners and click on Create new runner. Copy the registration token.
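With the registration token at hand, the chart can be installed. As a sketch, the deployment values might look like this; the value keys here are assumptions, so check the chart's README for the actual names:

```yaml
# Hypothetical values.yaml — the key names are assumptions, see the chart's README.
runner:
  instance: https://codeberg.org
  token: <registration token from Codeberg>
```

Then deploy with helm install, passing this file via -f values.yaml.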
Once the runner is deployed, check if it shows up on Codeberg. Open Settings > Actions > Runners and you should see your registered runner.
The chart also creates a new cluster role called buildx. This role is used to access the cluster and build the Docker image inside the Kubernetes environment. Export the kubeconfig of this role as described here: https://kubernetes.build/forgejoRunner/README.html#forgejo-buildx-action
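The linked README section describes the export. The result is a kubeconfig bound to the buildx role's service account; it roughly has this shape, where the server URL, names, and credential placeholders are illustrative only:

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: cluster
    cluster:
      server: https://kubernetes.example.com:6443   # placeholder API server URL
      certificate-authority-data: <base64-encoded CA>
contexts:
  - name: buildx
    context:
      cluster: cluster
      user: buildx
current-context: buildx
users:
  - name: buildx
    user:
      token: <service account token>
```

This file is what later goes into the secret used by the workflow.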
The last step is the migration of the GitHub action and enabling actions on the Codeberg repository.
Let’s get started by enabling actions on the repo. Open the Settings page of your repo and click on Units > Overview. Enable the Actions option and save the settings. A new Actions tab and a settings page are shown now.
We assume that you have a GitHub action .github/workflows/build.yml in your repo. Rename the .github folder to .forgejo.
A few modifications of the build.yml workflow are required to get the same results in the Forgejo runner environment.
First we need to define the build environment. Update the build.yml with this container key:
...
jobs:
  build:
    ...
    container: catthehacker/ubuntu:act-latest
The catthehacker/ubuntu:act-latest Docker image replicates the GitHub build environment.
Next we need to grant the Forgejo runner access to the Kubernetes environment.
- name: Checkout code
  uses: actions/checkout@v4

- name: Create Kubeconfig for Buildx
  run: |
    mkdir -p $HOME/.kube
    echo "${{ secrets.KUBECONFIG_BUILDX }}" > $HOME/.kube/config
As you can see, the kubeconfig is loaded from an environment secret. Set up this secret in your organisation or personal account. Open Settings > Actions > Secrets and click Add secret. Enter KUBECONFIG_BUILDX as the name and enter the content of the kubeconfig from the Codeberg setup step.
We are almost done. In order to build and publish a Docker image, these steps have to be added as well:
- name: Set up Docker Buildx
  uses: docker/setup-buildx-action@v3
  with:
    driver: kubernetes
    driver-opts: |
      namespace=codeberg

- name: Login to Docker Registry
  uses: docker/login-action@v3
  with:
    username: janikvonrotz
    password: ${{ secrets.DOCKER_PAT }}
The namespace=codeberg definition must match the namespace the Forgejo runner is deployed in. Add the DOCKER_PAT as a secret to your account.
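With Buildx set up and the registry login in place, a build step does the actual work. A sketch using docker/build-push-action; the image name and tag are placeholders:

```yaml
- name: Build and push
  uses: docker/build-push-action@v6
  with:
    context: .
    push: true
    tags: janikvonrotz/website:latest  # placeholder image name
```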
Check out the full build.yml for reference: https://codeberg.org/janikvonrotz/janikvonrotz.ch/src/branch/main/.forgejo/workflows/build.yml
Everything is ready to run. Once you commit and push the new .forgejo/workflows/build.yml file, the action should pick it up, build the image and push it to the registry.
The image is now ready to deploy. How to trigger a Kubernetes deployment from a Forgejo action will be covered in another post.
My thanks go to Tobias Brunner, who gave me the initial configs for the setup:
Forgejo Runner Helm Chart: https://git.tbrnt.ch/tobru/gitops-zurrli/src/branch/main/apps/zurrli/forgejo-runner
Build and Deploy action: https://git.tbrnt.ch/tobru/tobrublog/src/branch/main/.forgejo/workflows/build-deploy.yaml
Categories: Continuous Integration
When hyperscalers become the problem.
For an event of the Digital Cluster Uri I had the opportunity to give a short presentation on digital sovereignty. The presentation focuses on the so-called hyperscalers and shows why they are a problem.
Categories: Politics , Tech
The concept of abstraction has been applied to software engineering. But it never made sense. Software is flexible. Software can be changed even after it has been put into production.
The layer of abstraction in software is always moving. The definition of software abstraction has nothing in common with the definition of abstraction in engineering a physical product.
Software is fluid: always changing, often hard to reproduce, configurable, riddled with bugs and, most of all, it has to be made sense of. As the code base changes, you always have to update your mental model of how the software works.
We tend to write more code rather than less. We tend to make software more complex rather than simpler. We add more and more features rather than deprecating them. Everything is done under the umbrella of increasing productivity.
However, developing software does not have to be more productive - it has to be contained.
Once you start engineering software with the help of AI you will experience a productivity gain. Not for the overall production of software, but for solving well defined problems in a well known domain. Problems such as bug fixing, commenting, documenting, boiler-plating and explaining are straightforward tasks for AIs.
As a developer you get hooked on running agents in your code base. Solving multiple problems at once. Trying new strategies like git worktrees to run multiple agents at the same time.
The sense of productivity and squashing those bugs overwhelms you. This works until you hit the ceiling. The context window, the amount of tokens to burn through, the invoice, the dependency juggling, the agent getting lost: these are all hard limits that put a stop to the trip.
But what is really happening here? Let's take a step back and have a look. Actually you are not improving the software product, you are bloating the code base! It gets bigger and bigger. There are no limitations on adding new features. The computing resources are limitless. Of course this makes sense: you are not getting paid to write smart code, you are getting paid to produce new features.
So this apparent level of abstraction and productivity gain is in reality just bloat. A waste of resources.
Producing software with Big Tech services now has a direct link to environmental damage. It is well documented that their data centers are not sustainable and do not use sustainable energy sources. They have become an environmental hazard.
In regards to the climate crisis you have a moral obligation to not use their data centers.
So what if we treat AI as what it is? An environmental hazard.
Read this part if you failed to contain the AI and an agent chain reaction has destroyed your code base.
In case your code is radiating and has been polluted by AI-written code, it is time to run the contamination protocols and safety procedures.
First, isolate the AI-written code. Containerize the application and sandbox the execution environment. Add warnings and comments that can be understood years after. Treat your software the same way as you would treat legacy systems.
Once contained, bury the code in the deepest /dev/null you can find. Never touch it again, but also never forget about it. Future generations of coders might be able to untangle and decompose the mess that has been created.
It is well known that GitHub dependabot alerts and PRs are less than helpful. For Hubbers, dependabot is very similar to what Clippy was to Office users. It tries to help, but is very distracting from solving the actual problem.
Disabling dependabot alerts for one repo is simple. Go to https://github.com/$GITHUB_USERNAME/$REPO/settings/security_analysis and click disable. But doing this for 100 or 1,000 repos is not feasible. We need a script to automate this process. Let me show you how.
In order to run the scripts you need to create a personal access token to access the GitHub API. Create a token with read/write access to user and repo here: https://github.com/settings/tokens
And then you are ready to configure and run the script. Simply set the $GITHUB_USERNAME and $GITHUB_TOKEN variables.
#!/bin/bash
GITHUB_USERNAME="janikvonrotz"
GITHUB_TOKEN="*******"

GITHUB_TOTAL_REPOS=$(curl -s -H "Authorization: token $GITHUB_TOKEN" "https://api.github.com/user" | jq '.public_repos + .total_private_repos')
GITHUB_PAGINATION=100
GITHUB_NEEDED_PAGES=$(( (GITHUB_TOTAL_REPOS + GITHUB_PAGINATION - 1) / GITHUB_PAGINATION ))

echo "Found $GITHUB_TOTAL_REPOS repos, processing over $GITHUB_NEEDED_PAGES page(s)..."

for (( PAGE=1; PAGE<=GITHUB_NEEDED_PAGES; PAGE++ )); do
    REPOS=$(curl -s -H "Authorization: token $GITHUB_TOKEN" \
        "https://api.github.com/user/repos?per_page=$GITHUB_PAGINATION&page=$PAGE&type=owner")

    echo "$REPOS" | jq -r '.[] | select(.owner.login == "'"$GITHUB_USERNAME"'") | .full_name' | while read REPO_FULL; do
        echo "Disabling dependabot for: $REPO_FULL"

        # Disable dependabot vulnerability alerts
        curl -s -X DELETE \
            -H "Authorization: token $GITHUB_TOKEN" \
            -H "Accept: application/vnd.github.v3+json" \
            "https://api.github.com/repos/$REPO_FULL/vulnerability-alerts" \
            -w " -> Status: %{http_code}\n" -o /dev/null

        # Disable dependabot automated security fixes (PRs)
        curl -s -X DELETE \
            -H "Authorization: token $GITHUB_TOKEN" \
            -H "Accept: application/vnd.github.v3+json" \
            "https://api.github.com/repos/$REPO_FULL/automated-security-fixes" \
            -w " -> Status: %{http_code}\n" -o /dev/null
    done
done
The script creates a list of repos connected to your account. Then it loops through the list, disabling the alerts.
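The page count in the script uses integer ceiling division. A standalone check of that arithmetic:

```shell
#!/bin/bash
# pages = ceil(total / per_page), computed with integer math only.
per_page=100
for total in 1 100 101 250; do
  pages=$(( (total + per_page - 1) / per_page ))
  echo "$total repos -> $pages page(s)"
done
```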
The script above only works for personal accounts. If you want to disable the alerts for all repos of an organisation, use this script:
#!/bin/bash
# Configuration
ORG_NAME="Mint-System"
GITHUB_TOKEN="*******"

GITHUB_TOTAL_REPOS=$(curl -s -H "Authorization: token $GITHUB_TOKEN" \
    "https://api.github.com/orgs/$ORG_NAME" | jq -r '.public_repos + .total_private_repos')
GITHUB_PAGINATION=100
GITHUB_NEEDED_PAGES=$(( (GITHUB_TOTAL_REPOS + GITHUB_PAGINATION - 1) / GITHUB_PAGINATION ))

echo "Found $GITHUB_TOTAL_REPOS repos in org '$ORG_NAME', processing over $GITHUB_NEEDED_PAGES page(s)..."

# Loop through each page of organization repositories
for (( PAGE=1; PAGE<=GITHUB_NEEDED_PAGES; PAGE++ )); do
    REPOS=$(curl -s -H "Authorization: token $GITHUB_TOKEN" \
        "https://api.github.com/orgs/$ORG_NAME/repos?per_page=$GITHUB_PAGINATION&page=$PAGE&type=public")

    echo "$REPOS" | jq -r '.[] | .full_name' | while read REPO_FULL; do
        echo "Disabling dependabot for: $REPO_FULL"

        # Disable dependabot vulnerability alerts
        curl -s -X DELETE \
            -H "Authorization: token $GITHUB_TOKEN" \
            -H "Accept: application/vnd.github.v3+json" \
            "https://api.github.com/repos/$REPO_FULL/vulnerability-alerts" \
            -w " -> Status: %{http_code}\n" -o /dev/null

        # Disable dependabot automated security fixes (PRs)
        curl -s -X DELETE \
            -H "Authorization: token $GITHUB_TOKEN" \
            -H "Accept: application/vnd.github.v3+json" \
            "https://api.github.com/repos/$REPO_FULL/automated-security-fixes" \
            -w " -> Status: %{http_code}\n" -o /dev/null
    done
done
You can use the same access token. Simply set the org name and you are good to go.
Categories: Software development
Since the enshittification of GitHub I decided to become a Berger instead of a Hubber. This means that I wanted to move all my repos from github.com to codeberg.org.
Running a migration script is easy. But of course there are many details to consider once the repos have been moved. In this post I’ll brief you on my experience and give you details on these challenges:
As mentioned, running a migration script that copies the repos from GitHub to Codeberg is easy.
The heavy work was done by https://github.com/LionyxML/migrate-github-to-codeberg. In order to run this script you need to create a token with read/write access to user, organisation and repo for Codeberg. Create the token here: https://codeberg.org/user/settings/applications. Then you do the same for GitHub. Create a token with read/write access to user and repo here: https://github.com/settings/tokens
Clone the migration script and update the variables. Run the script ./migrate_github_to_codeberg.sh and you should get an output like this:
>>> Migrating: Bundesverfassung (public)...
Success!
>>> Migrating: Hack4SocialGood (public)...
Success!
>>> Migrating: Quarto (public)...
Success!
>>> Migrating: WebPrototype (public)...
Success!
>>> Migrating: Website (public)...
Success!
>>> Migrating: raspi-and-friends (public)...
Success!
One issue was that the script migrated all repos from all connected organisations. I had to delete all those repos on Codeberg. The following script helped doing so:
#!/bin/bash
CODEBERG_USERNAME="janikvonrotz"
CODEBERG_TOKEN="*******"

repos_response=$(curl -s -f -X GET \
    https://codeberg.org/api/v1/users/$CODEBERG_USERNAME/repos \
    -H "Authorization: token $CODEBERG_TOKEN")

if [ $? -eq 0 ]; then
    repo_names=($(echo "$repos_response" | jq -r '.[] | .name'))
    for repo_name in "${repo_names[@]}"; do
        echo "Deleting repository $repo_name..."
        delete_response=$(curl -s -f -w "%{http_code}" -X DELETE \
            https://codeberg.org/api/v1/repos/$CODEBERG_USERNAME/$repo_name \
            -H "Authorization: token $CODEBERG_TOKEN")
        if [ "$delete_response" -eq 204 ]; then
            echo "Repository $repo_name deleted successfully."
        else
            echo "Failed to delete repository $repo_name. Status code: $delete_response"
        fi
    done
else
    echo "Failed to retrieve repository list."
fi
I had to run the script multiple times because of the API paging.
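A way around the repeated runs is to page through the API until it returns an empty list. A sketch of that control flow, with the curl request replaced by a hypothetical fetch_page stub:

```shell
#!/bin/bash
# fetch_page stands in for the paged API request; replace it with the real curl call
# using the API's `page` query parameter.
fetch_page() {
  case "$1" in
    1) echo "repo-a repo-b" ;;
    2) echo "repo-c" ;;
    *) echo "" ;;  # an empty page signals the end
  esac
}

page=1
repos_all=""
while true; do
  repos=$(fetch_page "$page")
  [ -z "$repos" ] && break
  repos_all="$repos_all $repos"
  page=$((page + 1))
done
echo "Collected:$repos_all"
```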
To ensure the migration only covers repos that are assigned to my account, I had to set the owners variable in the script:
OWNERS=(
"janikvonrotz"
)
And with another run of ./migrate_github_to_codeberg.sh the script copied all repos from GitHub to Codeberg.
I use Vercel to build and publish my static website. If you use Netlify you probably face the same problem. Vercel is tightly integrated with GitHub. At the time of writing this post there was no integration for Codeberg available. So it was either stick with GitHub or get rid of the integration.
I decided to get rid of it and uninstall the Vercel app on GitHub. You can access your GitHub apps here: https://github.com/settings/installations
This will cause the Vercel projects to be disconnected from the GitHub project and thus they will no longer be deployed automatically.
To deploy the websites you can use the Vercel CLI. It is a piece of cake once you are logged in. Here is an example of such a deployment:
[main][~/taskfile.build]$ vercel --prod
Vercel CLI 44.7.3
🔍 Inspect: https://vercel.com/janik-vonrotz/taskfile-build/4zoKQnE7osV9udRnUyBYdX9EzNif [3s]
✅ Production: https://taskfile-build-5vtumwoja-janik-vonrotz.vercel.app [3s]
2025-08-20T11:20:17.987Z Running build in Washington, D.C., USA (East) – iad1
2025-08-20T11:20:17.988Z Build machine configuration: 2 cores, 8 GB
2025-08-20T11:20:18.006Z Retrieving list of deployment files...
2025-08-20T11:20:18.518Z Downloading 54 deployment files...
2025-08-20T11:20:19.233Z Restored build cache from previous deployment (BuULWL9zESMSfPa8QYN5gyoGA4JP)
2025-08-20T11:20:21.264Z Running "vercel build"
2025-08-20T11:20:21.727Z Vercel CLI 46.0.2
2025-08-20T11:20:22.364Z Detected `pnpm-lock.yaml` 9 which may be generated by pnpm@9.x or pnpm@10.x
2025-08-20T11:20:22.365Z Using pnpm@9.x based on project creation date
2025-08-20T11:20:22.365Z To use pnpm@10.x, manually opt in using corepack (https://vercel.com/docs/deployments/configure-a-build#corepack)
2025-08-20T11:20:22.380Z Installing dependencies...
2025-08-20T11:20:23.109Z Lockfile is up to date, resolution step is skipped
2025-08-20T11:20:23.148Z Already up to date
2025-08-20T11:20:23.992Z
2025-08-20T11:20:24.001Z Done in 1.4s using pnpm v9.15.9
2025-08-20T11:20:26.202Z [11ty] Writing ./_site/index.html from ./README.md (liquid)
2025-08-20T11:20:26.208Z [11ty] Benchmark 73ms 19% 1× (Configuration) "@11ty/eleventy/html-transformer" Transform
2025-08-20T11:20:26.208Z [11ty] Copied 4 Wrote 1 file in 0.38 seconds (v3.0.0)
2025-08-20T11:20:26.303Z Build Completed in /vercel/output [4s]
2025-08-20T11:20:26.399Z Deploying outputs...
Of course it is possible to set up a CI job that installs the Vercel CLI and runs the prod deployment.
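Such a job could be a single workflow step. Sketched below; the VERCEL_TOKEN secret name is an assumption, and the token would first have to be created in the Vercel account settings:

```yaml
- name: Deploy to Vercel
  env:
    VERCEL_TOKEN: ${{ secrets.VERCEL_TOKEN }}  # assumed secret name
  run: |
    npm install --global vercel
    vercel --prod --token "$VERCEL_TOKEN" --yes
```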
For all the local git repos you need to update the remote. The local remote URL will still point to github.com and needs to be replaced with the codeberg.org URL. The following script finds git repos in the home folder and updates the matching URL:
#!/bin/bash
OLD_URL="git@github.com:janikvonrotz/"
NEW_URL="git@codeberg.org:janikvonrotz/"

for REPO in $(find "$HOME" -maxdepth 2 -type d -name '.git'); do
    DIR=$(dirname "$REPO")
    cd "$DIR"
    CURRENT_URL=$(git config --get remote.origin.url)
    NEW_CURRENT_URL=$(echo "$CURRENT_URL" | sed "s|$OLD_URL|$NEW_URL|")
    if [ "$NEW_CURRENT_URL" != "$CURRENT_URL" ]; then
        git remote set-url origin "$NEW_CURRENT_URL"
        echo "Updated origin URL for $(basename "$(pwd)") to: $NEW_CURRENT_URL"
    fi
done
Submodule links in the .gitmodules file have to be updated manually.
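The manual step can still be script-assisted: the same sed rewrite used for the remotes also applies to .gitmodules, followed by git submodule sync. A sketch against a throwaway file, where the submodule name and URL are made up:

```shell
#!/bin/bash
# Demonstrate the URL rewrite on a temporary copy of a .gitmodules file.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
[submodule "theme"]
    path = theme
    url = git@github.com:janikvonrotz/theme.git
EOF
sed -i 's|github\.com|codeberg.org|g' "$tmp"
cat "$tmp"
# In a real repo: run the sed on .gitmodules itself, then `git submodule sync`.
```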
Not only the git remote links to github.com, but also content stored in the repo. I often add a git clone command to the usage section in the README.md. The clone URL has to be updated.
I was able to solve this issue with a semi-automated approach. I created several search-and-replace commands that look for github.com link patterns. The search patterns spare external github.com links that had to be preserved.
On the command line I entered the repo and ran the replacement commands:
# 1. Fix github.com/blob → codeberg.org/src/branch
rg 'github\.com(:|/)(janikvonrotz)/[^/]+/blob/(main|master)' -l | \
xargs sed -i 's|github\.com\(:\|/\)\(janikvonrotz\)/\([^/]\+\)/blob/\(main\|master\)\(/[^"]*\)\?|codeberg.org/\2/\3/src/branch/\4\5|g'
# 2. Fix github.com/tree → codeberg.org/src/branch
rg 'github\.com(:|/)(janikvonrotz)/[^/]+/tree/(main|master)' -l | \
xargs sed -i 's|github\.com\(:\|/\)\(janikvonrotz\)/\([^/]\+\)/tree/\(main\|master\)\(/[^"]*\)\?|codeberg.org/\2/\3/src/branch/\4\5|g'
# 3. Fix raw.githubusercontent.com → codeberg.org/raw/branch
rg 'raw\.githubusercontent\.com/(janikvonrotz)/[^/]+/(main|master)' -l | \
xargs sed -i 's|raw\.githubusercontent\.com/\(janikvonrotz\)/\([^/]\+\)/\(main\|master\)\(/[^"]*\)\?|codeberg.org/\1/\2/raw/branch/\3\4|g'
# 4. Fix bare repo URLs: github.com/user/repo → codeberg.org/user/repo
rg 'github\.com(:|/)janikvonrotz/[^/"?#]+' -l | \
xargs sed -i 's|github\.com\(:\|/\)\(janikvonrotz\)/\([^/"?#]\+\)|codeberg.org/\2/\3|g'
# 5. Fix user profile URLs
rg 'https://github\.com/janikvonrotz\b' -l | \
xargs sed -i 's|https://github\.com/janikvonrotz|https://codeberg.org/janikvonrotz|g'
rg 'https://github\.com/jankvonrotz\b' -l | \
xargs sed -i 's|https://github\.com/jankvonrotz|https://codeberg.org/jankvonrotz|g'
In some cases simply replacing a link was not possible. For example, VuePress links to GitHub by default and I had to change the .vuepress/config.js manually:
repo: 'https://codeberg.org/janikvonrotz/$REPO',
repoLabel: 'Codeberg',
docsBranch: 'main',
Nonetheless replacing the links was easier than expected.
For my personal repos I didn’t run a lot of GitHub Actions. One of the few was this action:
https://github.com/janikvonrotz/janikvonrotz.ch/blob/main/.github/workflows/build.yml
It builds and pushes a Docker image to a Docker registry.
Codeberg offers two ways to run jobs.
There is the Woodpecker CI:
https://docs.codeberg.org/ci/#using-codeberg's-instance-of-woodpecker-ci
And there are Forgejo Actions:
https://docs.codeberg.org/ci/actions/#installing-forgejo-runner
I decided to use Forgejo Actions. First I enabled Forgejo Actions in the repo settings. Next I created the DOCKER_PAT secret in the user settings: https://codeberg.org/user/settings/actions/secrets
Forgejo Actions support the same YAML spec, and thus I only needed to rename the .github folder to .forgejo. I pushed the changes and the first run was created:
https://codeberg.org/janikvonrotz/janikvonrotz.ch/actions/runs/1
However, the run was waiting for the default Forgejo runner, which does not seem to be meant for public use. So I decided to provide my own Forgejo runner.
I created a Helm chart to deploy a Forgejo runner:
https://kubernetes.build/forgejoRunner/README.html
Further I updated the .forgejo/workflows/build.yml to use the provided runner. The setup worked, but it turned out that most of the CI dependencies are not in the YAML but on the runner. As I understand it, GitHub Action runners are actual virtual machines on Azure. Replicating these environments is not possible. Also, building a multi-platform Docker image with Docker-in-Docker inside a Kubernetes cluster is not the best idea.
I decided to put this issue on hold. As an alternative I setup a mirror from the Codeberg repo to GitHub (see section below).
It is not possible to redirect repo visitors automatically from GitHub to Codeberg. I decided to update the repo description with a link to the new location. The following script walks through the GitHub repos and updates the description:
#!/bin/bash
CODEBERG_URL="https://codeberg.org/janikvonrotz/"
GITHUB_USERNAME="janikvonrotz"
GITHUB_TOKEN="*******"
GITHUB_PAGINATION=100

github_total_repos=$(curl -s -H "Authorization: token $GITHUB_TOKEN" "https://api.github.com/user" | jq '.public_repos + .total_private_repos')
github_needed_pages=$(( (github_total_repos + GITHUB_PAGINATION - 1) / GITHUB_PAGINATION ))

for ((github_page_counter = 1; github_page_counter <= github_needed_pages; github_page_counter++)); do
    repos=$(curl -s -H "Authorization: token $GITHUB_TOKEN" "https://api.github.com/user/repos?per_page=${GITHUB_PAGINATION}&page=${github_page_counter}")
    for repo in $(echo "$repos" | jq -r '.[] | select(.owner.login == "'"$GITHUB_USERNAME"'") | .name'); do
        echo "Update repo description for $GITHUB_USERNAME/$repo:"
        new_description="This repository has been moved to $CODEBERG_URL$repo. Please visit the new location for the latest updates."
        curl -X PATCH \
            -H "Authorization: token $GITHUB_TOKEN" \
            -H "Content-Type: application/json" \
            https://api.github.com/repos/$GITHUB_USERNAME/$repo \
            -d "{\"description\":\"$new_description\"}"
    done
done
Archiving a repo on GitHub means that it is no longer maintained there. The archived repo also becomes read-only. With the following script I archived all my GitHub repos:
#!/bin/bash
ARCHIVE_MESSAGE="Repository migrated to Codeberg."
GITHUB_USERNAME="janikvonrotz"
GITHUB_TOKEN="*******"
GITHUB_PAGINATION=100

github_total_repos=$(curl -s -H "Authorization: token $GITHUB_TOKEN" "https://api.github.com/user" | jq '.public_repos + .total_private_repos')
github_needed_pages=$(( (github_total_repos + GITHUB_PAGINATION - 1) / GITHUB_PAGINATION ))

for ((github_page_counter = 1; github_page_counter <= github_needed_pages; github_page_counter++)); do
    repos=$(curl -s -H "Authorization: token $GITHUB_TOKEN" "https://api.github.com/user/repos?per_page=${GITHUB_PAGINATION}&page=${github_page_counter}")
    for repo in $(echo "$repos" | jq -r '.[] | select(.owner.login == "'"$GITHUB_USERNAME"'") | .name'); do
        echo "Archive repo $GITHUB_USERNAME/$repo:"
        curl -X PATCH \
            -H "Authorization: token $GITHUB_TOKEN" \
            -H "Content-Type: application/json" \
            https://api.github.com/repos/$GITHUB_USERNAME/$repo \
            -d "{\"archived\":true,\"archive_message\":\"$ARCHIVE_MESSAGE\"}"
    done
done
Mirroring a repo to GitHub solved the problem I had with running the GitHub Actions in the Codeberg environment. It is possible to mirror a Codeberg repo to GitHub, and thus you can trigger GitHub Actions with the push of a commit.
In the mirror settings of your repo (in my case https://codeberg.org/janikvonrotz/janikvonrotz.ch/settings) you can set up a push URL. Enter the same credentials as used in the migration script and make sure to tick the push-on-commit box.
Not being able to run my own Forgejo runner was very frustrating. I think CI should not be that hard. I will try to set up a Forgejo runner on a bare-metal VM and build my website image with it.
Overall, moving my personal repos from GitHub to Codeberg was easy. I have not yet considered moving the repos of my organisation. I think this will be a much more difficult challenge. The organisation repos are integrated deeply into many other projects. The best approach I can think of is mirroring the repos from GitHub to Codeberg, starting the transition with one repo and then moving along the linked repos.
Categories: Software development
The goal of Passkeys is to replace passwords.
The idea is that instead of remembering a password and entering it to access your account, you own a device that generates a password for you.
Remembering is replaced with Owning.
In this post, I’ll give an example of such a device and how you can create and store a Passkey securely.
A device that can be used as a Passkey is the YubiKey . Setting it up is actually quite simple. In this example, we have an online account and are going to set up the Passkey. In the security settings of the online account, I click Add Passkey , and the browser prompts for an input:
The YubiKey is plugged in, I touch the YubiKey, and the device is registered. That’s it. From now on, when I log into my account, as a login option, I can choose Passkey. The browser prompts for the input, I touch the YubiKey, and get logged in.
However, at this point, you might ask: What happens when I lose the YubiKey? Can I make a backup of the key?
The short answer is: You cannot create a backup of a YubiKey.
So, the YubiKey might not be the best solution to use as a Passkey. Luckily, there are other providers and devices to manage a Passkey.
Here, I will show you how you can create and store a Passkey with KeePassXC .
After enabling Passkey support in the KeePassXC browser extension, you may need to restart your browser for the changes to take effect. When you register a new Passkey for your account, the KeePassXC extension will open the KeePassXC database locally and show this dialog:
You can click Register , and then you’ll find a Passkey entry in your KeePassXC database.
Logging into your account with a Passkey is simple. Select the Passkey option, and the KeePassXC extension will find a matching entry and prompt to authenticate.
Click Authenticate , and you should be logged in.
The KeePassXC database can be backed up, and while Passkey entries can technically be exported, it’s crucial to understand the security implications before doing so.
When registering a Passkey for your account, I recommend using both KeePassXC and YubiKey. The main reason is the mobile browser of your smartphone. Setting up the KeePassXC Passkey solution on your smartphone is currently not possible (as far as I know).
The YubiKey can be plugged into the smartphone’s USB-C port and used to authenticate with any mobile browser. Here is a simple webpage that shows Passkey device and browser compatibility: https://www.passkeys.io/compatible-devices
Categories: Security
I would describe myself as an AI critic. AI as a sales hype has not met any of my expectations. The current state of AI is very disappointing. If you feel the same way and cannot really point out why, this post might be of help.
So what are my expectations from AI? I discovered 4 essential points:
I hoped that this technology can help solve the problem of spam. Spam is still too prevalent today, one of the biggest problems on the internet. It costs billions in resources to fight spam and causes real harm to real people. My expectation was that AI can help detect spam absolutely accurately.
The internet is a great and fun place. But it gets very small if you are handicapped. For blind people, there are no images, only alt texts. I hoped that AI could be a great tool to add alt text to every image there is. Or why not generate videos of people doing sign language to caption another video in real time? I am pretty sure this technology has the power to include even more people in this endeavor. However, in its current state, it makes the internet a more hostile place.
AI has the power to translate every text into almost any language. However, most often this is a subscription feature. Instead of translating all the texts there are, this powerful tool has become a paid feature to access the world.
When I am talking with a human, I want to be sure my chat partner is the real person. So why the heck does AI chat pretend to be human? I don’t see the point of chatting with a fake human. AI should act like what it is, a statistical model based on a lot of texts.
So, to make this post even more depressing, I’ll show you new problems caused by AI:
Integrating Large Language Models into Code Editors and calling them AI agents has already become a big business. The promise to software companies is that they get rid of the developers' wages and still do their thing.
In reality, code written, no matter by whom, requires maintenance. Code is infrastructure and contracts between real people. As the environments and requirements change, code needs to be updated, refactored, and fixed. It is constantly changing.
When AI generates more code than can be maintained, systems we trust will fail us.
The AI summaries on top of the Google search results cost Google a fortune. Visitors no longer click on ads. Google is cannibalizing their biggest business! Or are they?
What is more valuable than data about people? The data about interactions between people! And people interacting with an AI that pretends to be human give you exactly this. Google is harvesting interaction data to fuel their ad-tech empire.
Getting your data into LLMs is easy. Just publish something on the internet, and the AI crawler next door will feed on it. But have you thought of how data can get out of LLMs? It simply cannot. These LLMs are feeding on it until they explode.
And this fact makes every problem related to data privacy exponentially more difficult to solve.
I think this is the most important problem. AI chats pretend to be human. Humans will trust AI chats as if they are human. And this goes well beyond addictive behavior when scrolling through social media feeds. There are people who trust AI more than they can bear. They trust their most intimate problems, thinking they cannot be haunted by computers.
But time and time again, we saw that this kind of trust has failed us. Trusting AI chats will not be different.
Categories: Technology
My first and hopefully one of many short stories about humans, purpose and what if.
The face in the mirror looked tired. She hadn’t taken a break for too long and needed to get ready for the next surgery. Her colleagues were waiting for her in the operating room, one of the most modern rooms in the clinic. They specialized in difficult surgeries for high-society patients and individuals who had to maintain a low profile. That was the exciting part.
The next patient was delivered to the operating room. Andrew handed her the trauma report and gave a brief overview. The patient had suffered a skull fracture from a car accident, but internal bleeding had been stopped, and the patient was stable. However, the frontal lobe was fractured into multiple pieces and needed to be carefully separated from the surrounding brain tissue. It wasn’t a particularly complex procedure for her.
But who was the patient? The name had been censored and replaced with an anonymous number, suggesting that the individual might be a politician or a multi-billionaire. As she glanced into the observation room, she noticed several stern-looking men donning surgical attire. Clearly, they were part of the patient’s security. But she did not care and began to prepare for the operation.
The patient was put on the table and she immediately recognized the face. It was Felon Boar, the notorious billionaire known for his recklessness and destructiveness. It was his car that had taken the life of Brian. Her one and only love. A wave of pain, sorrow, and anger washed over her. Memories too dark to remember. She struggled to contain her emotions, reminding herself to focus. “Carole, are you ready?” the anesthesiologist called out. “Yes, sorry,” she replied, trying to compose herself.
As the surgery began, her thoughts betrayed her. “Tissue!” she requested, while thinking “Doubt” to herself. “Pincer!” she asked, feeling a surge of Sadness. “Clamps!” she prompted, aware of the growing Frustration. The sound of the Drill seemed to whisper Anger in her ear. As she reached for the Gripper, she felt an overwhelming sense of Hate. When she finally picked up the Scalpel, Fury had taken hold of her. Her mind went blank and she lost all sense.
She heard the voices of her colleagues – “Carole!”, “What the …?”, “What are you doing?” – but their words were drowned out. She stared, frozen, as the scalpel dug into the frontal lobe. The security personnel sprang into action, shouting, “Move aside from the patient!”.
Carole released her grip on the scalpel and walked mindlessly toward the exit.
Note from author: An LLM was used to check this story for typos and fix sentence structure.
Categories: Short Stories
Source of the image: https://www.spellingmistakescostlives.com/single-post/us-empire
In my last post, I explained how to map a keyboard key in Linux. In this post, I want to provide ideas on “How to survive American techno-imperialism”.
For readers who have not heard about techno-imperialism before, let me give you a simple definition:
American techno-imperialism refers to the U.S. using its technological power to dominate global markets, influence politics, and shape cultures, often at the expense of other nations' autonomy.
The antagonist to techno-imperialism is digital sovereignty .
Digital sovereignty emphasizes the control and autonomy of nations over their own digital infrastructure, data, and technology policies.
Resisting external influence has become a critical concern for democracies. Democratic processes require objectivity and healthy media coverage of events. The influence of big tech in the realm of social media and beyond puts democratic processes at risk.
There is no autonomy without severance. Some connections and partnerships have to be cut in order to form new ones. As the US has become an unstable partner and has decided to prove its power, having a strong alliance within Europe is essential.
The ideas of autonomy, independence, and sovereignty are thriving all over the world, for good and for bad. The idea of sovereignty shares traits with nationalism. If we want autonomy, we have to make sure that we don’t get fascism.
So we have to make a clear distinction between sovereignty and nationalism. Sovereignty is not about closing borders and prospering at the expense of foreigners. It is about cooperation and participation.
The idea that we can hardly influence anything and are at the mercy of other forces is unjustified. A simple question is proof: How many people live in Europe, and how many live in the United States?
Europe is the cradle of democracy and its values. Let’s make sure that we are up to this responsibility. The EU has merit, and therefore we should not be overcome by fear.
Techno-imperialism implies that data and computing are power. It is a power that shapes the world. Data moves without borders, and the companies in charge are US cloud and software providers.
We share our most valuable information with these providers. And with every transaction that goes abroad, we hurt our own software industry. It is long past time to consider European alternatives.
Many (especially lobbyists) say that there are no good alternatives to the established players. But this is simply not true: European Alternatives
There are plenty of European alternatives. And if you expect them to deliver the same features and experience, you have failed to understand the obvious. Big tech does not deserve any alternatives. The very existence of big tech is a problem. We can do better.
So, who are the bad guys? From an economic viewpoint, it is simple. Monopolists are bad for an economy and lead to an overall welfare loss.
Microsoft, Google, Apple, Facebook, Amazon and AirBnB (to name a few) are tech monopolists. They hurt the economy and the world. And don’t get me wrong here, there are also European monopolies such as Spotify or SAP that we should get rid of.
The success of these companies often comes from externalizing costs. Nothing of value is lost if these companies cease to exist.
Categories: Politics
I am using a “Keychron K3 Pro” keyboard. To the right of the space key there is a “Super” key that runs the launcher (similar to the Windows key). As a result, the “Alt Right” key is missing, which makes typing umlauts more difficult. I am using the “English (intl., with AltGr dead keys)” keyboard layout, where pressing Alt Right + Shift Right + " gives me the umlaut dead key (for ä, ö and ü).
To identify the key, we need its scancode. Run the following command to output the scancodes of pressed keys:
sudo showkey --scancodes
The first code, in my case 0xe0, is the Super key.
I use hwdb (hardware database) to remap the key. Create a new config file:
sudo vi /etc/udev/hwdb.d/70-keyboard.hwdb
And enter this input:
evdev:input:*
 KEYBOARD_KEY_$SCANCODE=rightalt
In my case it was:
evdev:input:*
 KEYBOARD_KEY_0xe0=rightalt
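If you remap more keys later, the two-line record can be generated instead of typed by hand. A minimal sketch, using the scancode and key name from this post (note that hwdb expects the key-value line to be indented with a leading space):

```shell
# Generate a hwdb record for a given scancode (values taken from this post).
# hwdb property lines must be indented; hence the space before KEYBOARD_KEY_.
SCANCODE="0xe0"   # from `sudo showkey --scancodes`
ENTRY=$(printf 'evdev:input:*\n KEYBOARD_KEY_%s=rightalt\n' "$SCANCODE")
echo "$ENTRY"
# Then write it to the config file, e.g.:
#   echo "$ENTRY" | sudo tee /etc/udev/hwdb.d/70-keyboard.hwdb
```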
Reload the hardware database and the key is mapped.
sudo systemd-hwdb update
sudo udevadm trigger
Edits:
I found a simpler way to write umlauts. The Alt Right key acts as the compose key, and the compose key can be re-assigned:
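The exact command is not shown in this post, so here is a hedged sketch of one way to do it. It assumes a GNOME desktop, and Caps Lock as the new compose key is my own choice, not necessarily the author’s:

```shell
# Assumption: GNOME desktop. Makes Caps Lock the compose key, so that e.g.
# Compose + " + a produces ä. Note: this replaces any xkb-options already
# set; inspect them first with:
#   gsettings get org.gnome.desktop.input-sources xkb-options
gsettings set org.gnome.desktop.input-sources xkb-options "['compose:caps']"
```

On other desktops, `setxkbmap -option compose:caps` achieves the same for the current X session.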
With this assignment I still don’t have an Alt Right key, but at least I can write ä, ö and ü.
Categories: Desktop