Support Actions concurrency syntax #32751

Open · Zettat123 wants to merge 1 commit into base: main from the support-actions-concurrency branch

Conversation

Zettat123
Contributor

@Zettat123 Zettat123 commented Dec 7, 2024

Fix #24769
Fix #32662
Fix #33260

Depends on https://gitea.com/gitea/act/pulls/124

⚠️ BREAKING ⚠️

This PR removes the auto-cancellation feature added by #25716. Users need to manually add `concurrency` to their workflows to control concurrent workflows or jobs.

@GiteaBot GiteaBot added the lgtm/need 2 This PR needs two approvals by maintainers to be considered for merging. label Dec 7, 2024
@github-actions github-actions bot added modifies/api This PR adds API routes or modifies them modifies/go Pull requests that update Go code modifies/migrations modifies/dependencies labels Dec 7, 2024
@Zettat123 Zettat123 force-pushed the support-actions-concurrency branch from 3551677 to fcf4517 Compare December 10, 2024 08:56
@lunny lunny added this to the 1.24.0 milestone Dec 16, 2024
@Zettat123 Zettat123 force-pushed the support-actions-concurrency branch from 52833e7 to 130f2a2 Compare December 17, 2024 01:49
@Zettat123 Zettat123 force-pushed the support-actions-concurrency branch 3 times, most recently from 461c7c1 to d5168a2 Compare January 6, 2025 06:16
@Zettat123 Zettat123 force-pushed the support-actions-concurrency branch from e038ed2 to f77f266 Compare January 10, 2025 06:00
@Zettat123 Zettat123 changed the title WIP: Support concurrency for Actions WIP: Support Actions concurrency syntax Jan 15, 2025
@Zettat123 Zettat123 force-pushed the support-actions-concurrency branch from ad71599 to 8f5948b Compare January 15, 2025 03:03
@Zettat123

This comment was marked as resolved.

wxiaoguang added a commit that referenced this pull request Jan 15, 2025

Move the main logic of `generateTaskContext` and `findTaskNeeds` to the
`services` layer.

This is a part of #32751, since we need the git context and `needs` to
parse the concurrency expressions.

---------

Co-authored-by: Lunny Xiao <[email protected]>
Co-authored-by: wxiaoguang <[email protected]>
@Zettat123 Zettat123 force-pushed the support-actions-concurrency branch from a1d070c to c433843 Compare February 8, 2025 02:09
@Zettat123 Zettat123 marked this pull request as ready for review February 8, 2025 02:29
@Zettat123 Zettat123 changed the title WIP: Support Actions concurrency syntax Support Actions concurrency syntax Feb 8, 2025
@Zettat123
Contributor Author

This PR is ready for review. Please also review https://gitea.com/gitea/act/pulls/124 .

@lunny lunny added the pr/breaking Merging this PR means builds will break. Needs a description what exactly breaks, and how to fix it! label Feb 8, 2025
@Zettat123 Zettat123 force-pushed the support-actions-concurrency branch from c433843 to 7b27a42 Compare February 11, 2025 03:27
@Zettat123 Zettat123 force-pushed the support-actions-concurrency branch from 3263d4e to 85baf37 Compare February 11, 2025 07:03
chhe pushed a commit to chhe/act that referenced this pull request Feb 12, 2025
To support `concurrency` syntax for Gitea Actions

Gitea PR: go-gitea/gitea#32751

Reviewed-on: https://gitea.com/gitea/act/pulls/124
Reviewed-by: Lunny Xiao <[email protected]>
Co-authored-by: Zettat123 <[email protected]>
Co-committed-by: Zettat123 <[email protected]>
@lunny
Member

lunny commented Feb 17, 2025

Please resolve the conflicts; since https://gitea.com/gitea/act/pulls/124 has been merged, this PR can be updated.

@Zettat123 Zettat123 force-pushed the support-actions-concurrency branch from 85baf37 to 9e21174 Compare February 18, 2025 02:50
@Zettat123
Contributor Author

Please resolve the conflicts; since https://gitea.com/gitea/act/pulls/124 has been merged, this PR can be updated.

The conflicts were caused by model migrations rather than by gitea/act. I've now updated this PR and the conflicts are resolved.

}

gitCtx := GenerateGiteaContext(run, nil)
jobResults := map[string]*jobparser.JobResult{"": {}}
Member

Why is the empty element necessary?

Contributor Author

We need to create an Interpreter to evaluate concurrency-related expressions. Interpreter is designed to evaluate job-level expressions and always requires a results map containing the job's result when the interpreter is created.

jobparser/interpeter.go#L12-L20

func NewInterpeter(
	jobID string,
	job *model.Job,
	matrix map[string]interface{},
	gitCtx *model.GithubContext,
	results map[string]*JobResult,
	vars map[string]string,
	inputs map[string]interface{},
) exprparser.Interpreter {

However, we need to evaluate workflow-level expressions here. Workflows have no results, so we can only use an empty item.

@lunny lunny modified the milestones: 1.24.0, 1.25.0 Apr 10, 2025
@Zettat123 Zettat123 force-pushed the support-actions-concurrency branch from 9e21174 to 865d0c3 Compare April 16, 2025 17:32
@Zettat123 Zettat123 force-pushed the support-actions-concurrency branch 2 times, most recently from 2a03a8c to 366d438 Compare April 18, 2025 20:12
lyz-code added a commit to lyz-code/blue-book that referenced this pull request Apr 29, 2025
…in technology

**Art**
- [Decolonizing technology](https://ail.angewandte.at/explore/decolonizing-technology/)

**Articles**
- [Shanzhai: An Opportunity to Decolonize Technology? by Sherry Liao](https://jipel.law.nyu.edu/shanzhai-an-opportunity-to-decolonize-technology/)
- [Técnicas autoritarias y técnicas democráticas by Lewis Mumford](https://alasbarricadas.org/forums/viewtopic.php?t=9654) ([pdf](https://istas.net/descargas/escorial04/material/dc05.pdf))

**Books**

- [Race after technology by Ruha Benjamin](https://www.ruhabenjamin.com/race-after-technology)
- [The Palestine Laboratory: How Israel Exports the Technology of Occupation Around the World by Antony Loewenstein](https://www.goodreads.com/book/show/62790909-the-palestine-laboratory)
- [Frantz Fanon books](https://en.wikipedia.org/wiki/Frantz_Fanon)
- [Hacking del sé by Ippolita](https://www.agenziax.it/hacking-del-se)
- [Tecnologie conviviali by Carlo Milani](https://www.eleuthera.it/materiale.php?op=2699)
- [Descolonizar y despatriarcalizar las tecnologías by Paola Ricaurte Quijano](https://vision.centroculturadigital.mx/media/done/descolonizarYD.pdf)

**Research**

- [Decolonization, Technology, and Online Research by Janet Salmons](https://researchmethodscommunity.sagepub.com/blog/decolonization-indigenous-methods-technology)

**Talks**

- [re:publica 2024: Christoph Hassler - Decolonize Tech](https://re-publica.com/de/session/decolonize-tech-how-design-all) ([video](https://www.youtube.com/watch?v=R10Dwgxt_mg))

feat(birding): Introduce android apps for birding

- whoBIRD
- Merlin Bird ID: I've seen it working and it's amazing. However, I'm trying whoBIRD first as it's on F-Droid

feat(book_management#Convert pdf to epub): Convert an image-based pdf to epub

NOTE: before proceeding, check the following AI-based tools, which will probably give better output:

- [MinerU](https://github.com/opendatalab/MinerU)
- [marker](https://github.com/VikParuchuri/marker)
- [docling](https://github.com/docling-project/docling)
- [olmocr](https://olmocr.allenai.org/)

If the pdf is image-based, you need to use OCR to extract the text.

First, convert the PDF to images:

```bash
pdftoppm -png input.pdf page
```

Apply OCR to your PDF

Use `tesseract` to extract text from each image:

```bash
for img in page-*.png; do
    tesseract "$img" "${img%.png}" -l eng
done
```

This produces `page-1.txt`, `page-2.txt`, etc.
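
A minimal sketch of joining the OCRed pages and building an epub with `pandoc` (this assumes `pandoc` is installed; the title is a placeholder):

```bash
# Concatenate the OCRed pages in order and build an epub
cat page-*.txt > book.txt
pandoc book.txt -o book.epub --metadata title="<book title>"
```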

feat(calistenia): Introduce calistenia

**Basic technique**

**Pull-ups**

- [Video tutorial by Jéssica Martín](https://www.youtube.com/watch?v=3nSaIugxv7Y)

**References**

**Videos**

- [Jéssica Martín Moreno](https://www.youtube.com/@jessmartinm)

feat(gitpython#Checking out an existing branch): Checking out an existing branch

```python
heads = repo.heads
develop = heads.develop
repo.head.reference = develop
```

feat(python_snippets#Download book previews from google books): Download book previews from google books

You will only get some of the pages, but it can still help with the final pdf.

This first script downloads the page images:

```python
import asyncio
import os
import json
import re
from urllib.parse import urlparse, parse_qs
from playwright.async_api import async_playwright
import aiohttp
import aiofiles

async def download_image(session, src, output_path):
    """Download image from URL and save to specified path"""
    try:
        headers = {
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; rv:128.0) Gecko/20100101 Firefox/128.0",
            "Accept": "*/*",
            "Accept-Language": "en-US,en;q=0.5",
            "Referer": "https://books.google.es/",
            "DNT": "1",
            "Sec-GPC": "1",
            "Connection": "keep-alive",
        }

        async with session.get(src, headers=headers) as response:
            response.raise_for_status()
            async with aiofiles.open(output_path, "wb") as f:
                await f.write(await response.read())

        print(f"Downloaded: {output_path}")
        return True
    except Exception as e:
        print(f"Error downloading {src}: {e}")
        return False

def extract_page_number(pid):
    """Extract numeric page number from page ID"""
    match = re.search(r"PA(\d+)", pid)
    if match:
        return int(match.group(1))
    try:
        return int(pid.replace("PA", "").replace("PP", ""))
    except:
        return 9999

async def main():
    # Create output directory
    output_dir = "book_images"
    os.makedirs(output_dir, exist_ok=True)

    # Keep track of all pages found
    seen_pids = set()
    page_counter = 0
    download_tasks = []

    # Create HTTP session for downloads
    async with aiohttp.ClientSession() as session:
        async with async_playwright() as p:
            browser = await p.firefox.launch(headless=False)
            context = await browser.new_context(
                user_agent="Mozilla/5.0 (Windows NT 10.0; rv:128.0) Gecko/20100101 Firefox/128.0"
            )

            # Create a page and set up response handling
            page = await context.new_page()

            # Store seen URLs to avoid duplicates
            seen_urls = set()

            # Set up response handling for JSON data
            async def handle_response(response):
                nonlocal page_counter
                url = response.url

                # Only process URLs with jscmd=click3
                if "jscmd=click3" in url and url not in seen_urls:
                    try:
                        # Try to parse as JSON
                        json_data = await response.json()
                        seen_urls.add(url)

                        # Process and download page data immediately
                        if "page" in json_data and isinstance(json_data["page"], list):
                            for page_data in json_data["page"]:
                                if "src" in page_data and "pid" in page_data:
                                    pid = page_data["pid"]
                                    if pid not in seen_pids:
                                        seen_pids.add(pid)
                                        src = page_data["src"]

                                        # Create filename with sequential numbering
                                        formatted_index = (
                                            f"{int(pid.replace('PA', '')):03d}"
                                        )
                                        output_file = os.path.join(
                                            output_dir, f"page-{formatted_index}.png"
                                        )
                                        page_counter += 1

                                        print(
                                            f"Found new page: {pid}, scheduling download"
                                        )

                                        # Start download immediately
                                        task = asyncio.create_task(
                                            download_image(session, src, output_file)
                                        )
                                        download_tasks.append(task)

                        return len(seen_pids)
                    except Exception as e:
                        print(f"Error processing response from {url}: {e}")

            # Register response handler
            page.on("response", handle_response)

            # Navigate to the starting URL
            book_url = (
                "https://books.google.es/books?id=412loEMJA9sC&lpg=PP1&hl=es&pg=PA5"
            )
            await page.goto(book_url)

            # Wait for initial page load
            await page.wait_for_load_state("networkidle")

            # Scroll loop variables
            max_scroll_attempts = 500  # Safety limit
            scroll_count = 0
            pages_before_scroll = 0
            consecutive_no_new_pages = 0

            # Continue scrolling until we find no new pages for several consecutive attempts
            while scroll_count < max_scroll_attempts and consecutive_no_new_pages < 5:
                # Get current page count before scrolling
                pages_before_scroll = len(seen_pids)

                # Use PageDown key to scroll
                await page.keyboard.press("PageDown")
                scroll_count += 1

                # Wait for network activity
                await asyncio.sleep(2)

                # Check if we found new pages after scrolling
                if len(seen_pids) > pages_before_scroll:
                    consecutive_no_new_pages = 0
                    print(
                        f"Scroll {scroll_count}: Found {len(seen_pids) - pages_before_scroll} new pages"
                    )
                else:
                    consecutive_no_new_pages += 1
                    print(
                       f"Scroll {scroll_count}: No new pages found ({consecutive_no_new_pages}/5)"
                    )

            print(f"Scrolling complete. Found {len(seen_pids)} pages total.")
            await browser.close()

        # Wait for any remaining downloads to complete
        if download_tasks:
            print(f"Waiting for {len(download_tasks)} downloads to complete...")
            await asyncio.gather(*download_tasks)

        print(f"Download complete! Downloaded {page_counter} images.")

if __name__ == "__main__":
    asyncio.run(main())
```

feat(python_snippets#Send keystrokes to an active window): Send keystrokes to an active window

```python
import subprocess

subprocess.run(['xdotool', 'type', 'Hello world!'])
subprocess.run(['xdotool', 'key', 'Return']) # press enter

subprocess.run(['xdotool', 'key', 'ctrl+c'])

window_id = subprocess.check_output(['xdotool', 'getactivewindow']).decode().strip()
subprocess.run(['xdotool', 'windowactivate', window_id])
```

feat(python_snippets#Make temporal file): Make temporal file

```python
import os
import subprocess
import tempfile

# Use the user's editor, falling back to vim
editor = os.environ.get("EDITOR", "vim")

with tempfile.NamedTemporaryFile(
    suffix=".tmp", mode="w+", encoding="utf-8"
) as temp:
    temp.write(
        "# Enter commit message body. Lines starting with '#' will be ignored.\n"
    )
    temp.write("# Leave file empty to skip the body.\n")
    temp.flush()

    subprocess.call([editor, temp.name])

    temp.seek(0)
    lines = temp.readlines()
```

feat(conflicto): Add notes on conflict from an anti-punitivist point of view

**Loose thoughts on viewing conflict from an anti-punitivist point of view**

- Stop seeing conflicts as a battle; they are an opportunity for transformation
- Conflicts should be resolved collectively whenever possible
- If you ban a guy for sexist behaviour you're only moving the problem elsewhere. He'll keep drifting through different collectives until he takes root in a weaker one and torpedoes it
- It's hard to see the boundary between the therapeutic and the transformative
- What is the collective responsibility for a person's transformation?
- We lack tools for:
  - conflict management in general
  - managing physical conflicts in particular
  - supporting both sides of a conflict
  - supporting an aggressor
- Is everything that makes me uncomfortable violence?
- Every situation is so particular that protocols don't work. It's much better to expose ourselves collectively and often to conflict situations and, from that practice, generate the tools that can serve us, so that at the moment of truth they come out intuitively.

**References**

**Films**

- [Ellas hablan](<https://en.wikipedia.org/wiki/Women_Talking_(film)>) ([trailer](https://www.youtube.com/watch?v=pD0mFhMqDCE))
- [How to Have Sex](https://en.wikipedia.org/wiki/How_to_Have_Sex) ([trailer](https://www.youtube.com/watch?v=52b7s-diPk8))
- [Promising young woman](https://es.wikipedia.org/wiki/Promising_Young_Woman) ([trailer](https://www.youtube.com/watch?v=7i5kiFDunk8))

**Books**

- [Micropolítica de los grupos](https://traficantes.net/sites/default/files/pdfs/Micropol%C3%ADticas%20de%20los%20grupos-TdS.pdf)
- [Conflicto no es lo mismo que abuso](https://hamacaonline.net/media/publicacio/Conflicto-no-es-lo-mismo_dig2024.pdf)
- [Ofendiditos](https://maryread.es/producto/ofendiditos)

**Series**

- [La Fièvre](https://dai.ly/x91qoeq)

**Podcasts**

- [El marido (Ciberlocutorio)](https://www.primaverasound.com/es/radio/shows/ciberlocutorio/ciberlocutorio-el-marido)
- [El cancelado (Ciberlocutorio)](https://www.primaverasound.com/es/radio/shows/ciberlocutorio/ciberlocutorio-el-cancelado)
- [Antipunitivismo con Laura Macaya (Sabor a Queer)](https://www.youtube.com/watch?v=p_frBHfk7cc)
- [Procesos restaurativos, feministas y sistémicos (Fil a l´agulla en el curso de Nociones Comunes "Me cuidan mis amigas")](https://soundcloud.com/traficantesdesue-os/s5-procesos-restaurativos-feministas-y-sistemicos-con-anna-gali-fil-a-lagulla?in=traficantesdesue-os/sets/curso-me-cuidan-mis-amigas)

**Articles**

- [Con penas y sin glorias: reflexiones desde un feminismo antipunitivo y comunitario:](https://ctxt.es/es/20220401/Firmas/39365/feminismo-autoorganizacion-barrio-antipunitivismo-comunidades-violencia-machista.htm)
- [Expulsar a los agresores no reduce necesariamente la violencia:](https://ctxt.es/es/20250301/Firmas/48798/Colectivo-Cantoneras-expulsar-agresores-violencia-de-genero-feminismos-justicia-restaurativa-antipunitivismo.htm)
- [Antipunitivismo remasterizado](https://www.pikaramagazine.com/2024/10/antipunitivismo-remasterizado/)
- [Reflexiones sobre antipunitivismo en tiempos de violencias](https://www.pikaramagazine.com/2021/12/reflexiones-sobre-antipunitivismo-en-tiempos-de-violencias/)
- [Indispuestas. Cuando nadie quiere poner la vida en ello](https://www.elsaltodiario.com/palabras-en-movimiento/indispuestas-cuando-nadie-quiere-poner-vida-ello)
- [La deriva neoliberal de los cuidados](https://lavillana.org/la-deriva-neoliberal-de-los-cuidados/)
- [Justicia transformativa: del dicho al hecho](https://zonaestrategia.net/justicia-transformativa-del-dicho-al-hecho/)
- [Las malas víctimas responden](https://www.elsaltodiario.com/la-antinorma/malas-victimas-responden)

**Other tools**

- [Guía para la prevención y actuación frente a las violencias patriarcales de la Cinètika.](https://lacinetika.wordpress.com/comissio-de-genere/)
- [Guía Fil a l´agulla per a la gestió de conflictes a les cooperatives (en catalan)](https://www.cooperativestreball.coop/sites/default/files/materials/guia_per_a_la_gestio_de_conflictes.pdf)

feat(docker#Limit the access of a docker on a server to the access on the docker of another server): Limit the access of a docker on a server to the access on the docker of another server

WARNING: I had issues with this approach and I ended up not using docker swarm networks.

If you want to restrict access to a docker container (running on server 1) so that only another specific docker container running on another server (server 2) can access it, you need more than just IP-based filtering between hosts. The solution is to:

1. Create a Docker network that spans both hosts using Docker Swarm or a custom overlay network.

2. **Use Docker's built-in DNS resolution** to allow specific container-to-container communication.

Here's a step-by-step approach:

**1. Set up Docker Swarm (if not already done)**

On server 1:

```bash
docker swarm init --advertise-addr <ip of server 1>
```

This will output a command to join the swarm. Run that command on server 2.
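
The join command it prints has roughly this shape (the token is a placeholder):

```bash
docker swarm join --token <token printed by swarm init> <ip of server 1>:2377
```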

**2. Create an overlay network**

```bash
docker network create --driver overlay --attachable <name of the network>
```

**3. Update the docker compose on server 1**

Imagine for example that we want to deploy [wg-easy](wg-easy.md).

```yaml
services:
  wg-easy:
    image: ghcr.io/wg-easy/wg-easy:latest
    container_name: wg-easy
    networks:
      - wg
      - <name of the network> # Add the overlay network
    volumes:
      - wireguard:/etc/wireguard
      - /lib/modules:/lib/modules:ro
    ports:
      - "51820:51820/udp"
      # - "127.0.0.1:51821:51821/tcp" # Don't expose the http interface, it will be accessed from within the docker network
    restart: unless-stopped
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    sysctls:
      - net.ipv4.ip_forward=1
      - net.ipv4.conf.all.src_valid_mark=1
      - net.ipv6.conf.all.disable_ipv6=1

networks:
  wg:
    # Your existing network config
  <name of the network>:
    external: true # Reference the overlay network created above
```

**4. On server 2, create a Docker Compose file for your client container**

```yaml
services:
  wg-client:
    image: your-client-image
    container_name: wg-client
    networks:
      - <name of the network>
    # Other configuration for your client container

networks:
  <name of the network>:
    external: true # Reference the same overlay network
```

**5. Access the WireGuard interface from the client container**

Now, from within the client container on server 2, you can access the WireGuard interface using the container name:

```
http://wg-easy:51821
```

This approach ensures that:

1. The WireGuard web interface is not exposed to the public (not even localhost on server 1)
2. Only containers on the shared overlay network can access it
3. The specific container on server 2 can access it using Docker's internal DNS
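
As a quick check, you can curl the interface from inside the client container on server 2 (assuming the image ships `curl`):

```bash
docker exec -it wg-client curl -sI http://wg-easy:51821
```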

**Testing the network is well set**

You may be confused if the new network is not shown on server 2 when running `docker network ls`, but that's normal: server 2 is a swarm worker node, and worker nodes cannot list or manage networks directly. Even though you can't see the overlay network, containers on server 2 can still connect to it when properly configured.

To check that the swarm is set up correctly you can use `docker node ls` on server 1 (you'll see an error on server 2 as it's a worker node).
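
For example, from server 1:

```bash
# Both servers should show up and be Ready
docker node ls
# Inspect the overlay network and see which containers are attached to it
docker network inspect <name of the network>
```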

**Weird network issues with swarm overlays**

I've seen cases where after a server reboot you need to remove the overlay network from the docker compose and then add it again.

After many hours of debugging I came up with the workaround of removing the overlay network from the docker-compose and attaching it from the systemd service:

```ini
[Unit]
Description=wg-easy
Requires=docker.service
After=docker.service

[Service]
Restart=always
User=root
Group=docker
WorkingDirectory=/data/apps/wg-easy
TimeoutStartSec=100
RestartSec=2s
ExecStart=/usr/bin/docker compose -f docker-compose.yaml up

ExecStartPost=/bin/bash -c '\
    sleep 30; \
    /usr/bin/docker network connect wg-easy wg-easy; \
'
ExecStop=/usr/bin/docker compose -f docker-compose.yaml down

[Install]
WantedBy=multi-user.target
```

fix(dunst): Tweak installation steps

```bash
sudo apt install libdbus-1-dev libx11-dev libxinerama-dev libxrandr-dev libxss-dev libglib2.0-dev \
    libpango1.0-dev libgtk-3-dev libxdg-basedir-dev libgdk-pixbuf-2.0-dev

make WAYLAND=0
sudo make WAYLAND=0 install
```

If it didn't create the systemd service, you can [create it yourself](linux_snippets.md#create-a-systemd-service-for-a-non-root-user) with this service file:

```ini
[Unit]
Description=Dunst notification daemon
Documentation=man:dunst(1)
PartOf=graphical-session.target

[Service]
Type=dbus
BusName=org.freedesktop.Notifications
ExecStart=/usr/local/bin/dunst
Slice=session.slice
Environment=PATH=%h/.local/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games

[Install]
WantedBy=default.target
```

You may need to add more paths to PATH.
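
A minimal sketch of enabling it as a user service, assuming you saved the unit to `~/.config/systemd/user/dunst.service`:

```bash
systemctl --user daemon-reload
systemctl --user enable --now dunst.service
```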

To see the logs of the service, use `journalctl --user -u dunst.service -f --since "15 minutes ago"`.

feat(dunst#Configuration): Configuration

Read and tweak the `~/.dunst/dunstrc` file to your liking. You can find the [default one here](https://github.com/dunst-project/dunst/blob/master/dunstrc).

You'll also need to configure the actions in your window manager. In my case [i3wm](i3wm.md):

```
bindsym $mod+b exec dunstctl close-all
bindsym $mod+v exec dunstctl context
```

feat(dunst#Configure each application notification): Configure each application notification

You can look at [rosoau config](https://gist.github.com/rosoau/fdfa7b3e37e3c5c67b7dad1b7257236e) for inspiration

**References**
* [Some dunst configs](https://github.com/dunst-project/dunst/issues/826)
* Smarttech101 tutorials ([1](https://smarttech101.com/how-to-configure-dunst-notifications-in-linux-with-images), [2](https://smarttech101.com/how-to-send-notifications-in-linux-using-dunstify-notify-send#Taking_Actions_on_notifications_using_dunstifynotify-send))
* [Archwiki page on dunst](https://wiki.archlinux.org/title/Dunst)

feat(feminism#References): New References

feat(gitea#Configure triggers not to push to a branch): Configure triggers not to push to a branch

There is now a branches-ignore option:

```yaml
on:
  push:
    branches-ignore:
      - main
```

feat(gitea#Not there yet): Not there yet

- [Being able to run two jobs on the same branch](https://github.com/go-gitea/gitea/issues/32662): It will be implemented with [concurrency](https://github.com/go-gitea/gitea/issues/24769) with [this pr](https://github.com/go-gitea/gitea/pull/32751). This behavior [didn't happen before 2023-07-25](https://github.com/go-gitea/gitea/pull/25716)

feat(i3wm): Add i3wm python actions

You can also use it [with async](https://i3ipc-python.readthedocs.io/en/latest/)

**Create the connection object**

```python
from i3ipc import Connection, Event
i3 = Connection()
```

**Focus on a window by its class**

```python
tree = i3.get_tree()
ff = tree.find_classed('Firefox')[0]
ff.command('focus')
```

feat(elasticsearch#Delete documents from all indices in an elasticsearch cluster): Delete documents from all indices in an elasticsearch cluster

```bash

ES_HOST="${1:-http://localhost:9200}"
DEFAULT_SETTING="5"              # Target default value (5%)

INDICES=$(curl -s -XGET "$ES_HOST/_cat/indices?h=index")

for INDEX in $INDICES; do
  echo "Processing index: $INDEX"

  # Close the index to modify static settings
  curl -s -XPOST "$ES_HOST/$INDEX/_close" > /dev/null

  # Lower expunge_deletes_allowed to 0 so the forcemerge expunges all deleted docs
  curl -s -XPUT "$ES_HOST/$INDEX/_settings" -H 'Content-Type: application/json' -d'
  {
    "index.merge.policy.expunge_deletes_allowed": "0"
  }' > /dev/null

  # Reopen the index
  curl -s -XPOST "$ES_HOST/$INDEX/_open" > /dev/null

  # Trigger forcemerge (async)
  # curl -s -XPOST "$ES_HOST/$INDEX/_forcemerge?only_expunge_deletes=true&wait_for_completion=false" > /dev/null
  echo "Forcemerge triggered for $INDEX"
  curl -s -XPOST "$ES_HOST/$INDEX/_forcemerge?only_expunge_deletes=true" > /dev/null &
  echo "Waiting until all forcemerge tasks are done"
  while curl -s $ES_HOST/_cat/tasks\?v  | grep forcemerge > /dev/null ; do
    curl -s $ES_HOST/_cat/indices | grep $INDEX
    sleep 10
  done

  # Close the index again
  curl -s -XPOST "$ES_HOST/$INDEX/_close" > /dev/null

  # Update to the new default (5%)
  curl -s -XPUT "$ES_HOST/$INDEX/_settings" -H 'Content-Type: application/json' -d'
  {
    "index.merge.policy.expunge_deletes_allowed": "'"$DEFAULT_SETTING"'"
  }' > /dev/null

  # Reopen the index
  curl -s -XPOST "$ES_HOST/$INDEX/_open" > /dev/null
done

echo "Done! All indices updated."
```

feat(wireguard#Failed to resolve interface "tun": No such device): Troubleshoot Failed to resolve interface "tun": No such device

```bash
sudo apt purge resolvconf
```

feat(zfs#List all datasets that have zfs native encryption ): List all datasets that have zfs native encryption

```bash
ROOT_FS="main"
is_encryption_enabled() {
    zfs get -H -o value encryption $1 | grep -q 'aes'
}

list_datasets_with_encryption() {

    # Initialize an array to hold dataset names
    datasets=()

    # List and iterate over all datasets starting from the root filesystem
    for dataset in $(zfs list -H -o name | grep -E '^'$ROOT_FS'/'); do
        if is_encryption_enabled "$dataset"; then
            datasets+=("$dataset")
        fi
    done

    # Output the results
    echo "ZFS datasets with encryption enabled:"
    printf '%s\n' "${datasets[@]}"
}

list_datasets_with_encryption
```

feat(zfs#cannot destroy dataset: dataset is busy): Troubleshoot cannot destroy dataset: dataset is busy

If you're experiencing this error and can reproduce the next traces:

```bash
cannot destroy 'zroot/2013-10-15T065955229209': dataset is busy

cannot unmount 'zroot/2013-10-15T065955229209': not currently mounted

zroot/2013-10-15T065955229209                2.86G  25.0G  11.0G  /var/lib/heaver/instances/2013-10-15T065955229209

umount: /var/lib/heaver/instances/2013-10-15T065955229209: not mounted
```

You can `grep zroot/2013-10-15T065955229209 /proc/*/mounts` to see which process is still using the dataset.

Another possible culprit is snapshots; you can then run:

```bash
zfs holds $snapshotname
```

To see if it has any holds and, if so, use `zfs release` to remove the hold.
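
For example (the snapshot and the hold tag are placeholders):

```bash
zfs holds zroot/2013-10-15T065955229209@<snapshot>
zfs release <hold tag shown by zfs holds> zroot/2013-10-15T065955229209@<snapshot>
```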

feat(zfs#Upgrading ZFS Storage Pools): Upgrading ZFS Storage Pools

If you have ZFS storage pools from a previous zfs release you can upgrade your pools with the `zpool upgrade` command to take advantage of the pool features in the current release. In addition, the zpool status command has been modified to notify you when your pools are running older versions. For example:

```bash
zpool status

  pool: tank
 state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
        pool will no longer be accessible on older software versions.
 scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c1t0d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
errors: No known data errors
```

You can use the following syntax to identify additional information about a particular version and supported releases:

```bash
zpool upgrade -v

This system is currently running ZFS pool version 22.

The following versions are supported:

VER  DESCRIPTION
---  --------------------------------------------------------
 1   Initial ZFS version
 2   Ditto blocks (replicated metadata)
 3   Hot spares and double parity RAID-Z
 4   zpool history
 5   Compression using the gzip algorithm
 6   bootfs pool property
 7   Separate intent log devices
 8   Delegated administration
 9   refquota and refreservation properties
 10  Cache devices
 11  Improved scrub performance
 12  Snapshot properties
 13  snapused property
 14  passthrough-x aclinherit
 15  user/group space accounting
 16  stmf property support
 17  Triple-parity RAID-Z
 18  Snapshot user holds
 19  Log device removal
 20  Compression using zle (zero-length encoding)
 21  Reserved
 22  Received properties

For more information on a particular version, including supported releases,
see the ZFS Administration Guide.
```

Then, you can run the zpool upgrade command to upgrade all of your pools. For example:

```bash
zpool upgrade -a
```

feat(linux_snippets#Use lftp): Use lftp

Connect with:

```bash
lftp -p <port> user@host
```

Navigate with `ls` and `cd`. Download multiple files with `mget`.
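
For example, a non-interactive session that downloads every pdf from a remote directory (port and paths are placeholders):

```bash
lftp -p <port> -e "cd /remote/dir; mget *.pdf; bye" user@host
```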

feat(linux_snippets#Difference between apt-get upgrate and apt-get full-upgrade): Difference between apt-get upgrade and apt-get full-upgrade

The difference between `upgrade` and `full-upgrade` is that the latter will remove installed packages if that is needed to upgrade the whole system. Be extra careful when using this command.

I will more frequently use `autoremove` to remove old packages and then just use `upgrade`.
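
In practice that's just:

```bash
sudo apt-get autoremove
sudo apt-get upgrade
```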

feat(linux_snippets#Upgrade debian): Upgrade debian

```bash
# Bring the current release fully up to date
sudo apt-get update
sudo apt-get upgrade
sudo apt-get full-upgrade

# Point the sources to the new release (replace the old codename with the new one)
sudo vi /etc/apt/sources.list /etc/apt/sources.list.d/*

# Refresh the package index for the new release
sudo apt-get clean
sudo apt-get update

# Upgrade to the new release
sudo apt-get upgrade
sudo apt-get full-upgrade

# Remove packages that are no longer needed and reboot
sudo apt-get autoremove

sudo shutdown -r now
```

feat(linux_snippets#Get a list of extensions by file type): Get a list of extensions by file type

There are community made lists such as [dyne's file extension list](https://github.com/dyne/file-extension-list/)

fix(linux_snippets#Upgrade ubuntu): Upgrade ubuntu

Upgrade your system:

```bash
sudo apt update
sudo apt upgrade
reboot
```

You must install ubuntu-release-upgrader-core package:

```bash
sudo apt install ubuntu-release-upgrader-core
```

Ensure the `Prompt` line in `/etc/update-manager/release-upgrades` is set to `lts` using `grep` or `cat`:

```bash
grep 'lts' /etc/update-manager/release-upgrades
cat /etc/update-manager/release-upgrades
```

Opening up TCP port 1022

For those using ssh-based sessions, open an additional SSH port in your firewall, starting at port 1022. This is the default fallback port set by the upgrade procedure in case the default SSH port dies during the upgrade.

```bash
sudo /sbin/iptables -I INPUT -p tcp --dport 1022 -j ACCEPT

```

Finally, start the upgrade from Ubuntu 22.04 to the 24.04 LTS version. Type:

```bash
sudo do-release-upgrade -d
```

feat(moonlight): Add note on apollo

Also check out [apollo](https://github.com/ClassicOldSong/Apollo), a sunshine fork.

feat(ocr): Add ocr references

- [What is the best LLM based OCR open source available now? ](https://www.reddit.com/r/LocalLLaMA/comments/1javx8d/what_is_the_best_llm_based_ocr_open_source/)

feat(ombi#Protect ombi behind authentik): Protect ombi behind authentik

This option allows the user to select an HTTP header value that contains the desired login username.

> Note that if the header value is present and matches an existing user, default authentication is bypassed - use with caution.

This is most commonly utilized when Ombi is behind a reverse proxy which handles authentication. For example, if using Authentik, the X-authentik-username HTTP header which contains the logged in user's username is set by Authentik's proxy outpost.

feat(palestine#Al Jazeera documenaries): Add Al Jazeera documentaries

In light of the current events in Palestine, a large number of filmmakers have made their films about Palestine freely available online.

They are in Arabic and have no subtitles so downloading them is of little use, but they can be watched directly on youtube with the auto-generated subtitles.

- A collection of documentaries published by Al Jazeera Documentary: [1](https://bit.ly/3yp2nBI), [2](https://bit.ly/2SSpMeC), [3](https://bit.ly/3f0KK3P)
- [The documentary "Guardián de la memoria"](https://youtu.be/eywuYeflWzg)
- [The documentary "Un asiento vacío"](https://youtu.be/an4hRFWOSQQ)
- [The documentary "El piloto de la resistencia"](https://youtu.be/wqSmdZy-Xcg)
- [The documentary "Jenin"](https://vimeo.com/499672067)
- [The documentary "El olivo"](https://vimeo.com/432062498)
- [The documentary "Escenas de la ocupación en Gaza 1973"](https://youtu.be/1JlIwmnYnlE)
- [The documentary "Gaza lucha por la libertad"](https://youtu.be/HnZSaKYmP2s)
- [The documentary "Los hijos de Arna"](https://youtu.be/cQZiHgbBBcI)
- [The short film "Strawberry"](https://vimeo.com/209189656/e5510a6064)
- [The short film "The Place"](https://youtu.be/fgcIVhNvsII)
- [The documentary "El alcalde"](https://youtu.be/aDvOnhssTcc)
- [The documentary "La creación y la Nakba 1948"](https://youtu.be/Bwy-Rf15UIs)
- [The documentary "Ocupación 101"](https://youtu.be/C56QcWOGSKk)
- [The documentary "La sombra de la ausencia"](https://vimeo.com/220119035)
- [The documentary "Los que no existen"](https://youtu.be/2WZ_7Z6vbsg)
- [The documentary "Como dijo el poeta"](https://vimeo.com/220116068)
- [The documentary "Cinco cámaras rotas"](https://youtu.be/TZU9hYIgXZw)
- [The feature film "Paradise Now"](https://vimeo.com/510883804)
- [The short film "Abnadam"](https://youtu.be/I--r85cOoXM)
- [The feature film "Bodas de Galilea"](https://youtu.be/dYMQw7hQI1U)
- [The feature film "Kofia"](https://vimeo.com/780695653)
- [The feature-length documentary "Slingshot Hip Hop"](https://youtu.be/hHFlWE3N9Ik)
- [The feature-length documentary "Tel Al-Zaatar"](https://youtu.be/Ma8H3sEbqtI)
- [The feature-length documentary "Tal al-Zaatar - Detrás de la batalla"](https://youtu.be/Ma8H3sEbqtI)
- [The documentary "In the Grip of the Resistance"](https://youtu.be/htJ10ACWQJM)
- [The documentary "Swings"](https://youtu.be/gMk-Zi9vTGs)
- [The documentary "Naji al-Ali es un artista visionario"](https://youtu.be/Y31yUi4WVsU)
- [The documentary "La puerta superior"](https://vimeo.com/433362585)
- [The feature-length documentary "En busca de Palestina"](https://vimeo.com/184213685?1)
- [The feature film "La sal de este mar"](https://bit.ly/3c10G3Z)
- [The feature-length documentary "Hakki Ya Bird"](https://youtu.be/wdkoxBjKM1Q)
- [The series "Palestina Al-Taghriba"](https://bit.ly/3bXNAVp)
- [The series "Yo soy Jerusalén"](https://bit.ly/3hG8sDV)

feat(pipx#Upgrading python version of all your pipx packages): Upgrading python version of all your pipx packages

If you upgrade the main python version and remove the old one (a dist upgrade) then you won't be able to use the installed packages.

If you're lucky enough to still have the old one, you can use:

```bash
pipx reinstall-all --python <the Python executable file>
```

Otherwise, you need to export all the packages first with `pipx list --json > ~/pipx.json`.

Then reinstall one by one:

```bash
set -ux
if [[ -e ~/pipx.json ]]; then
	for p in $(cat ~/pipx.json | jq -r '.venvs[].metadata.main_package.package_or_url'); do
		pipx install $p
	done
fi
```

The problem is that this method does not respect the version constraints or the injected packages, so you may need to debug each package a bit.

feat(pretalx#Install): Install

NOTE: it's probably too much for a small event.

**[Docker compose](https://github.com/pretalx/pretalx-docker)**

[The default docker compose doesn't work](https://github.com/pretalx/pretalx-docker/issues/75) as it still uses [mysql which was dropped](https://pretalx.com/p/news/releasing-pretalx-2024-3-0/). If you want to use sqlite just remove the database configuration.

```yaml
---
services:
  pretalx:
    image: pretalx/standalone:v2024.3.0
    container_name: pretalx
    restart: unless-stopped
    depends_on:
      - redis
    environment:
      # Hint: Make sure you serve all requests for the `/static/` and `/media/` paths when debug is False. See [installation](https://docs.pretalx.org/administrator/installation/#step-7-ssl) for more information
      PRETALX_FILESYSTEM_MEDIA: /public/media
      PRETALX_FILESYSTEM_STATIC: /public/static
    ports:
      - "127.0.0.1:80:80"
    volumes:
      - ./conf/pretalx.cfg:/etc/pretalx/pretalx.cfg:ro
      - pretalx-data:/data
      - pretalx-public:/public

  redis:
    image: redis:latest
    container_name: pretalx-redis
    restart: unless-stopped
    volumes:
      - pretalx-redis:/data

volumes:
  pretalx-data:
  pretalx-public:
  pretalx-redis:
```

I was not able to find the default admin user, so I had to create it manually. Get into the docker container:

```bash
docker exec -it pretalx bash
```

When you run the commands, by default they use another database file, `/pretalx/src/data/db.sqlite3`, so I removed it and created a symbolic link to the actual location of the database, `/data/db.sqlite3`:

```bash
pretalxuser@82f886a58c57:/$ rm /pretalx/src/data/db.sqlite3
pretalxuser@82f886a58c57:/$ ln -s /data/db.sqlite3 /pretalx/src/data/db.sqlite3
```

Then you can create the admin user:

```bash
python -m pretalx createsuperuser
```

fix(python_plugin_system): Write python plugins with entrypoints

When building Python applications, it's good to develop the core of your program and allow extension via plugins.

Since [python 3.8 this is native thanks to entry points for plugins!](https://setuptools.pypa.io/en/latest/userguide/entry_point.html#entry-points-for-plugins)

Let us consider a simple example to understand how we can implement entry points corresponding to plugins. Say we have a package `timmins` with the following directory structure:

```
timmins
├── pyproject.toml        # and/or setup.cfg, setup.py
└── src
    └── timmins
        └── __init__.py
```

and in `src/timmins/__init__.py` we have the following code:

```python
def display(text):
    print(text)

def hello_world():
    display('Hello world')
```

Here, the `display()` function controls the style of printing the text, and the `hello_world()` function calls the `display()` function to print the text `Hello world`.

Now, let us say we want to print the text `Hello world` in different ways. Say we want another style in which the text is enclosed within exclamation marks:

```
!!! Hello world !!!
```

Right now the `display()` function just prints the text as it is. In order to be able to customize it, we can do the following. Let us introduce a new group of entry points named `timmins.display`, and expect plugin packages implementing this entry point to supply a `display()`-like function. Next, to be able to automatically discover plugin packages that implement this entry point, we can use the `importlib.metadata` module, as follows:

```python
from importlib.metadata import entry_points
display_eps = entry_points(group='timmins.display')
```

Note: Each `importlib.metadata.EntryPoint` object is an object containing a `name`, a `group`, and a `value`. For example, after setting up the plugin package as described below, `display_eps` in the above code will look like this:

```python
(
EntryPoint(name='excl', value='timmins_plugin_fancy:excl_display', group='timmins.display'),
...,
)
```

`display_eps` will now be a list of `EntryPoint` objects, each referring to `display()`-like functions defined by one or more installed plugin packages. Then, to import a specific `display()`-like function - let us choose the one corresponding to the first discovered entry point - we can use the `load()` method as follows:

```python
display = display_eps[0].load()
```

Finally, a sensible behaviour would be that if we cannot find any plugin packages customizing the `display()` function, we should fall back to our default implementation which prints the text as it is. With this behaviour included, the code in `src/timmins/__init__.py` finally becomes:

```python
from importlib.metadata import entry_points
display_eps = entry_points(group='timmins.display')
try:
    display = display_eps[0].load()
except IndexError:
    def display(text):
        print(text)

def hello_world():
    display('Hello world')
```

That finishes the setup on timmins’s side. Next, we need to implement a plugin which implements the entry point `timmins.display`. Let us name this plugin timmins-plugin-fancy, and set it up with the following directory structure:

```
timmins-plugin-fancy
├── pyproject.toml # and/or setup.cfg, setup.py
└── src
    └── timmins_plugin_fancy
        └── __init__.py
```

And then, inside `src/timmins_plugin_fancy/__init__.py`, we can put a function named `excl_display()` that prints the given text surrounded by exclamation marks:

```python
def excl_display(text):
    print('!!!', text, '!!!')
```

This is the `display()`-like function that we are looking to supply to the timmins package. We can do that by adding the following in the configuration of `timmins-plugin-fancy`:
`pyproject.toml`

```toml

[project.entry-points."timmins.display"]
excl = "timmins_plugin_fancy:excl_display"
```

Basically, this configuration states that we are supplying an entry point under the group `timmins.display`. The entry point is named `excl` and it refers to the function `excl_display` defined by the package `timmins-plugin-fancy`.

Now, if we install both `timmins` and `timmins-plugin-fancy`, we should get the following:

```python
>>> from timmins import hello_world

>>> hello_world()
!!! Hello world !!!
```

whereas if we only install `timmins` and not `timmins-plugin-fancy`, we should get the following:

```python
>>> from timmins import hello_world

>>> hello_world()
Hello world
```

Therefore, our plugin works.

Our plugin could have also defined multiple entry points under the group `timmins.display`. For example, in `src/timmins_plugin_fancy/__init__.py` we could have two `display()`-like functions, as follows:

```python
def excl_display(text):
    print('!!!', text, '!!!')

def lined_display(text):
    print(''.join(['-' for _ in text]))
    print(text)
    print(''.join(['-' for _ in text]))
```

The configuration of `timmins-plugin-fancy` would then change to:

```toml
[project.entry-points."timmins.display"]
excl = "timmins_plugin_fancy:excl_display"
lined = "timmins_plugin_fancy:lined_display"
```

On the `timmins` side, we can also use a different strategy of loading entry points. For example, we can search for a specific display style:

```python
display_eps = entry_points(group='timmins.display')
try:
    display = display_eps['lined'].load()
except KeyError: # if the 'lined' display is not available, use something else
    ...
```

Or we can also load all plugins under the given group. Though this might not be of much use in our current example, there are several scenarios in which this is useful:

```python
display_eps = entry_points(group='timmins.display')
for ep in display_eps:
    display = ep.load() # do something with display
    ...
```

Another point is that in this particular example, we have used plugins to customize the behaviour of a function (`display()`). In general, we can use entry points to enable plugins to not only customize the behaviour of functions, but also of entire classes and modules.

In summary, entry points allow a package to open its functionalities for customization via plugins. The package soliciting the entry points need not have any dependency or prior knowledge about the plugins implementing the entry points, and downstream users are able to compose functionality by pulling together plugins implementing the entry points.

feat(remote_machine_learning): analyse the state of the art of remote machine learning solutions

Recent ML models (whether LLMs or not) can often require more resources than available on a laptop.

For experiments and research, it would be very useful to be able to serve ML models running on machines with proper computing resources (RAM/CPU/GPU) and run remote inference through gRPC/HTTP.

Note: I didn't include [skypilot](https://docs.skypilot.co/en/latest/overview.html) but it also looks promising.

**Specs**
- support for batch inference
- open source
- actively maintained
- async and sync API
- K8s compatible
- easy to deploy a new model
- support arbitrary ML models
- gRPC/HTTP APIs

**Candidates**
**[vLLM](https://docs.vllm.ai/en/latest/)**
Pros:
- trivial to deploy and use

Cons:
- only support recent ML models

**[Kubeflow](https://www.kubeflow.org/docs/started/introduction/) + [Kserve](https://www.kubeflow.org/docs/external-add-ons/kserve/)**
Pros:
- tailored for k8s and serving
- kube pipeline for training

Cons:
- Kserve is not framework agnostic: inference runtimes need to be implemented to be available (currently there are a lot of them available, but that implies a delay when a new framework/lib pops up)

**[BentoML](https://bentoml.com/)**
Pros:
- agnostic/flexible
Cons:
- only a shallow [integration with k8s](https://github.com/bentoml/Yatai)

**[Nvidia triton](https://developer.nvidia.com/triton-inference-server)**
Cons:
- only for GPU/Nvidia backed models, no traditional models

**[TorchServe](https://pytorch.org/serve/)**
Cons:
- limited maintenance
- only for torch models, not traditional ML

**[Ray](https://docs.ray.io/en/latest) + [Ray Serve](https://docs.ray.io/en/latest/serve/index.html)**
Pros:
- fits very well with [K8s](https://docs.ray.io/en/latest/serve/production-guide/kubernetes.html) (from a user standpoint at least). Will allow to easily elastically deploy ML models (a single model) and apps (a more complex ML workflow)
- inference framework agnostic
- [vLLM support](https://docs.ray.io/en/latest/serve/llm/overview.html)
- seems to be the most popular/active project at the moment
- support training + generic data processing: tasks and DAGs of tasks. Very well suited to ML experiments/research
- tooling/monitoring tools to monitor inference + metrics for grafana

Cons:
- a ray operator node is needed to manage worker nodes (can we use keda or something else to shut it down when not needed?)
- ray's flexibility/agnosticity comes at the cost of some minor boilerplate code to be implemented (to expose a HTTP service for instance)

Ray comes in first place, followed by Kserve.

feat(rofi): Deprecate in favour of fzf

DEPRECATED: [Use fzf instead](https://medium.com/njiuko/using-fzf-instead-of-dmenu-2780d184753f)

feat(sanoid#ERROR: No valid lockfile found - Did a rogue process or user update or delete it?): Troubleshoot ERROR: No valid lockfile found - Did a rogue process or user update or delete it?

Usually it's because many sanoid commands are running at the same time. This is often the case if you're doing a zfs scrub, as sanoid commands take longer to run.
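
To confirm that this is what's happening, you can check for overlapping sanoid processes and for a running scrub (a sketch; adjust to your setup):

```bash
# Several simultaneous sanoid/syncoid processes hint at overlapping runs
pgrep -fa sanoid
# A scrub in progress slows everything down
zpool status | grep -A 2 'scan:'
```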

feat(smartctl): Add safety note

You can run the tests even [if the disk is in use](https://superuser.com/questions/631377/is-it-safe-to-use-to-disk-if-extended-smart-check-is-in-progress) as the checks do not modify anything on your disk and thus will not interfere with your normal usage.
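
For example, a long self-test can be launched and monitored while the disk is mounted and in use (assuming the disk is `/dev/sda`):

```bash
sudo smartctl -t long /dev/sda
# Check the progress and the results of the self-test
sudo smartctl -a /dev/sda | grep -iE 'progress|self-test'
```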

feat(sonarr#Protect sonarr behind authentik): Protect sonarr behind authentik

We'll protect sonarr using its HTTP Basic Auth behind authentik. To do that we need to save the Basic auth credentials into the `sonarr admin` group:

```terraform
resource "authentik_group" "sonarr_admin" {
  name         = "sonarr admin"
  is_superuser = false
  attributes = jsonencode(
    {
      sonarr_password = "<the password>"
      sonarr_user     = "<the user>"
    }
  )
  users = [
    data.authentik_user.<your_user>.id,
  ]
}
```

Then we'll configure the provider proxy to use these credentials.

```terraform

variable "sonarr_url" {
  type        = string
  description = "The url to access the service."
}

variable "sonarr_internal_url" {
  type        = string
  description = "The url authentik proxies the traffic to reach sonarr."
  default     = "http://sonarr:8989"
}

variable "sonarr_icon" {
  type        = string
  description = "The icon shown in the application"
  default     = "/application-icons/sonarr.svg"
}

resource "authentik_provider_proxy" "sonarr" {
  name                          = "sonarr"
  internal_host                 = var.sonarr_internal_url
  external_host                 = var.sonarr_url
  authorization_flow            = data.authentik_flow.default-authorization-flow.id
  basic_auth_enabled            = true
  basic_auth_password_attribute = "sonarr_password"
  basic_auth_username_attribute = "sonarr_user"
  invalidation_flow             = data.authentik_flow.default-provider-invalidation-flow.id
  internal_host_ssl_validation  = false
  access_token_validity         = "minutes=120"
}

resource "authentik_application" "sonarr" {
  name              = "Sonarr"
  slug              = "sonarr"
  meta_icon         = var.sonarr_icon
  protocol_provider = authentik_provider_proxy.sonarr.id
  lifecycle {
    ignore_changes = [
      # The terraform provider is continuously changing the attribute even though it's set
      meta_icon,
    ]
  }
}

resource "authentik_policy_binding" "sonarr_admin" {
  target = authentik_application.sonarr.uuid
  group  = authentik_group.sonarr_admin.id
  order  = 1
}
resource "authentik_policy_binding" "sonarr_admins" {
  target = authentik_application.sonarr.uuid
  group  = authentik_group.admins.id
  order  = 1
}

resource "authentik_outpost" "default" {
  name               = "authentik Embedded Outpost"
  service_connection = authentik_service_connection_docker.local.id
  protocol_providers = [
    authentik_provider_proxy.sonarr.id,
  ]
}
```

If you try to copy-paste the above terraform code you'll see that there are some missing resources; most of them are described [here](wg-easy.md).

feat(tracker_manager#Comparison between jackett and prowlarr): Comparison between jackett and prowlarr

Both Jackett and Prowlarr are indexer management applications commonly used with media automation tools like Sonarr and Radarr. Here's how they compare:

**Similarities**

- Both serve as proxy servers that translate searches from applications into queries that torrent trackers and usenet indexers can understand
- Both can integrate with services like Sonarr, Radarr, Lidarr, and Readarr
- Both are open-source projects
- Both support a wide range of indexers

**[Jackett](https://github.com/Jackett/Jackett)**

- **Age**: Older, more established project
- **Popular**: It has 1.4k forks and 13.1k stars
- **More active**: In the [last month](https://github.com/Jackett/Jackett/pulse/monthly) as of 2025-03 it had activity on 117 issues and 11 pull requests
- **Architecture**: Standalone application with its own web interface
- **Integration**: Requires manual setup in each \*arr application with individual API keys
- **Updates**: Requires manual updating of indexers
- **Configuration**: Each client (\*arr application) needs its own Jackett instance configuration
- **Proxy Requests**: All search requests go through Jackett

**[Prowlarr](https://github.com/Prowlarr/Prowlarr)**

- **Origin**: Developed by the Servarr team (same developers as Sonarr, Radarr, etc.)
- **Architecture**: Follows the same design patterns as other \*arr applications
- **Integration**: Direct synchronization with other \*arr apps
- **Updates**: Automatically updates indexers
- **Configuration**: Centralized management - configure once, sync to all clients
- **API**: Native integration with other \*arr applications
- **History**: Keeps a search history and statistics
- **Notifications**: Better notification system
- **Less popular**: It has 213 forks and 4.6k stars
- **Way less active**: In the [last month](https://github.com/Prowlarr/Prowlarr/pulse/monthly) as of 2025-03 it had activity on 10 issues and 3 pull requests
- Can be protected behind [authentik](authentik.md) and still be protected with basic auth

**Key Differences**

1. **Management**: Prowlarr can push indexer configurations directly to your \*arr applications, while Jackett requires manual configuration in each app
2. **Maintenance**: Prowlarr generally requires less maintenance as it handles updates more seamlessly
3. **UI/UX**: Prowlarr has a more modern interface that matches other \*arr applications
4. **Integration**: Prowlarr was specifically designed to work with the \*arr ecosystem

**Recommendation**

Prowlarr is generally considered the superior option if you're using multiple \*arr applications, as it offers better integration, centralized management, and follows the same design patterns as the other tools in the ecosystem. However, Jackett is still more popular and more active, works well, and might be preferable if you're already using it and are comfortable with its setup.

feat(wake_on_lan#Testing that the packets arrive): Testing that the packets arrive

With `nc` you can listen on a UDP port. The magic packet is usually sent to port 9 via broadcast, so the command would be:

```bash
nc -ul 9
```

Depending on the `nc` implementation, you may also need to provide the `-p` flag:

```bash
nc -ul -p 9
```

To test it, use the `wakeonlan` command...

```bash
wakeonlan -i <your-ip> <your-mac>
```

...and check that the output shows up in the `nc` terminal.
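If no packet shows up, check that the target NIC actually has Wake-on-LAN enabled. A quick sketch, assuming the interface is called `eth0`:

```bash
# Show the supported and currently active wake-on-lan modes
ethtool eth0 | grep -i wake-on
# Enable the magic packet mode ("g") if it's not active
ethtool -s eth0 wol g
```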

feat(wake_on_lan#Configure the wakeonlan as a cron): Configure the wakeonlan as a cron

On the device that should send the wake-on-lan packet, add the following entry with `crontab -e`:

```cron
*/10 * * * * systemd-cat -t wake_on_lan wakeonlan -i <your ip> <your mac>
```
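Since the job logs through `systemd-cat` with the `wake_on_lan` tag, you can verify it's firing by querying the journal:

```bash
journalctl -t wake_on_lan --since "1 hour ago"
```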

feat(wake_on_lan#Monitor the wakeonlan): Monitor the wakeonlan

To check that it's running you can create the following Loki alert:

```yaml
      - alert: WakeOnLanNotRunningError
        expr: |
          (count_over_time({syslog_identifier="wake_on_lan"} [1h]) or on() vector(0)) == 0
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "The cronjob that wakes on lan is not working"
          message: 'Check the logs of {job="systemd-journal", syslog_identifier="wake_on_lan"}'
```
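If you want to test the expression by hand before deploying the alert you can run it with `logcli` (assuming it's already pointed at your Loki instance):

```bash
logcli query 'count_over_time({syslog_identifier="wake_on_lan"} [1h])'
```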

feat(wallabag): Suggest trying karakeep

NOTE: check out [karakeep](https://github.com/karakeep-app/karakeep) ([home](https://karakeep.app/)), it may be a better solution

feat(wg_easy#With docker): Install with docker

If you want to use the Prometheus metrics [you need to use a version greater than 14](https://github.com/wg-easy/wg-easy/issues/1373). As `15` was [not yet released](https://github.com/wg-easy/wg-easy/pkgs/container/wg-easy/versions) as of 2025-03-20, I'm using `nightly`.

Tweak the next docker compose to your liking:

```yaml
---
services:
  wg-easy:
    environment:
      - WG_HOST=<the-url-or-public-ip-of-your-server>
      - WG_PORT=<select the wireguard port>

    # Until the 15 tag exists (once 15 is released you can pin that tag instead of nightly)
    # https://github.com/wg-easy/wg-easy/pkgs/container/wg-easy/versions
    image: ghcr.io/wg-easy/wg-easy:nightly
    container_name: wg-easy
    networks:
      wg:
        ipv4_address: 10.42.42.42
      wg-easy:
    volumes:
      - wireguard:/etc/wireguard
      - /lib/modules:/lib/modules:ro
    ports:
      - "<select the wireguard port>:<select the wireguard port/udp"
    restart: unless-stopped
    healthcheck:
      test: /usr/bin/timeout 5s /bin/sh -c "/usr/bin/wg show | /bin/grep -q interface || exit 1"
      interval: 1m
      timeout: 5s
      retries: 3
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    sysctls:
      - net.ipv4.ip_forward=1
      - net.ipv4.conf.all.src_valid_mark=1
      - net.ipv6.conf.all.disable_ipv6=1

networks:
  wg:
    driver: bridge
    enable_ipv6: false
    ipam:
      driver: default
      config:
        - subnet: 10.42.42.0/24
  wg-easy:
    external: true

volumes:
  wireguard:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /data/apps/wg-easy/wireguard
```

Where:

- I usually save the compose file at `/data/apps/wg-easy`
- I've disabled IPv6; check the official docker compose if you want to enable it.
- I'm not exposing the admin web interface directly; if you want to, expose port 51821. Instead I'm using [authentik](authentik.md) to protect the service, which is why I'm not setting [the `PASSWORD_HASH`](https://github.com/wg-easy/wg-easy/blob/production/How_to_generate_an_bcrypt_hash.md). To protect it even further, only the authentik and prometheus dockers have network access to the `wg-easy` one, so in theory no unauthorised access should occur.
- `wg-easy` is the external network I'm creating to [connect this docker to the authentik and prometheus ones](docker.md#limit-the-access-of-a-docker-on-a-server-to-the-access-on-the-docker-of-another-server)
- You'll need to add the `wg-easy` network to the `authentik` docker-compose, as shown in the sketch below.
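A minimal sketch of the authentik side of that change (the service name `authentik-server` is an assumption, use whatever your compose file calls it):

```yaml
# Only the network-related bits, the rest of the authentik compose stays as is
services:
  authentik-server:
    networks:
      - default
      - wg-easy

networks:
  wg-easy:
    external: true
```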

The systemd service to start `wg-easy` is:

```ini
[Unit]
Description=wg-easy
Requires=docker.service
After=docker.service

[Service]
Restart=always
User=root
Group=docker
WorkingDirectory=/data/apps/wg-easy
TimeoutStartSec=100
RestartSec=2s
ExecStart=/usr/bin/docker compose -f docker-compose.yaml up
ExecStop=/usr/bin/docker compose -f docker-compose.yaml down

[Install]
WantedBy=multi-user.target
```
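Assuming you save it as `/etc/systemd/system/wg-easy.service`, enable and start it with:

```bash
systemctl daemon-reload
systemctl enable --now wg-easy.service
```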

To forward the traffic from nginx to authentik use this site config:

```nginx
server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name vpn.*;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    location / {
        include /config/nginx/proxy.conf;
        include /config/nginx/resolver.conf;
        set $upstream_app authentik;
        set $upstream_port 9000;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;

        proxy_set_header Range $http_range;
        proxy_set_header If-Range $http_if_range;
    }
}
```
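After adding the site config, check the syntax and reload nginx. If you're running it inside a container (for example the linuxserver swag image, container name assumed to be `swag`):

```bash
docker exec swag nginx -t
docker exec swag nginx -s reload
```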

To configure authentik to forward the traffic to `wg-easy` use this terraform code:

```terraform

variable "wg_easy_url" {
  type        = string
  description = "The url to access the service."
}

variable "wg_easy_internal_url" {
  type        = string
  description = "The url authentik proxies the traffic to reach wg_easy."
  default     = "http://wg-easy:51821"
}

variable "wg_easy_icon" {
  type        = string
  description = "The icon shown in the application"
  default     = "/application-icons/wireguard.png"
}

resource "authentik_provider_proxy" "wg_easy" {
  name                         = "wg_easy"
  internal_host                = var.wg_easy_internal_url
  external_host                = var.wg_easy_url
  authorization_flow           = data.authentik_flow.default-authorization-flow.id
  invalidation_flow            = data.authentik_flow.default-provider-invalidation-flow.id
  internal_host_ssl_validation = false
  access_token_validity        = "minutes=120"
}

resource "authentik_application" "wg_easy" {
  name              = "Wireguard"
  slug              = "wireguard"
  meta_icon         = var.wg_easy_icon
  protocol_provider = authentik_provider_proxy.wg_easy.id
  lifecycle {
    ignore_changes = [
      # The terraform provider is continuously changing the attribute even though it's set
      meta_icon,
    ]
  }
}

resource "authentik_policy_binding" "wg_easy_admin" {
  target = authentik_application.wg_easy.uuid
  group  = authentik_group.admins.id
  order  = 0
}

resource "authentik_outpost" "default" {
  name               = "authentik Embedded Outpost"
  service_connection = authentik_service_connection_docker.local.id
  protocol_providers = [
    authentik_provider_proxy.wg_easy.id,
  ]
}

resource "authentik_service_connection_docker" "local" {
  name  = "Local Docker connection"
  local = true
}

data "authentik_flow" "default_invalidation_flow" {
  slug = "default-invalidation-flow"
}

data "authentik_flow" "default-authorization-flow" {
  slug = "default-provider-authorization-implicit-consent"
}

data "authentik_flow" "default-authentication-flow" {
  slug = "default-authentication-flow"
}

data "authentik_flow" "default-provider-invalidation-flow" {
  slug = "default-provider-invalidation-flow"
}

```
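The only variable without a default is `wg_easy_url`, so a minimal `terraform.tfvars` could look like this (the domain is a placeholder):

```terraform
wg_easy_url = "https://vpn.example.org"
```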

feat(wg_easy#Split tunneling): Split tunneling

If you only want to route certain IPs through the VPN you can use the `AllowedIPs` WireGuard configuration. You can set them through the `WG_ALLOWED_IPS` docker compose environment variable:

```bash
WG_ALLOWED_IPS=1.1.1.1,172.27.1.0/16
```

It's important to keep the DNS server's IP inside the allowed IPs, as shown in the sketch below.
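For example (the IPs are placeholders; `WG_DEFAULT_DNS` is the wg-easy variable that sets the clients' DNS server):

```yaml
# environment section of the wg-easy service in the docker compose
environment:
  - WG_ALLOWED_IPS=10.42.42.0/24,192.168.1.0/24
  - WG_DEFAULT_DNS=192.168.1.1 # this IP must be covered by WG_ALLOWED_IPS
```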
@lunny lunny mentioned this pull request May 1, 2025
@Zettat123 Zettat123 force-pushed the support-actions-concurrency branch from 366d438 to a73ff44 Compare May 1, 2025 22:01
@Zettat123 Zettat123 force-pushed the support-actions-concurrency branch from a73ff44 to 5b12954 Compare May 1, 2025 22:03
@@ -0,0 +1,29 @@
// Copyright 2025 The Gitea Authors. All rights reserved.
Member

Since v1.24 has been frozen, please move this to v1_25
