[CloudStarter] Gitea Actions: Provisioning and Basic Configuration of Second VM

October 15, 2024

Installation of a Runner

Runners on Gitea can be self-hosted in two ways:

  1. As a binary that is run as a systemd service
  2. As a Docker container

I decided to use a Docker container since it is easier to set up and to integrate into the rest of the workflow that you'll see soon. If you prefer the systemd route, I have provided an installation script in the repository; the sketch below shows roughly what the manual steps look like.
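For reference, registering the binary by hand follows this pattern (the download URL and version number are assumptions, check https://dl.gitea.com/act_runner/ for the latest release):

  # download the act_runner binary and make it executable (version path assumed)
  wget -O act_runner https://dl.gitea.com/act_runner/0.2.11/act_runner-0.2.11-linux-amd64
  chmod +x act_runner
  # register against your Gitea instance; the token comes from the UI (see below)
  ./act_runner register --no-interactive \
    --instance https://git.paulelser.com \
    --token <your-token-here> \
    --name gitea-runner-free-tier-instance
  # run once in the foreground to test, then wrap it in a systemd unit
  ./act_runner daemon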

For convenience, and so that the runner can reach the Gitea instance over the internal Docker network, it makes sense to add it as another service to the Docker Compose file from the previous article.

  gitea_runner:
    image: gitea/act_runner:latest
    container_name: gitea_runner
    networks:
      - gitea_network
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - GITEA_INSTANCE_URL=http://gitea:3000
      - GITEA_RUNNER_REGISTRATION_TOKEN=***your-token-here***
      - GITEA_RUNNER_NAME=gitea-runner-free-tier-instance
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 300M
    restart: always

The runner requires a GITEA_RUNNER_REGISTRATION_TOKEN that you can get by navigating to your Gitea -> Settings -> Actions -> Runners -> Create new Runner and copying the registration token. I limited the runner's memory to 300 MB since the VM already uses around 500 MB for the other services, and I had experienced a couple of hefty lags due to system overload.
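Once the service is defined and the token is in place, you can start the runner and watch it register (the exact log wording may differ between versions):

  docker compose up -d gitea_runner
  docker compose logs -f gitea_runner
  # the runner should now show up under Settings -> Actions -> Runners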

Running a Runner

Create a new repository on your own Gitea server. In it, add a folder called .gitea and, inside that, another folder called workflows. Our workflow files reside in there.
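From a fresh clone of the repository, that looks like this (the file name is my own choice; any YAML file in that folder gets picked up):

  mkdir -p .gitea/workflows
  touch .gitea/workflows/manage-vm.yml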

The first step is to create a Gitea Action that checks whether the second VM already exists and, if not, uses Terraform to create it. It then installs Ansible on the runner and uses it to set up Docker on the VM, making the following workflows possible.

Actions can be triggered by many different events. My preferred way would have been a workflow_dispatch trigger, which adds a button to the UI for starting a workflow manually. Unfortunately, I didn't get it to work. In case you fix it, please let me know! For reference, the trigger would have looked like this in standard Actions syntax:
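  on:
    push:
      branches: [main]
    workflow_dispatch:   # should add a "Run workflow" button; it never showed up on my instance

What does work, however, is triggering the Action with a commit to the main branch, and that's what happens in the full workflow: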

name: GCP VM Management
on:
  push:
    branches: [main]

env:
  PROJECT_ID: long-classifier-435414-r1
  VM_NAME: gcp-free
  ZONE: us-central1-f
  GITEA_URL: https://git.paulelser.com

jobs:
  manage-vm:
    runs-on: ubuntu-latest
    steps:
      - name: Manually clone repository
        env:
          GITHUB_PAT: ${{ secrets.GITHUB }}
        run: |
          git config --global url."https://api:${GITHUB_PAT}@github.com/".insteadOf "https://github.com/"
          git clone --depth 1 https://github.com/PaulElser/DevOps.git devops

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2

      - name: Authenticate to Google Cloud
        uses: google-github-actions/auth@v1
        with:
          credentials_json: ${{ secrets.GCP_SA_KEY }}

      - name: Set up gcloud CLI
        uses: google-github-actions/setup-gcloud@v1
        with:
          project_id: ${{ env.PROJECT_ID }}

      - name: Check VM status
        id: vm_status
        run: |
          VM_STATUS=$(gcloud compute instances describe ${{ env.VM_NAME }} \
            --zone ${{ env.ZONE }} \
            --format="value(status)" 2>/dev/null || echo "NOT_FOUND")
          echo "status=$VM_STATUS" >> $GITHUB_OUTPUT
          echo "VM Status: $VM_STATUS"

      - name: Create VM if not exists
        if: steps.vm_status.outputs.status == 'NOT_FOUND'
        run: |
          cd devops/terraform/gcp
          terraform init
          terraform apply -auto-approve

      - name: Get VM IP
        id: vm_ip
        run: |
          VM_IP=$(gcloud compute instances describe ${{ env.VM_NAME }} \
            --zone ${{ env.ZONE }} \
            --format='get(networkInterfaces[0].accessConfigs[0].natIP)')
          echo "ip=$VM_IP" >> $GITHUB_OUTPUT
          echo "VM IP: $VM_IP"

      - name: Setup SSH key and get host key
        run: |
          mkdir -p ~/.ssh
          echo "${{ secrets.VM_SSH_PRIVATE_KEY }}" > ~/.ssh/ssh-key-oraclevm.key
          chmod 600 ~/.ssh/ssh-key-oraclevm.key
          ssh-keyscan -H ${{ steps.vm_ip.outputs.ip }} >> ~/.ssh/known_hosts

      - name: Install Ansible
        run: |
          sudo apt-get update
          sudo apt-get install -y python3-pip
          pip3 install ansible

      - name: Configure VM
#        if: steps.vm_status.outputs.status != 'NOT_FOUND'
        env:
          VM_IP: ${{ steps.vm_ip.outputs.ip }}
        run: |
          # update the inventory file with the GCP VM IP
          sed -i "s/{{ gcp_vm_ip }}/${VM_IP}/" devops/ansible/inventory.yml

          ansible-playbook -i devops/ansible/inventory.yml \
            --limit gcp_vm \
            devops/ansible/playbooks/install_docker.yml
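
For context, the inventory file in the repo contains a literal {{ gcp_vm_ip }} placeholder that the sed call above swaps for the real address before Ansible runs. A minimal sketch of such an inventory (host name, user, and key path are assumptions, not copied from the repo):

  all:
    children:
      gcp_vm:
        hosts:
          gcp-free:
            ansible_host: "{{ gcp_vm_ip }}"   # replaced by sed with the VM's public IP
            ansible_user: paul                # assumption: your SSH login user
            ansible_ssh_private_key_file: ~/.ssh/ssh-key-oraclevm.key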

Note that the script uses three secrets that you have to add to the repository beforehand:

  1. GCP_SA_KEY: This is the key of a service account that you create in GCP under "IAM & Admin" -> "Service Accounts" (see the gcloud sketch after this list)
  2. GITHUB: In my setup I clone from the public GitHub repository and use those files (e.g. the Terraform and Ansible scripts) for the rest of the logic. To grant access to the repo, you have to add a personal access token from GitHub, which you can create under Settings -> Developer settings -> Personal access tokens. I prefer fine-grained tokens with access only to the selected DevOps repository and read-only access to Contents and Metadata. It makes sense to use the least privileges necessary for the job and to let the PAT expire automatically after 7 days.
  3. VM_SSH_PRIVATE_KEY: This is the original SSH private key that we were already using to access the OCI VM. It is the same for our Google VM, and we have already added the corresponding public key as metadata in the main.tf for the GCP VM creation with Terraform.
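If you prefer the CLI over the console for the service account, the steps roughly look like this (the account name is made up, and roles/compute.admin is broader than strictly necessary; scope it down if you can):

  # create the service account in the project used by the workflow
  gcloud iam service-accounts create gitea-actions \
    --project=long-classifier-435414-r1 \
    --display-name="Gitea Actions"

  # allow it to manage Compute Engine instances
  gcloud projects add-iam-policy-binding long-classifier-435414-r1 \
    --member="serviceAccount:gitea-actions@long-classifier-435414-r1.iam.gserviceaccount.com" \
    --role="roles/compute.admin"

  # create a JSON key and paste its contents into the GCP_SA_KEY secret
  gcloud iam service-accounts keys create gcp-sa-key.json \
    --iam-account=gitea-actions@long-classifier-435414-r1.iam.gserviceaccount.com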

Now, if you stage everything and run git add . && git commit -m "Gentlemen, start the engines!" && git push, you trigger the Action and should end up with a second GCP VM, automatically configured with Ansible and Docker. Congratulations! 🎉

Running services

Beyond the basic Docker setup on the GCP VM, there are more services to configure, and the OCI VM needs further adaptation as well. Since both parts are interconnected, they will be covered in the following blog post.