Introduction
In the previous article, a GCP VM was set up through a Gitea Action and Docker was installed on it. This blog post now explores how to set up a basic NGINX webserver on the GCP VM and how to set up Grafana and Prometheus for monitoring on the OCI VM. The challenge here is to connect the services running on two different systems, export metrics from the webserver via the metric collector Prometheus and display them in Grafana. As a special task I will show how to collect information about the IP addresses accessing the webserver, gather information about the requesters' locations and display those as coordinates on a map in a Grafana dashboard.
Tests
Monitoring of the services cannot be done on the target VM since it is regarded as insecure and has to be disposable. That means that the metric collector Prometheus and the visualization tool Grafana need to run on the OCI VM. However, the NGINX webserver itself needs to be publicly available and therefore runs on the GCP VM. The target VM is not able to connect to the jumphost due to firewall constraints; only the jumphost can access the target VM and query it for data. Because of this setup, a pull-based mechanism is necessary in which Prometheus regularly checks open endpoints on the target VM and pulls the newest metrics. For this reason it was not possible to use other popular tools like Promtail in combination with Loki, which push data from the target instead of being pulled. Another approach was to configure Filebeat from the ELK stack, but this was also to no avail.
Webserver and Exporters on Target
There are four components with which I set up my workflow:
- The NGINX webserver: runs on port 80, serves a basic `index.html` file without JavaScript and is configured by an `nginx.conf` file.
- An nginx-exporter: it offers metrics about the state of the webserver on port 9113 and can be regularly scraped by Prometheus.
- A node-exporter: this service offers information like CPU or memory usage of the VM itself on port 9100 and is also connected to Prometheus.
- A custom geoip-exporter: this is a unique service that exposes information about requests to the webserver, ranging from time and date to the location determined by the source IP address of the request. It needs a dedicated `Dockerfile.geoip-exporter` and a custom HTTP server that collects and transforms the IP logs to make them accessible for Prometheus. This second HTTP server is run by the `geoip_exporter.py` script and uses the GeoLite2-City.mmdb database, which maps a source IP address to longitude and latitude coordinates and is needed for pinning requests to the NGINX webserver on a map. A minimal sketch of such an exporter follows below.
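To make the idea of the geoip-exporter more concrete, here is a minimal sketch of what such a script could look like. It is not the exact `geoip_exporter.py` from the repository; the metric name, the log path and the log format are assumptions, only the port 9114 and the GeoLite2-City.mmdb database come from the setup described here. It relies on the `prometheus_client` and `geoip2` Python packages.

```python
# Minimal sketch of a GeoIP exporter (not the exact geoip_exporter.py from the repo).
# It tails the NGINX access log, resolves each client IP with GeoLite2-City.mmdb
# and exposes a counter labelled with latitude/longitude for Prometheus on port 9114.
import time

import geoip2.database                                      # pip install geoip2
from prometheus_client import Counter, start_http_server    # pip install prometheus-client

LOG_PATH = "/var/log/nginx/access.log"   # assumption: default NGINX access log location
DB_PATH = "GeoLite2-City.mmdb"

REQUESTS = Counter(
    "nginx_geoip_requests_total",        # metric name is an assumption
    "HTTP requests by resolved client location",
    ["latitude", "longitude"],
)

def follow(path):
    """Yield new lines appended to the log file (like `tail -f`)."""
    with open(path) as f:
        f.seek(0, 2)                     # jump to the end of the file
        while True:
            line = f.readline()
            if not line:
                time.sleep(1)
                continue
            yield line

def main():
    reader = geoip2.database.Reader(DB_PATH)
    start_http_server(9114)              # port scraped by the geoip-exporter job
    for line in follow(LOG_PATH):
        ip = line.split(" ", 1)[0]       # NGINX combined log format starts with the client IP
        try:
            loc = reader.city(ip).location
        except Exception:
            continue                     # private or unknown IPs have no location
        if loc.latitude is None or loc.longitude is None:
            continue
        REQUESTS.labels(latitude=str(loc.latitude), longitude=str(loc.longitude)).inc()

if __name__ == "__main__":
    main()
```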
I abstain from pasting all the code of these files into this article. You will probably just `scp` them to your own GCP VM or run the `server_deploy` workflow in the Gitea repository.
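If you copy the files manually, something along the lines of `scp -r <webserver-folder>/ <user>@<gcp-vm>:~/` is enough; the folder, user and host names are placeholders and depend on your own setup.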
Monitoring on Jumphost
The monitoring services can be found in the `monitoring` folder of the GitHub repository and consist of the `docker-compose.yml` for Grafana and Prometheus:
```yaml
services:
  prometheus:
    image: prom/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
    labels:
      - traefik.enable=true
      - traefik.http.routers.prometheus.rule=Host(`prometheus.paulelser.com`)
      - traefik.http.routers.prometheus.entrypoints=websecure
      - traefik.http.routers.prometheus.tls=true
      - traefik.http.services.prometheus.loadbalancer.server.port=9090
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - prometheus_data:/prometheus
    networks:
      - gitea_network

  grafana:
    image: grafana/grafana
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin_password
      - GF_SERVER_ROOT_URL=https://grafana.paulelser.com
      - GF_SERVER_DOMAIN=grafana.paulelser.com
    volumes:
      - grafana_data:/var/lib/grafana
    depends_on:
      - prometheus
    networks:
      - gitea_network
    labels:
      - traefik.enable=true
      - traefik.http.routers.grafana.rule=Host(`grafana.paulelser.com`)
      - traefik.http.routers.grafana.entrypoints=websecure
      - traefik.http.routers.grafana.tls=true
      - traefik.http.services.grafana.loadbalancer.server.port=3000

volumes:
  prometheus_data:
  grafana_data:

networks:
  gitea_network:
    external: true
```
The second file in this directory is the `prometheus.yml`:
```yaml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'node-exporter'
    static_configs:
      - targets: ['target.paulelser.com:9100']
  - job_name: 'nginx'
    static_configs:
      - targets: ['target.paulelser.com:9113']
  - job_name: 'geoip-exporter'
    static_configs:
      - targets: ['target.paulelser.com:9114']
```
This setup can easily be run natively on the jumphost with `docker compose up -d` inside the `monitoring` folder, which creates the two containers. The Prometheus container uses the `prometheus.yml` file and offers a UI on port 9090. Since it is also routed by Traefik, it can be accessed at prometheus.paulelser.com with my DNS settings. Under Status -> Targets you will see detailed information about the three exporters defined in the `prometheus.yml` file and about Prometheus itself.
The second service is Grafana, which is accessible at grafana.paulelser.com with my DNS settings after spinning up the Docker containers. Grafana offers the possibility to create your own dashboards with the metrics that are imported from Prometheus. You are very welcome to play around with the UI a little. However, I also included a `GeoIPDashboard.json` for an easy start that also shows a map with the locations of the requests to the webserver.
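For the map panel, the idea is to group the requests by the coordinate labels exposed by the geoip-exporter so Grafana can place them on a geomap. Assuming the metric name from the exporter sketch above (the actual name depends on your `geoip_exporter.py`), a query like `sum by (latitude, longitude) (nginx_geoip_requests_total)` returns one series per location that the panel can plot.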
It should be possible to also trigger the `monitoring_deploy` workflow through a Gitea Action, but unfortunately I have not been able to get this running yet. Any help is appreciated 😉