Serving static files with django: three different ways

Dealing with static files in Django can be confusing on a first project. What does it even mean to serve static files with django? Who hasn’t reached for StackOverflow after setting DEBUG = False and finding that no images show up on the site? The thing is, django only serves static files as a convenience during development. Once we get to production, something else needs to handle them. But first, what are static files?

What are static files?

Static files are files that are not generated by the server. Think of css files, the favicon, or images uploaded by your users. They are good candidates for caching and smart delivery, since they do not depend on the logic in your django application to be created.

On the other hand, an example of a non-static file is the HTML page rendered for the visitor. To be created, it depends on the information stored in the database, or on whether the user is logged in, for example, so it cannot be decoupled from django.

So, today we will be talking about how to fulfil the specific needs of serving static files in django.

I will present three different options, ranging from super simple to enterprisey. 

Three options to serve static files with django

Serving static files with a simple django extension


Whitenoise is a Django extension that enables Django itself to serve static files. To get started, just a simple pip install and off you go. You still need to run django behind a WSGI server like gunicorn or uWSGI, but you’ll be doing that anyway in a production environment, right? This makes whitenoise particularly suitable for PaaS offerings like Heroku or DigitalOcean Apps. It is also a no-brainer if your app is not complicated in terms of static files (just some css and favicons). The cherry on top is that it does some caching, and when paired with a CDN like Cloudflare it is a very practical solution that saves you from configuring extra services.
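For reference, the setup is only a couple of lines in settings.py – a minimal sketch based on whitenoise’s documentation (on recent Django versions the storage backend is configured through the STORAGES dict instead):

MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    # whitenoise goes right after SecurityMiddleware, before everything else
    'whitenoise.middleware.WhiteNoiseMiddleware',
    # ... the rest of your middleware ...
]

# optional: serve compressed files with hashed, cache-friendly names
STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage'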

The traditional approach: nginx 

promozilla uses nginx to serve static files

Pairing a dedicated web server with django to serve your static files is the most common way to get started. A dedicated server like Nginx enables very fine-tuned control of the settings and offloads that workload to a dedicated component, while your django app focuses on processing the payments of your SaaS. The biggest drawback is that it is another thing to maintain and configure – which can be quite daunting the first time.

This ended up being the approach I went for with promozilla. To be honest, I’m very happy with it: after some initial configuration, it was fire and forget.

I decided to use nginx to serve the static files, and I go into more detail in the article I wrote about promozilla’s architecture. Nginx is a true Swiss army knife, since it also works as a reverse proxy, load balancer and a lot more.

Lastly, Nginx needs access to your site’s static files in order to serve them, so do not forget to account for that. Using docker-compose this is quite simple with a shared volume.
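On the django side, the only requirement is pointing STATIC_ROOT at a directory on that shared volume and running collectstatic during deployment – a minimal sketch, with a placeholder path:

# settings.py
STATIC_URL = '/static/'
# collectstatic gathers every app's static assets here; mount this
# directory as a volume shared with the nginx container
STATIC_ROOT = '/app/staticfiles'

Then python manage.py collectstatic fills that directory, and nginx serves it straight from disk.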

The big guns: Object Storage Services

This one is the most complex but also the most scalable. This is the way to go if you have lots of static files, for example if your app provides file downloads, video streaming, etc.

DigitalOcean Spaces

It revolves around bringing in a third-party service like Amazon S3 or DigitalOcean Spaces to serve your files. Django-storages is an extension that does the heavy lifting for you. This enables a lot of optimizations, like CDNs: a visitor fetches the static files from the closest server to them, not just from yours, which reduces loading time. Another good point is that this approach provides high durability (eleven nines, in S3’s case) and practically infinite scalability – with costs, of course.
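To give an idea of the setup, here is a minimal sketch of django-storages pointing at an S3-compatible bucket – the bucket name and endpoint are placeholders, and the endpoint setting is what lets the same backend talk to DigitalOcean Spaces:

# settings.py
INSTALLED_APPS += ['storages']

AWS_ACCESS_KEY_ID = '...'             # read these from the environment in practice
AWS_SECRET_ACCESS_KEY = '...'
AWS_STORAGE_BUCKET_NAME = 'my-bucket'
AWS_S3_ENDPOINT_URL = 'https://ams3.digitaloceanspaces.com'

# route user uploads and collected static files to the bucket
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
STATICFILES_STORAGE = 'storages.backends.s3boto3.S3StaticStorage'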

One thing to note is that you will be hosting your data with a third party, trading higher costs for one less thing to worry about.

Minio provides object storage with an Amazon S3-compatible interface in a container format if you need to host it locally.

Conclusion

We just discussed what static files are and how to deal with them in the context of a django application. We covered three solutions, from a simple extension to full-blown object storage services, suitable for any application. Lastly, although I am using Nginx in promozilla, I would recommend whitenoise for a starting project, moving up in complexity only if needed.

How to add privacy friendly analytics to your Django website

What are web analytics?

Why do we need cookieless analytics in a django site? We now have a website running, and we want to measure its success. Today I’ll show you how I added web analytics to promozilla, my django side project, while respecting my users’ privacy.

Web analytics provide a way to understand how our website is being used, and who the users are. They collect metrics like referrers (where the users clicked to visit the site), search engine keywords, landing pages, countries of origin, preferred languages, or device information like operating system, type of device or screen size.

The good thing is that they provide analysis on top of the metrics, so we can understand general trends and our users’ preferences to make decisions. For example, if the bounce rate is much higher on mobile than on desktop, maybe our website is not mobile friendly. Or if our second biggest country is France, localizing to French might be a good idea.

The biggest contender in this space is without doubt Google Analytics, but it comes with a hefty downside: our privacy. Since it stores cookies on our end users’ computers, it is able to track them across sessions, and it also stores information that I think is not really needed for our case.

Promozilla

This is the third post in a series about my side project, promozilla. Promozilla is a Nintendo Switch promotion tracker built using Django. In the previous post, I showed how to add monitoring to our django website. Today I will focus on the bottom left corner of its architecture: privacy-friendly cookieless analytics.

Promozilla is a django site with cookieless analytics

Cookieless analytics

Enter cookieless analytics. Cookieless analytics are perfect for a django website, because they let you track the site’s popularity without infringing on your users’ privacy and are easy to set up. How easy? It only takes a single line of code, and you do not even have to add those cookie consent banners!

Demo of data centurion analytics features
Source: Data Centurion

Data centurion?

There are many solutions in this space, like Matomo or Fathom Analytics, but I decided to settle for Data Centurion because it provides a nice free plan for our side project. The free plan comes with unlimited websites but only 1000 page views per month. The paid plans start at 2.99 euros per month, well under Matomo’s 29€ and Fathom Analytics’ 14€ plans. This allows your site to grow without having to pay a big subscription upfront. Plus, it’s a new contender in this space, and who doesn’t like an underdog?

How to add cookieless analytics to our django website?

1) Create an account

You can create a free account in Data Centurion’s web page: https://datacenturion.io/register

2) Add website

Click on https://datacenturion.io/websites/new and fill in the required information. I prefer to set the ignored IP addresses later. Don’t forget to tick the “Notifications” checkbox if you want to receive the tasty statistics. Lastly, hang on to the script at the end of the page – it will be useful later.

Data centurion new website page

3) Adding cookieless analytics to our django website – base template

We need to connect our website to our Data Centurion account, and we do that by putting that last snippet in a django html template. The script has some javascript that runs every time a page is loaded and sends the anonymized data back to the server. It is important to put this script somewhere that is loaded on every page, to ensure our django website analytics are calculated everywhere. For example, promozilla has a base template every page inherits from, to have access to the static files and the general page structure, like navbar, footer and content. I placed the snippet in the <header> element.

Django html template with data centurion tracking snippet

4) Ignore IP addresses

One last thing: we do not want to skew our analytics and artificially inflate our stats – unless your self-esteem requires that. If you are not like that, you can simply add your IP address to a blocklist so your activity will not count towards your site usage. Super easy:

  1. Just google “what is my ip address” and copy it
  2. Go to your website settings page in Data centurion
  3. Copy your address to the “blocked ips” text field and you are good to go!

Event tracking

Data Centurion also has event tracking as a feature, but unfortunately it is well hidden – there is no documentation about it. You can track specific actions on your site, like new accounts, sales, etc., and then do analytics on top of them. You can find more information in this medium post.

Conclusion

Web analytics are an important tool to track our django website usage and measure its success. Cookieless analytics provide a way to achieve that without infringing on our users’ privacy. Lastly, they are very easy to add to our site, and some providers, like Data Centurion, offer free plans, so there’s really no excuse not to add them.

Monitoring your django site: how to and first steps

This is the second post in my Promozilla series. Today I will discuss how I added monitoring to Promozilla, a Nintendo Switch promotion tracking website built with django. In the first post, I described the general architecture.

Why?

Monitoring our django site addresses a very simple concern: if we want our django website to be used by the world, it is nice for it to be running in the first place.

Monitoring enables us to do that, and preferably in a proactive way. Of course we can ssh in every day and read the logs to check for any issues, but that’s far from perfect.

First, we are not alerted if a problem arises, which means that the site can be down for hours or days before we realise it. Second, reading raw logs is not very enlightening, since they may contain too much noise or be missing some information.

So there has to be a better way! And there is!

It is fundamental to know the status of our applications. Today I’ll discuss the purple section of the diagram: monitoring.

How?

Django’s ecosystem makes it very easy to monitor our website. With some dependencies installed and a little bit of configuration, we can be monitoring our django website in no time.

I designed promozilla’s monitoring to solve both of those issues. Bugsnag alerts me via email if any issue arises, and grafana/prometheus give me a global picture of the different components and how they are evolving over time (any degradation while I was away?).

Lastly, take into consideration that if you host your monitoring stack in the same place as the rest of the infrastructure and both go down, you are out of luck.

The components

Error monitoring: Bugsnag for django

Bugsnag, at its core, is a dashboard for exceptions. One problem in my previous projects was that the only way for me to know if the sites were up was to either a) visit them, b) have someone complain, or c) read the logs. That is no way to sleep nicely at night. Fortunately, bugsnag is a good peace-of-mind creator.

It provides a very simple middleware that can be integrated with your django project. Every time an unhandled exception occurs, you receive an email with its stack trace and some very nice details. This can be tuned, of course, but the value it brings out of the box with its free plan is tremendous. One strong point: since it is hosted outside your server, even if your site is unreachable you’ll still be able to see how it went down.

It has some pretty good documentation on how to start using bugsnag with django. I seriously recommend it.
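To give an idea, the integration boils down to a settings entry and a middleware – a sketch based on the bugsnag-django docs, with placeholder values:

# settings.py
BUGSNAG = {
    'api_key': 'your-bugsnag-api-key',
    'project_root': '/app',
}

MIDDLEWARE = [
    # bugsnag's middleware goes first so it sees every unhandled exception
    'bugsnag.django.middleware.BugsnagMiddleware',
    # ... the rest of your middleware ...
]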

Dashboards: Grafana

Grafana is a powerful monitoring tool. It offers tracing, logging and dashboarding functionalities. To keep things simple, I decided to use just grafana’s dashboards with django.

The dashboards grafana provides are ideal to understand not only business metrics (most popular pages on the site/ who is referring us), but also application metrics (number of errors, database connections, average response time, etc).

Below I’m showing the dashboard I built for promozilla. The first screenshot contains business metrics: visitors over time, number of new accounts and referrers. The second screenshot has service-quality metrics: response time, error rates and requests served.

The dashboard can contain details about our django website popularity: popular pages, page views and referrers
Grafana dashboards are also very useful for stability and service monitoring of our django site: latency, requests and errors.

Here I am also showing metrics retrieved from traefik, my reverse proxy, which has Prometheus support out of the box.

If you want to have some inspiration, Grafana has a dashboard and widget showcase page and of course, documentation.

The important thing is: Grafana is not a data storage solution. We need a service responsible for collecting and storing our system metrics. For that, I added Prometheus into the mix.

Prometheus with django out of the box

Prometheus is the perfect storage solution to integrate with grafana. At its core, it is a time-series database designed for storing and querying metrics.

The way it works is tremendously simple: services like django expose an endpoint that Prometheus periodically visits to collect the data. This is called a pull model, where the collector visits the application to read the metrics, as opposed to a push model, where the applications send the metrics to the database. This makes everything simpler (e.g. prometheus can be offline and the applications are not affected).
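To make the pull model concrete, here is a minimal, self-contained sketch using the official prometheus_client library (independent of django-prometheus): the application only updates counters in memory and exposes an endpoint; Prometheus does the visiting.

# pull_model_demo.py
import random
import time

from prometheus_client import Counter, start_http_server

REQUESTS = Counter('demo_requests_total', 'Requests handled by the demo app')

if __name__ == '__main__':
    start_http_server(8000)   # exposes /metrics; Prometheus scrapes it on its own schedule
    while True:
        REQUESTS.inc()        # the app never pushes anything anywhere
        time.sleep(random.random())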

Fortunately for us, django has an extension that exposes a boatload of metrics – but if you are curious, I created a tutorial on how to Create a custom django prometheus metric.

Also, the reverse proxy I used, traefik, has Prometheus support out of the box, which enables some of the plots in the previous screenshots. This is the great thing about prometheus: because it is so ubiquitous, you do not need to reinvent the wheel to add monitoring to your applications.

Lastly, Prometheus has a particular data model and query syntax compared to more traditional query languages like SQL. It has some particular concepts, like metric types (counter, gauge, histogram and summary) and dimensions, that are worth getting familiar with before delving right in.

Caution: in my experience, Prometheus is quite heavy on the RAM side – be careful if you are running your website in the same machine.


Django monitoring: conclusion

I think my django monitoring approach is very solid: it has alerting for when something goes wrong, with bugsnag, but also provides a bird’s-eye view of recent events and application changes with dashboards from grafana and prometheus. I hope this was useful, and I welcome any feedback!

How to architect a django website for the real world?

I found that most resources online are focused on building django websites locally, or for simple use cases not tailored to the open world. So I decided to share the architecture of a recent project of mine, promozilla. I believe it is a good example of flexibility in system interactions without compromising ease of use.

What is promozilla.xyz?

Promozilla is a promotion tracker built using a framework I love, Django. It tracks promotions on Nintendo Switch games, consoles and accessories on the Portuguese market. In a simple way: first, it scrapes the Portuguese stores every night, then it stores the game prices, and lastly it sends an email to the Switch owners that have a game on their wish list when it is on promotion.

I believe that this personal project ended up with an interesting architecture, suitable for a production-ready django system, and I would love to share it with you.


Django architecture? What?

First of all, what is a system architecture? For me, the architecture of a solution (in this case a promotion tracking website built using Django) consists of a description of its components, how they interact, and most importantly, why they were chosen.

Why this architecture for a django site?

First of all, this is a side-project, so I must find the technology interesting and/or a good learning opportunity (that’s the reason I chose grafana and prometheus, for example).

Then, its components should be like legos, meaning:

  • Adding or removing components does not break unrelated stuff
  • It is easy to deploy (they play well with docker containers, for example)
  • They play nicely together (django and postgres rather than django and mongodb, for example). This means that I can spend my time adding new features and not configuring stuff
  • They are easy to test and run locally
  • They are fun to work with. This is a side project after all!

How to implement this in practice? Django loves docker!

How did I do it? Easy! Docker and docker-compose! Docker provides the container technology and docker-compose the orchestration. For those that do not know what that means I recommend watching this video from Fireship.

I ended up with two docker-compose repositories, so two versions of this architecture: one for local development and another for the production environment. The local development repo builds the images from the development code, but the production one pulls the images from the gitlab image registry during the CICD cycle.

I really like docker-compose for several reasons. First, it is very easy to add or remove services; second, the service configurations are kept separate from their secrets, and it is nice to track everything with git. Lastly, it is super simple to back up production data, since I just need to copy the services’ volumes and save them somewhere safe.

The architecture

This architecture has several components, from the ones the visitor interacts with directly, to the system monitoring, web analytics, hosting and DNS, code repository, and CICD flows. Don’t worry, they will all be covered.

In this post, I will only describe the components in the red rectangle, since those are the ones the visitor interacts with directly. I will describe the other components in separate posts, because this one was getting too large. Let’s get started!

The components

The star component: the django website

This is the breadwinner of the system and the reason you clicked this article. This is the main piece of promozilla: it renders the pages the visitor clicks and handles persisting and retrieving user information (registration, login, product wish lists, etc.) and, of course, game prices. It runs with gunicorn to handle more than one visitor using the site at the same time.

You can of course visit it here: https://promozilla.xyz/

It is a fairly simple application technically, and it uses Materialize as the responsive frontend css framework (to run away from bootstrap). To make development easier it has some extensions installed, like django-allauth, django-filter or django-materializecss-form. Lastly, it uses some other extensions to connect with the rest of the stack, like django-prometheus and celery.

It uses sendgrid to send the registration emails (more on that below), PostgreSQL as the database, and shares disk with nginx for it to serve the images and css.

Django celery worker

This application has some long-running tasks that do not make sense to run in the main application. Why? They would take resources away from the main responsibility of the system: displaying promotions. They would also be harder to scale separately, and there are even technical limitations to that (in the case of celery-beat, for example). Some examples of those long-running tasks are:

  • Notifying the users if a game on their wish list has a new promotion
  • Storing the scraped prices in the database
  • Triggering the nightly runs for scraping and email notification
  • Retrieving the product thumbnails from the store page.

So those tasks are perfect for a system built for long-running asynchronous tasks that supports concurrency and distribution – in another word, celery!

This worker is built using the celery integration with django. I chose it because these tasks share many of the components of the main app. This makes it much easier to share the data models, the application shared settings (like email and database connections, for example) and in general leads to less code duplication, which is always a nice plus.

To send and receive work, it communicates with the rest of the workers using RabbitMQ (more on that below). Lastly, some tasks must run every day (like the scraping and notifications), so it uses celery-beat, celery’s message scheduler, as a cron job to trigger the messages.
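For flavour, here is a sketch of what this looks like – the task and schedule names are illustrative, not promozilla’s actual code:

# tasks.py
from celery import shared_task

@shared_task
def notify_wishlist_users(game_id):
    """Email every user with this game on their wish list about the promotion."""
    ...

# settings.py – celery-beat triggering the nightly run like a cron job
from celery.schedules import crontab

CELERY_BEAT_SCHEDULE = {
    'nightly-scrape': {
        'task': 'tasks.trigger_nightly_scrape',   # hypothetical task name
        'schedule': crontab(hour=3, minute=0),    # every night at 03:00
    },
}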

It uses sendgrid to send the registration emails (again, more on that below), PostgreSQL as the database, and shares storage with the django application so that it can store the product thumbnails.

Store scraper celery worker

This is the heart of the operation: it contains the scraping logic to get listings of Nintendo Switch products in online Portuguese stores. This code was initially in the django worker app, but I decided to split it out because it started having its own needs – for example, scraping libraries and proxy logic – that could have side effects on the main app (e.g. dependency versions, bloating, etc.).

The flow is very simple: the scraper receives a RabbitMQ message through celery with a Store and Product (e.g. Fnac/Games) to scrape. It visits the corresponding site, does its magic, and finishes by sending a message back with a list of the products, their prices and promotion statuses. This message is consumed by the django celery worker, which stores its content in the database.

Because of its stealth technology, the scraper is also responsible for downloading the product thumbnails and storing them so Nginx can later serve them.

An email provider: Sendgrid

Like most websites, email is the main method promozilla uses to communicate with its users. Email is used in the registration and profile management flows (e.g. password resets), for example. One could naively use a personal Gmail account or host one’s own email server, but that comes with management headaches and a high risk of the emails being classified as spam. So I decided to use SendGrid here, connected over SMTP. SendGrid has a free plan good enough for a side project of this size.
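The django side is just the standard SMTP backend pointed at SendGrid’s relay – a sketch based on SendGrid’s docs, where the username is literally 'apikey':

# settings.py
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_HOST = 'smtp.sendgrid.net'
EMAIL_PORT = 587
EMAIL_USE_TLS = True
EMAIL_HOST_USER = 'apikey'
EMAIL_HOST_PASSWORD = 'your-sendgrid-api-key'   # read from the environment in practice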

A scraping proxy: OpenVPN proxy

Since we are web-scraping, it is certain our requests will eventually be blocked. To circumvent this, I routed the requests through a proxy. I used a Privoxy-OpenVPN docker image that the scraper worker routes its requests through. This is fairly straightforward to set up; one just needs a proxy to use (there are quite a lot online). Typically the free proxies are blocked very quickly, so I recommend going the extra mile and paying for a good service.
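Inside the scraper, routing through the proxy is a one-liner with requests – a sketch where 'proxy' is a hypothetical docker-compose service name and 8118 is Privoxy’s default port:

import requests

PROXIES = {
    'http': 'http://proxy:8118',
    'https': 'http://proxy:8118',
}

# every scrape request leaves through the Privoxy/OpenVPN container
response = requests.get('https://store.example.pt/games', proxies=PROXIES, timeout=30)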

Infrastructure

The reverse proxy: Traefik proxy

Traefik is a reverse proxy built with micro-services in mind, with lots of features designed for a container environment. I think it is the best reverse proxy for django for three reasons. First, it is very easy to configure: it takes a single line of configuration to expose or hide services, or to force incoming connections to use HTTPS. Also, it is configurable from the project’s docker-compose file, meaning git tracking and one less file. But the biggest reason I chose it is its automatic SSL certificate generation capabilities.

However, there are other alternatives for reverse proxies that I would like to present:

  • nginx, a reverse proxy that is also capable of serving static files, but lacks automatic SSL certificate generation out of the box
  • caddy, a reverse proxy that can serve static files AND automatically generate SSL certificates, but is not configurable from docker

The biggest drawback of using traefik with django is that you need to rely on another static file provider, but I will get to that later.

The database: PostgreSQL

The relational database is the unsung hero of any architecture, because it stores its most precious resource: information. Django’s ORM was built with PostgreSQL in mind, so it was a clear choice for the database. There are other options, like Sqlite or MySQL, but I do not think they are worth the hassle. The deciding factor for me is that Django’s text search takes advantage of Postgres’s features, so I do not need to reinvent the wheel for something as boring as text search.

A piece of advice, however: some features are not enabled in postgres by default (like the TrigramExtension).
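Enabling it is a one-line migration operation – a minimal sketch with hypothetical app and migration names:

# myapp/migrations/0002_enable_trigram.py
from django.contrib.postgres.operations import TrigramExtension
from django.db import migrations


class Migration(migrations.Migration):

    dependencies = [
        ('myapp', '0001_initial'),
    ]

    operations = [
        TrigramExtension(),   # enables postgres's pg_trgm extension
    ]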

The static file server: Nginx

Django or Traefik are not suitable for serving static files, so I needed a solution. What are static files in the first place? Static files are files that are not server-generated, like images or css. Because of their nature, we can optimize their delivery with caching or by serving them from a server close to the website visitor. For this, I chose nginx, since I had used it previously and it is quite easy to set up. I present three alternatives for serving static files in another post I wrote.

The message broker: RabbitMQ

The workers need to exchange work, so a broker is needed. RabbitMQ provides a common intermediary for them to communicate. It is perfect for this job because it is built with resiliency and fault tolerance in mind. For example, if one of the workers goes down, the job does not disappear and is sent to another. And it is very easy to scale the workers, because new ones can start consuming the workload right away. Lastly, it is very easy to set up with docker, and celery supports it out of the box, meaning I do not need to reinvent the wheel.
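On the django side, celery only needs to know where the broker lives – a sketch with placeholder credentials, where 'rabbitmq' is the docker-compose service name:

# settings.py
CELERY_BROKER_URL = 'amqp://user:password@rabbitmq:5672//'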

Gitlab repos

Right now I’m still cleaning up the code, but I’ll share the django, scraper and docker-compose repositories. Stay tuned!


Conclusion

That was it, thank you for your attention! I spent some time discussing the “code-related” components of a promotion tracking website and I hope this was useful.

In the next posts I will present some other very important pieces of my website. While the visitor does not interact with them, they make my life easier and the project more interesting.

Django is a very flexible framework that presents us with many options, so I showed you my way of doing things. Feel free to share yours!

The four values of a participatory decision-making process

In the first post of the series, we talked about why having a participatory decision-making process is so important, and the value it can bring. However, it can be quite difficult to define what that actually is. Fortunately, there is a set of four values that must be present in each discussion or meeting, whatever form the decision-making takes. They are:

The four values of a participatory decision-making process are

  1. Full participation
  2. Mutual understanding
  3. Inclusive solution
  4. Shared responsibility

An in-depth analysis of the four values:

First value: Full participation

  • Participants are comfortable sharing uncomfortable ideas and first-draft ideas
  • Participants encourage each other to think like that

The facilitator’s role in achieving full participation

  • Protect against injunctions against thinking in public (e.g. “Can we go back to the topic?”, “We are diverging”, “Didn’t we already discuss this?”)
  • Build a respectful and supportive atmosphere
  • Protect against self-censorship

Second value: Mutual understanding

  • The participants understand and accept the legitimacy of each other’s needs and goals
  • The participants think from each other’s points of view
  • Dialogue is more important than persuasion
  • The participants take time to understand everyone’s perspectives

The facilitator’s role in achieving mutual understanding

  • Prevent the “I really can’t focus on what you are saying until I feel you understand my point of view” mentality
  • Help everyone realize the value of thinking from each other’s point of view
  • Always be impartial and honor the points of view of everyone involved. This way, every member feels that someone understands them

Third value: Inclusive solutions

  • Solutions emerge from the integration of everyone’s perspectives, needs and goals
  • Everyone has a piece of truth
  • The solutions are not compromises, as they work for everyone involved
  • They might require the discovery of a new option
  • Innovation and sustainability of a solution are more important than the decision being expedient

The facilitator’s role in achieving inclusive solutions

  • Help the group find innovative ideas that result from using everyone’s point of view
  • Help the group engage in divergent thinking
  • And then build a shared framework of understanding in the groan zone
  • To, at last, converge with sound decisions

Fourth value: Shared responsibility

  • Everyone is an owner of the outcome
  • Everyone is responsible for running the meeting: setting the goals, the agenda, the priorities and arriving at conclusions
  • Members must be able to implement the proposals that they endorse
  • The problem: the group relies on authority. This makes the leaders “get on with it” and do the work themselves

The facilitator’s role in achieving shared responsibility

The facilitator helps the group build assertiveness, collaboration and the ownership of the decision process and outcomes.


The importance of a participatory decision-making process

I have been reading “The Facilitator’s Guide to Participatory Decision-Making” by Sam Kaner. The objective of this series is to write down some notes for myself, and to provide content that others can find to assess whether the book is worth buying (so far, the answer is a resounding yes!).


Facilitator’s Guide to Participatory Decision-Making

Sam Kaner

Well, what is this book about?

Decision making is everywhere, from small things like dividing tasks for an afternoon project to large multi-month enterprises with several stakeholders and many workers involved. Therefore, making sound decisions is super important: it leads to better products and less time wasted. Especially, it leads to better morale, as everyone feels that their needs are being heard.

Participatory decision-making

A participatory decision-making process means that everyone touched by a decision is heard and participates in it. They suggest root causes, share their concerns, suggest solutions and implement the chosen one. Not only that, but they participate in the decision process itself: running meetings, preparing the agenda, etc.

Why is this so important? Well,

If people do not participate in the decision-making process, that decision will fail with misunderstood ideas and a half-hearted implementation

The diamond of participatory decision-making

Diamond of Participatory Decision-Making. Developed by Sam Kaner with Lenny Lind, Catherine Toldi, Sarah Fisk and Duane Berger

This diagram summarizes the dynamics of group thinking. First, the group discusses a new topic as “business as usual”. The participants stay in their comfort zone and make safe suggestions. Sometimes this is enough for a simple problem, and the meeting ends there. Many times it is not.


Then, the group moves into divergent thinking. It starts generating alternatives and exploring different points of view. It is important that its members feel safe to share novel ideas, without fear of judgement.

This will lead to the groan zone, where the most discomfort and the most heated moments exist. The group processes all the ideas created in the divergent zone to start building a shared framework of understanding. What is that? It is a “state” where the group is aware of each individual’s concerns, points of view and suggestions. Everyone shares the same level of understanding of the problem.


When this happens the group can start to converge. It summarizes key points, judges the ideas and evaluates alternatives. Hopefully, this leads to a decision without any compromise, where each affected party has its problems addressed.

Divergent thinking

  • Generating alternatives
  • Free flowing open discussion
  • Gathering different points of view
  • No judgement

Groan zone

  • Understanding foreign and complex ideas
  • Build a shared framework of understanding
  • The confusion moment

Convergent thinking

  • Evaluating alternatives
  • Summarizing key points
  • Sorting ideas into categories
  • Exercising judgement

Last but not least, there are four values underlying this whole process. They are fundamental in ensuring that the participatory part of the decision-making process happens. They are:

  • Full participation
  • Mutual understanding
  • Inclusive solution
  • Shared responsibility

There is an entire post on this series dedicated to dissecting them.

The facilitator’s role in participatory decision-making

Well, that explains the second part of “The Facilitator’s Guide to Participatory Decision-Making“, what about the facilitator? Who is that guy?

In short:

The facilitator supports everyone at doing their best thinking

The facilitator is a servant leader that ensures everyone is heard and feels safe to share their opinions. Lastly, the facilitator guides the group through the diamond of participatory decision making.

There are some “smells” that prevent traditional groups from reaching perfect solutions:

  • Fixed positions
  • Win/lose mentality
  • Self-censorship
  • Reliance on authority

It is the facilitator’s role to prevent them from creeping up and undermining the meetings. These smells are explained in a post dedicated to the values of participatory decision making.

To do their best job, the facilitator must have:

  • Content neutrality – they do not take a position on any side of the discussion.
  • No stake in the outcome – they do not benefit if a certain decision is made.
  • No advocacy for particular processes – the group is responsible for choosing how it decides things.

In short, the facilitator must be independent and act from outside any of the group’s individual interests.

During the discussion, the facilitator:

  • Builds and sustains a supportive atmosphere
  • Stays out of the content and respects the process
  • Teaches the group new thinking skills

Create a custom django prometheus metric

Today I’ll show you how to create a custom prometheus metric for your django application. Prometheus is an awesome tool to monitor our stack, especially with its grafana integration and its cool dashboards.

The example metric comes from a recent use case: I wanted to measure my website visitors’ favourite pages, and where those visitors came from.

I wanted to achieve this without tracking, cookies, paid services or having to use Google Analytics. Since I was already using a Prometheus/ Grafana suite to have visibility of my platform health, I decided to use it for this goal as well.

This was the end result, on grafana:

A custom django prometheus metric on grafana

First step: having django-prometheus up and running!

Just follow the instructions on the project readme; they boil down to:

pip install django-prometheus

Then, on settings.py

INSTALLED_APPS = [
   ...
   'django_prometheus',
   ...
]

MIDDLEWARE = [
    'django_prometheus.middleware.PrometheusBeforeMiddleware',
    # All your other middlewares go here, including the default
    # middlewares like SessionMiddleware, CommonMiddleware,
    # CsrfViewMiddleware, SecurityMiddleware, etc.
    'django_prometheus.middleware.PrometheusAfterMiddleware',
]

And lastly, on your urls.py:

urlpatterns = [
    ...
    url('', include('django_prometheus.urls')),
]

Also, pay attention if you are deploying your code using gunicorn or similar (and you should be, if you are deploying to a production environment!). You need to declare multiple ports, since the workers could otherwise block each other while trying to serve the Prometheus scrape request. This is also very well explained in the django-prometheus documentation.

PROMETHEUS_METRICS_EXPORT_PORT_RANGE = range(8001, 8050)

Second step: adding our custom django Prometheus metric

This is the actual first step in creating our custom prometheus metric in our django app: registering the new metric.

Now it might be a good time to refer to the official Prometheus documentation about metrics. After a good read, we can register it:

Our custom metric will count the number of requests by view, path and referer.

The general idea is to first create a new Metrics class that inherits from django_prometheus.middleware.Metrics. On it, we will register our actual metric:

  1. we want it to be a Counter, because it will be increasing over time.
  2. we will unoriginally name it django_http_requests_total_by_view_path_referer,
  3. and add a description that will be useful for documentation purposes.
  4. lastly, our metric will have three labels: view, path and referer so that we can group and filter by them.

Last but not least, do not forget to call the parent method so that we register the remaining metrics.

from django_prometheus.conf import NAMESPACE
from django_prometheus.middleware import Metrics
from prometheus_client import Counter

class CustomMetrics(Metrics):
    def register(self):
        self.requests_total_by_view_path_referer = self.register_metric(
            Counter,
            "django_http_requests_total_by_view_path_referer",
            "Count of requests by view, path, referer.",
            ["view", "path", "referer"],
            namespace=NAMESPACE,
        )
        return super().register()

Third step: creating our custom PrometheusMiddleware

Now that we have our shiny metric, we need to measure it. To do so, we will create two classes that inherit from PrometheusBeforeMiddleware and PrometheusAfterMiddleware – they will contain our actual metric collection logic.

PrometheusAfterMiddleware has a bunch of methods where we can add our logic. These methods are called at different moments of the request/response lifecycle, and so receive different parameters:

  • For example, process_exception() receives the request and exception objects,
  • process_response() receives the request and response objects,
  • and the process_view() method receives the request and view-related objects.

Take a look at the existing django-prometheus code to figure out the best place to put your metric, keeping it close to similar metrics. For this case, process_response() seemed like a good candidate, because the view name has been resolved by then.

from django_prometheus.middleware import Metrics, PrometheusBeforeMiddleware, PrometheusAfterMiddleware

...

class AppMetricsBeforeMiddleware(PrometheusBeforeMiddleware):
    metrics_cls = CustomMetrics


class AppMetricsAfterMiddleware(PrometheusAfterMiddleware):
    metrics_cls = CustomMetrics

    def _get_referer_name(self, request):
        referer_name = "<unnamed referer>"
        if hasattr(request, "META"):
            if request.META is not None:
                if request.META.get("HTTP_REFERER") is not None:
                    referer_name = request.META.get("HTTP_REFERER")
        return referer_name

    def process_response(self, request, response):
        self.label_metric(
            self.metrics.requests_total_by_view_path_referer,
            request,
            view=self._get_view_name(request),
            path=request.path,
            referer=self._get_referer_name(request),
        ).inc()
        return super().process_response(request, response)

The actual implementation is very simple:

  1. Label the metric with the view name, the path, and referer values:
    1. To get the view we use the django-prometheus built-in _get_view_name function,
    2. To get the referer we adapt the aforementioned function’s logic to get the referer field from the HTTP request header dict, and lastly,
    3. We get the path from the django request object.
  2. We finally increment the resulting metric
  3. Call the parent method to get all the other metrics.

Last step: plugging everything together

This is the easiest step! We just need to replace the django-prometheus middleware with our custom middleware in the django settings.

Assuming your custom Middleware is in DjangoSite/MyApp/custom_metrics.py:

MIDDLEWARE = [
    'MyApp.custom_metrics.AppMetricsBeforeMiddleware',
    # .. all other middleware
    'MyApp.custom_metrics.AppMetricsAfterMiddleware'
]

If you go to your /metrics endpoint you should see it:

Custom prometheus metric exported from django

Then it can be scraped by prometheus, so we can do all the cool things with grafana:

A custom django prometheus metric on grafana

Wrapping up

Although it has some tricky parts, creating a custom Prometheus metric for your Django application is not very difficult.

For my web analytics use case, it would be trivial to also collect the user-agent field to know how my visitors are reading my site, whether they are registered users or not, etc.
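For example, a user-agent label could be collected following the same pattern as _get_referer_name above – a hypothetical sketch, not tested code:

# an extra helper on AppMetricsAfterMiddleware, mirroring _get_referer_name
def _get_user_agent(self, request):
    if hasattr(request, 'META') and request.META is not None:
        return request.META.get('HTTP_USER_AGENT', '<no user agent>')
    return '<no user agent>'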


A list of the top 5 dokuwiki plugins for 2021

Dokuwiki is a very simple wiki platform that you can self-host. It provides just the right balance between features and complexity. With an ecosystem of plugins, it allows you to customize your wiki to your taste. I’ve searched this ecosystem to personalize mine to the extent where I can use it without battling its rigid “wiki nature”.

Today I want to share some of the plugins that I have used on my personal wiki.


Markdown

This was the first plugin that I installed – it enables formatting text using markdown rather than the wiki syntax, which makes for a much more enjoyable writing experience in general. Note that you can use the two interchangeably, so no dokuwiki formatting feature is lost.

Add new page

https://www.dokuwiki.org/plugin:addnewpage

The default way of creating a new page in DokuWiki is to create a link to it from an already existing page – while this ensures that there are no orphan pages, it is cumbersome and not ergonomic. This plugin adds a form that lets you quickly create a new page.

The risk of creating an orphan page is mitigated with the IndexMenu.

The end result
The dokuwiki code in the document

IndexMenu

https://www.dokuwiki.org/plugin:indexmenu

The third plugin in this collection creates a list of all the pages under a namespace – with many options to customize the filtering and display.

I use it on my main page to have a listing of all the namespaces and pages on my wiki. This makes it super easy to find what I am looking for, and ensures no page goes missing. I also list my most used namespaces (my work notes and programming snippets) there, so I can quickly jump to the info I’m looking for.

There is an interactive option that is useful for large listings, like this one of the entire site
It is simple to dynamically list all the pages under a namespace

Code used to generate the previous lists:

## Tech
// index everything under the tech namespace
{{indexmenu>:tech}}


## Site tree
// index everything on the root namespace using the javascript version
{{indexmenu>:|js}}

* note that I am using markdown for the headings, not wiki syntax

Move plugin

https://www.dokuwiki.org/plugin:move

Moving your pages manually is super tedious, since you have to update every link yourself. The move plugin painlessly moves and renames pages and namespaces – it then updates all the needed backlinks.
A last note: I’ve had some problems using this plugin on my phone with responsive templates; it only works with the wiki in desktop mode.

Tag

https://www.dokuwiki.org/plugin:tag

The last plugin I want to share breaks the traditional hierarchy of parent and child pages. It lets you assign category tags to wiki pages – these can then be listed, so you can find all related pages.

A very interesting use I found was to create a “pinned” tag that I then filter by on my start page. This way I have my most used pages in one place.

A list of all the posts with the tag pinned
Pinned pages
// shortcut
{{topic>pinned}}
Adding the tag pinned relates this post to the others with the same tag

That was it! Thank you for reading, I hope you enjoyed this list!