
Uptime monitor in Elixir & Phoenix: Data gathering

After routing and controllers, let's take a look at data gathering in an Elixir/Phoenix application. In the fifth part of our website uptime monitor tutorial, you'll learn about the data gathering options available in the Phoenix framework, how your application can benefit from Quantum, and how to test it all.

Welcome to the fifth part of the article series in which we are creating an uptime monitor in Elixir. The previous article concentrated on routing and controllers in Phoenix. Today we are going to build the data gathering functionality for our application. As Elixir is built for such tasks, this should be relatively easy.

New to the Elixir/Phoenix uptime monitor series? Start with the project setup!

Table of contents

  1. The task
  2. Data gathering options in an Elixir/Phoenix application
  3. Adding Quantum to an Elixir Phoenix application
  4. Actual data gathering in Elixir/Phoenix
  5. Testing
  6. A word of conclusion

The task

Our mission today is to create a scheduler that will periodically send requests to the web pages stored in our database and save the results (availability and response time) back to the database.

Data gathering options in an Elixir/Phoenix application

As in the previous articles, we are first going to discuss the possible options for solving our problem. We've got two main ones:

  • to build a scheduler by ourselves using Elixir’s utilities
  • or to use Quantum, which is a cron-like job scheduler.

Our own solution seems fairly easy to implement. We could use the GenServer behaviour to write a process that re-schedules itself once in a while, spawning a task to handle the requests. However, we would have to implement the whole module ourselves, and that takes time. Also, personally, I just don't feel like rewriting things that have already been written by someone else, as long as I understand how they work.
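
For illustration only, here is a minimal sketch of that hand-rolled approach. The module name, the one-minute interval, and the call to the analyze_websites/0 function we will write later in this article are all assumptions made for the example:

defmodule EMeter.NaiveScheduler do
  # A hypothetical self-rescheduling GenServer; a sketch, not the approach
  # we will actually use in this project.
  use GenServer

  @interval :timer.minutes(1)

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  @impl true
  def init(state) do
    schedule_tick()
    {:ok, state}
  end

  @impl true
  def handle_info(:tick, state) do
    # Run the work in a separate task so a slow run does not block the loop.
    Task.start(fn -> EMeter.Analytics.analyze_websites() end)
    schedule_tick()
    {:noreply, state}
  end

  defp schedule_tick(), do: Process.send_after(self(), :tick, @interval)
end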

This is why we are going to use Quantum, a cron-like job scheduler for Elixir. Under the hood, it simply runs the given functions as separate processes at the scheduled times, and the schedule is set in its configuration.

Adding Quantum to an Elixir Phoenix application

To add Quantum to our project, we have to add it to the dependencies. Dependencies are just Elixir/Phoenix libraries, external code that your application will use. They come in handy when you need some functionality but don't want to code it yourself. We call them dependencies because the code we are writing depends on them to work properly.

Using a dependency is usually a good thing, because someone spent a lot of time creating it and usually knows the topic better than we do. To add a dependency, navigate to the /apps/e_meter/mix.exs file and find the deps/0 function inside the EMeter.MixProject module. Then add {:quantum, "~> 3.0"} to our dependencies.

defmodule EMeter.MixProject do
  use Mix.Project
  ...
  defp deps do
    [
      {:bcrypt_elixir, "~> 2.0"},
      {:phoenix_pubsub, "~> 2.0"},
      {:quantum, "~> 3.0"},
      {:postgres, in_umbrella: true}
    ]
  end
  ...
end

The next step is to fetch the actual dependencies. To do so, run the mix deps.get command in the main project folder. You should see Quantum being installed.

Then we have to create a scheduler for our application. Navigate to the /apps/e_meter/lib/e_meter folder and create a scheduler.ex file there. In the file, define an EMeter.Scheduler module which uses Quantum.

defmodule EMeter.Scheduler do
  use Quantum, otp_app: :e_meter
end

The next step in the Quantum configuration is to add the created scheduler to the application's supervision tree. We achieve this by adding EMeter.Scheduler to the application children in the EMeter.Application's start/2 function.

defmodule EMeter.Application do
  # See https://hexdocs.pm/elixir/Application.html
  # for more information on OTP Applications
  @moduledoc false

  use Application

  def start(_type, _args) do
    children = [
      # Start the PubSub system
      {Phoenix.PubSub, name: EMeter.PubSub},
      # Start a worker by calling: EMeter.Worker.start_link(arg)
      # {EMeter.Worker, arg}
      EMeter.Scheduler
    ]

    Supervisor.start_link(children, strategy: :one_for_one, name: EMeter.Supervisor)
  end
end

The last step is to add a configuration in the config.exs file.

config :e_meter, EMeter.Scheduler,
  jobs: []

And that’s it. We are ready to run cron-like jobs in our application.

Actual data gathering in Elixir/Phoenix

To gather the data, we will need another dependency to call the web pages saved in our database. We are going to use Tesla, an HTTP client library for Elixir. To make use of it, we need to add it to the dependencies, along with some other deps recommended by Tesla. Let's head again to the /apps/e_meter/mix.exs file and modify our dependencies.

defmodule EMeter.MixProject do
  use Mix.Project
  ...
  defp deps do
    [
      {:bcrypt_elixir, "~> 2.0"},
      {:phoenix_pubsub, "~> 2.0"},
      {:quantum, "~> 3.0"},
      {:tesla, "~> 1.4"},
      {:hackney, "~> 1.17"},
      {:jason, ">= 1.0.0"},
      {:postgres, in_umbrella: true}
    ]
  end
  ...
end

We added :tesla, :hackney and :jason. The first one is Tesla itself; the other two are optional, but recommended by Tesla's documentation: Hackney serves as the HTTP adapter and Jason handles JSON. The last things needed to make Tesla work are adding its config to the /config/config.exs file and running the mix deps.get command.

config :tesla, adapter: Tesla.Adapter.Hackney
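
With the dependencies fetched and configured, you can give Tesla a quick smoke test in iex -S mix (the URL is just an example; the exact response will differ):

iex> {:ok, response} = Tesla.get("https://httpbin.org/get")
iex> response.status
200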

Let's head to the /apps/e_meter/lib/e_meter folder and, if it doesn't exist yet, create an analytics.ex file. The next step is to create a module inside that file.

defmodule EMeter.Analytics do

end

In our Analytics module, we will keep all the functions related to the analytics context of our application.

First, let's create a function that will handle the websites' data gathering. We call it analyze_websites/0. We also alias EMeter.Sites and EMeter.Analytics.Measurement, and attach Postgres.Repo as a module attribute under the @repo name.

defmodule EMeter.Analytics do
  alias EMeter.Sites
  alias EMeter.Analytics.Measurement

  @repo Postgres.Repo

  def analyze_websites() do

  end
end

Next, let's fetch the websites to be measured. To do so, we create a function called fetch_all_websites/0 inside the EMeter.Sites module, chosen so that the context matches the functionality of our implementation. In this function, we simply get all the Websites from the database (the @repo attribute and the Website alias are defined in the elided part of the module).

defmodule EMeter.Sites do

  ...

  def fetch_all_websites() do
    @repo.all(Website)
  end
end

With a function serving the websites to be measured in place, we can now make use of it. Our goal is to transform the fetched websites into measurements. This is why we are going to use Enum.map/2, which runs a given function on each element of an enumerable and returns a list of the results. It seems like the best fit for us right now.

We also create a function called measure_website_performance/1, which will create a measurement for a given website and save it to the database. For each of the websites, we spawn a separate task with Task.async/1 that calls measure_website_performance/1.

The reason we run this in parallel is simply to make it faster. Instead of sending each request and saving its result synchronously, we leverage concurrency. With a greater number of websites to measure, this can put quite a load on our internet connection, but let's assume we have a server with a connection powerful enough to handle such tasks.

defmodule EMeter.Analytics do
  alias EMeter.Sites
  alias EMeter.Analytics.Measurement

  @repo Postgres.Repo

  def analyze_websites() do
    measurement_tasks =
      Sites.fetch_all_websites()
      |> Enum.map(fn website ->
        Task.async(fn -> measure_website_performance(website) end)
      end)

    Task.await_many(measurement_tasks)
  end

  defp measure_website_performance(%{url: url, id: id} = _website) do

  end
end

The final step is to make the actual measurement in measure_website_performance/1. Let's use Tesla.get/1 and call the given website URL within the :timer.tc/2 function. It evaluates a given function with the given arguments and measures the elapsed real time in microseconds. It returns a tuple {Time, Value}, where Time is the elapsed time and Value is the function's result.
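
To see the shape of that return value, here is a quick iex check (the measured time will of course vary):

iex> # returns {elapsed_microseconds, function_result}
iex> :timer.tc(&Enum.sum/1, [[1, 2, 3]])
{4, 6}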

defmodule EMeter.Analytics do

  ...

  defp measure_website_performance(%{url: url, id: id} = _website) do
    {response_time, {status, response}} = :timer.tc(&Tesla.get/1, [url])

  end
end

Now we have to shape the measurement's result so that it is compatible with our Measurement schema. To do so, we write a simple case expression deciding whether the measurement is responsive or unresponsive, and prepare the data based on that.

At the end of the function, we insert the data into the database with the bang version of insert, which fails loudly if something is wrong. If everything works, we return :ok.

defmodule EMeter.Analytics do
  ...

  defp measure_website_performance(%{url: url, id: id} = _website) do
    {response_time, {status, response}} = :timer.tc(&Tesla.get/1, [url])

    measurement_params =
      case status do
        :ok ->
          %{
            response_time: round(response_time / 1000),
            status_code: "#{response.status}",
            website_id: id
          }

        :error ->
          %{
            response_time: 999_999,
            status_code: "unknown",
            website_id: id
          }
      end

    %Measurement{}
    |> Measurement.changeset(measurement_params)
    |> @repo.insert!()

    :ok
  end
end
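
Before wiring the function into Quantum, you can try it manually in iex -S mix. Assuming three websites are already stored in the database, it should return a list of three :ok's:

iex> EMeter.Analytics.analyze_websites()
[:ok, :ok, :ok]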

One last thing to make it all work is adding the job to the Quantum config. To do so, navigate to the config.exs file and add a line responsible for launching our analyzer. The "* * * * *" cron expression runs the job every minute.

config :e_meter, EMeter.Scheduler,
  jobs: [
    {"* * * * *", {EMeter.Analytics, :analyze_websites, []}}
  ]
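
Running the analyzer every minute is quite aggressive. Quantum accepts standard cron syntax, so the schedule can be relaxed; for example, a hypothetical every-ten-minutes variant would look like this:

config :e_meter, EMeter.Scheduler,
  jobs: [
    # illustrative alternative: run every 10 minutes instead of every minute
    {"*/10 * * * *", {EMeter.Analytics, :analyze_websites, []}}
  ]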

Testing

Since the websites' response time analyzer is our business logic, it is really important to test it properly. Elixir ships with ExUnit, a unit testing framework that is easy to use and yet powerful, and every Phoenix project comes with it already set up.

To make use of it, we create a new analytics_test.exs file in the apps/e_meter/test/e_meter folder and define an EMeter.AnalyticsTest module there. We will use the EMeter.DataCase module, which injects the basic imports and aliases into our test module. We also alias the modules used in our tests, namely Analytics, Website and Measurement. Finally, we import the EMeter.AccountsFixtures module to make it easier to set up users for tests.

defmodule EMeter.AnalyticsTest do
  use EMeter.DataCase

  alias EMeter.Analytics
  alias EMeter.Sites.Website
  alias EMeter.Analytics.Measurement

  import EMeter.AccountsFixtures

  ...
end

Before performing the tests, we might need some setup data. This is where the setup and setup_all callbacks from the ExUnit.Callbacks module come into play. They allow us to create fixtures before the tests run and thus bring the system to a known state.

setup is invoked before each test, while setup_all is invoked once for the entire module. A module can have multiple setup blocks, and they run in their order of appearance within the module. If a setup returns a keyword list, a map, or a tuple in the form of {:ok, map() | keyword()}, the keyword list or map will be merged into the current test context. Returning just :ok from a setup won't change the context.
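
Here is a minimal, generic illustration of that merging behavior (the module name and values are made up for the example; our application's actual setup follows right after):

defmodule ContextExampleTest do
  use ExUnit.Case

  # The map returned by setup is merged into the test context,
  # which each test can then pattern match on.
  setup do
    %{answer: 42}
  end

  test "reads values from the context", %{answer: answer} do
    assert answer == 42
  end
end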

defmodule EMeter.AnalyticsTest do
  use EMeter.DataCase

  alias EMeter.Analytics
  alias EMeter.Sites.Website
  alias EMeter.Analytics.Measurement

  import EMeter.AccountsFixtures

  setup do
    user = user_fixture()

    %{
      user: user,
      websites: insert_user_websites(user)
    }
  end
  ...
end

In our case, we return a map with two keys: the :user and the :websites of that user. To create a user, we use the user_fixture() function from the AccountsFixtures module, which is already imported into our test module. To create the user's websites, we need a function that will set them up for us. We name it insert_user_websites/1. It simply inserts three example websites into the database and should fail loudly if any of them cannot be inserted, which is why we use a bang function.

defmodule EMeter.AnalyticsTest do
  ...

  defp insert_user_websites(%{id: user_id}) do
    for url <- [
          "https://www.google.com/",
          "https://www.facebook.com/",
          "https://www.instagram.com/"
        ] do
      params = %{
        url: url,
        user_id: user_id
      }

      %Website{}
      |> Website.changeset(params)
      |> Repo.insert!()
    end
  end
end

To group our tests, we are going to use describe blocks. Each describe has a name, and that name is used as the prefix for the tests inside it. It is also good to know that you can run only the tests from a given describe by passing the right flag, as in the example below.

mix test --only describe:"Described name"

Let's name our describe block "analyze_websites/0", after the tested function's name and arity.

defmodule EMeter.AnalyticsTest do
  ...

  describe "analyze_websites/0" do

  end
  ...
end

Finally, we are going to write the test of our functionality. To do so, we use the test macro. First, we have to write a test description that will be displayed once the test fails. It should be as informative as possible for the given test case; its main purpose is to state what should happen in the test.

defmodule EMeter.AnalyticsTest do
  ...

  describe "analyze_websites/0" do
    test "with the websites in the database it runs the analyzer and saves results",
         %{websites: websites} do

    end
  end

  ...
end

Now we can launch our analyzer. We know that there are 3 websites in the database, so we can expect 3 measurements, and analyze_websites/0 should return a list of three :ok's.

To ensure this happens, we are going to use the assert macro from the ExUnit.Assertions module, which checks that its argument is truthy. Since we know what the result should look like, we can use the == comparison operator to check whether the result meets the expectations.

We can also fetch all the Measurements from the database and check that they have the required fields. To do so, we run assertions for each of the fetched measurements with Enum.each/2, ensuring that every measurement has the required fields and a valid website id.

defmodule EMeter.AnalyticsTest do
  ...

  describe "analyze_websites/0" do
    test "with the websites in the database it runs the analyzer and saves results",
         %{websites: websites} do
      result = Analytics.analyze_websites()

      measurements = Repo.all(Measurement)
      websites_ids = Enum.map(websites, fn website -> website.id end)

      assert result == [:ok, :ok, :ok]

      Enum.each(measurements, fn measurement ->
        assert measurement.response_time != nil
        assert measurement.status_code != nil
        assert Enum.member?(websites_ids, measurement.website_id)
      end)
    end
  end

  ...
end

Let's now run the written tests with the mix test command. You should see something similar to this in your console.

mix test
==> e_meter
Compiling 2 files (.ex)
==> postgres
..

Finished in 0.02 seconds (0.00s async, 0.02s sync)
1 doctest, 1 test, 0 failures

Randomized with seed 553327
==> e_meter
....................................................

Finished in 1.3 seconds (0.00s async, 1.3s sync)
52 tests, 0 failures

Randomized with seed 553327
==> e_meter_web
.....................................................

Finished in 0.3 seconds (0.3s async, 0.00s sync)
53 tests, 0 failures

Randomized with seed 553327

A word of conclusion

In this article, we learned about data gathering via Elixir workers and about testing the business logic. Our application is taking shape. You can launch it, add some websites and gather their response times.

From now on, the articles will be more focused on the user-facing side of our application. The next one will cover preparing the data to be served.

Any kind of feedback is appreciated!
