Uptime monitor in Elixir & Phoenix: Serving the data

Elixir/Phoenix uptime monitor tutorial vol. 6. This time you'll learn how to utilize Ecto functions for data serving. Curious? Let's start!

Table of contents

  1. Elixir/Phoenix uptime monitor: Extracting data with Ecto functions
  2. Setting up the query
  3. Grouping and aggregating the results
  4. Creating the “day” match
  5. Creating other matches
  6. Setting up the endpoint
  7. Homework
  8. Elixir/Phoenix uptime monitor data serving: A word of conclusion

The previous article concentrated on the topic of data gathering in Elixir. Today, having covered some basics of parallelism, we are going to create the data-serving functions for our application. To do so, we are going to leverage Ecto's query functions.

Elixir/Phoenix uptime monitor: Extracting data with Ecto functions

Our mission today is to create the database calls that take the aggregated data out and serve it to our templates. We are going to use Ecto functions for that.

Setting up the query

First, let's navigate to the EMeter.Analytics module. In the previous article, we created a function there called analyze_websites/0, which is called periodically and creates our measurements. Now we have to fetch those measurements and group and aggregate them. It seems like quite a challenge. The query should also take the timespan and scale into consideration, as we stated in the first article on the Elixir/Phoenix project setup.

Start by creating a get_results/3 function, which takes website_id, timespan and scale as arguments. Its purpose is to return the measurements of the website with the given id within the given timespan. To do so, we have to import the Ecto.Query module, which provides a query DSL (domain-specific language). Queries in general are used to get and manipulate data from a repository. For more details, I encourage you to go at least briefly through the Ecto.Query documentation.

defmodule EMeter.Analytics do
 import Ecto.Query
 ...

 def get_results(website_id, timespan, scale) do
   ...
 end
end

With the needed import and the function definition in place, we can start building our query. First we have to specify the initial queryable. Since we are building a query expression, it should be a value that implements the Ecto.Queryable protocol, which means it can be converted into an Ecto.Query. In our case it is Measurement, which is queryable because it uses Ecto.Schema and defines a schema.

defmodule EMeter.Analytics do
 import Ecto.Query
 ...

 def get_results(website_id, timespan, scale) do
   Measurement
 end
end

Now we will use the basic where/3 macro from Ecto.Query, which takes a queryable, a binding and an expression as arguments. Its purpose is to filter the result set according to the expression. We filter the results so that they match the given website_id and their status_code is not "unknown".

defmodule EMeter.Analytics do
 import Ecto.Query
 ...

 def get_results(website_id, timespan, scale) do
   Measurement
   |> where([m], m.website_id == ^website_id)
   |> where([m], m.status_code != "unknown")
 end
end

You might notice that in where/3 the compared variables are preceded by "^". In Elixir pattern matching this character is known as the pin operator, but to understand its role here we have to dig a bit deeper into Ecto.

Ecto relies on macros, which provide a powerful DSL for us to use. In the end, this DSL is translated into an SQL query. Inside a query, "^" isn't the pin operator: it indicates interpolation. It tells Ecto to take the current value of website_id and bind it as a parameter of the generated SQL. Without it, Ecto would try to interpret website_id as part of the query expression itself, and the query would not even compile.
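
You can see the interpolation at work by inspecting the SQL that Ecto generates. A quick sketch - to_sql/2 comes with every SQL-adapter repo; here we assume the Measurement alias from our module and the Postgres.Repo module that appears in the iex sessions later in this article:

query = where(Measurement, [m], m.website_id == ^website_id)
Postgres.Repo.to_sql(:all, query)
#=> {"SELECT ... FROM \"measurements\" AS m0 WHERE (m0.\"website_id\" = $1)", [website_id]}

The value travels separately from the SQL string as the $1 parameter, so it is never spliced into the query text.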

The next task is to filter the results according to the specified timespan. To do so, we create the filter_by_timespan/2 function, which takes the query and the timespan (in days) as arguments. First we compute the date in the past where the timespan begins and bind it to past_date with the match operator. We use the module attribute @day_in_seconds, which holds the number of seconds in a single day, together with basic NaiveDateTime functions. Then another where/3 macro filters out measurements older than that date.

defmodule EMeter.Analytics do
 import Ecto.Query
 ...

 @day_in_seconds 24 * 60 * 60

 ...

 def get_results(website_id, timespan, scale) do
   Measurement
   |> where([m], m.website_id == ^website_id)
   |> where([m], m.status_code != "unknown")
   |> filter_by_timespan(timespan)
 end

 defp filter_by_timespan(query, timespan) do
   past_date =
     NaiveDateTime.utc_now()
     |> NaiveDateTime.add(-1 * timespan * @day_in_seconds, :second)

   where(query, [m], m.inserted_at > ^past_date)
 end
end

Grouping and aggregating the results

Our next task is to group and aggregate the results. Since the scale varies and the query should behave differently depending on its value, we need a new function. Name it group_by_scale/2; it takes the query and the scale. In this function, we will pattern match on the scale. We need 4 matches: “week”, “day”, “hour” and “minute”. We deliberately don't add a catch-all clause: if this function is ever called with any other scale value, we want it to raise loudly (with a FunctionClauseError).


defmodule EMeter.Analytics do
 import Ecto.Query
 ...

 defp group_by_scale(query, "week") do
   ...
 end

 defp group_by_scale(query, "day") do
   ...
 end

 defp group_by_scale(query, "hour") do
   ...
 end

 defp group_by_scale(query, "minute") do
   ...
 end
end

Creating the “day” match

We are going to create the match for a “day” first; the other ones will be analogous. We want the daily average response times from the input query.

The first step is to group the measurements into sets of daily results. To do so, we use the group_by/3 macro, which groups together rows that have the same values in the given fields. However, Ecto itself has no support for grouping rows by time periods.

This is where fragment/2 comes into play. It sends a fragment of code directly to the database, with the question marks interpreted as arguments. In the fragment, we use the Postgres function date_trunc(text, timestamp), which truncates a timestamp to the specified precision.

The best part is that the result is still a timestamp, just truncated: all measurements from the same day end up with exactly the same value, and days from different weeks or months remain distinct (unlike, say, a day-of-week label, which repeats every week). This gives us precisely the condition group_by/3 needs - identical values.
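
To get a feel for date_trunc, you can run it straight from iex - a quick check, using the same repo module that appears in the iex sessions later in this article:

iex> Postgres.Repo.query!("SELECT date_trunc('day', timestamp '2021-11-05 13:42:10')").rows
[[~N[2021-11-05 00:00:00.000000]]]

Every timestamp from November 5th truncates to the same midnight value, so all of that day's measurements fall into one group.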

defmodule EMeter.Analytics do
 import Ecto.Query
 ...

 defp group_by_scale(query, "day") do
   query
   |> group_by([m], fragment("(date_trunc('day', ?))", m.inserted_at))
 end

 ...

end

Next, we pipe the result into the select/3 macro, which specifies which fields should be selected from our repository and what transformations should be applied to them. It selects the result of the same fragment/2 and also calculates the average response time within each group using the avg/1 aggregate. The results come out as maps with two keys: timespan and avg.

defmodule EMeter.Analytics do
 import Ecto.Query
 ...

 defp group_by_scale(query, "day") do
   query
   |> group_by([m], fragment("(date_trunc('day', ?))", m.inserted_at))
   |> select([m], %{timespan: fragment("(date_trunc('day', ?))", m.inserted_at), avg: avg(m.response_time)})
 end

 ...

end

Creating other matches

Now, analogously to the “day” match, we create all the other matches. The only difference is the “minute” match: measurements are taken once per minute, so each row is already a single data point and there is nothing to truncate, group or average.

defmodule EMeter.Analytics do
 import Ecto.Query
 ...

 defp group_by_scale(query, "week") do
   query
   |> group_by([m], fragment("(date_trunc('week', ?))", m.inserted_at))
   |> select([m], %{timespan: fragment("(date_trunc('week', ?))", m.inserted_at), avg: avg(m.response_time)})
 end

 defp group_by_scale(query, "day") do
   query
   |> group_by([m], fragment("(date_trunc('day', ?))", m.inserted_at))
   |> select([m], %{timespan: fragment("(date_trunc('day', ?))", m.inserted_at), avg: avg(m.response_time)})
 end

 defp group_by_scale(query, "hour") do
   query
   |> group_by([m], fragment("(date_trunc('hour', ?))", m.inserted_at))
   |> select([m], %{timespan: fragment("(date_trunc('hour', ?))", m.inserted_at), avg: avg(m.response_time)})
 end

 defp group_by_scale(query, "minute") do
   select(query, [m], %{timespan: m.inserted_at, avg: m.response_time})
 end
end

And that’s it! The last thing is to pipe the query into Repo.all/2, which takes our results out of the repository. We can test the functionality in an iex -S mix phx.server session.
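
Here is the finished pipeline with the grouping and the repository call wired in - a sketch, assuming the module keeps its repo in a @repo module attribute the same way EMeter.Sites does below (calling all/1 directly on your repo module works just as well):

defmodule EMeter.Analytics do
 import Ecto.Query
 ...

 def get_results(website_id, timespan, scale) do
   Measurement
   |> where([m], m.website_id == ^website_id)
   |> where([m], m.status_code != "unknown")
   |> filter_by_timespan(timespan)
   |> group_by_scale(scale)
   # @repo points at the application's repo module
   |> @repo.all()
 end
end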

I’ve already added a website in the past, but you may need to add one manually for testing purposes. To do so, use the EMeter.Sites.create_website/1 function.
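
The exact attributes depend on the changeset we defined in the earlier articles, but the call looks roughly like this (user_id has to reference an existing user):

iex> EMeter.Sites.create_website(%{url: "www.google.pl", user_id: user_id})
{:ok, %EMeter.Sites.Website{...}}

Either way, you can list the existing websites to grab an id: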

iex(13)> EMeter.Sites.Website |> Postgres.Repo.all
[
  %EMeter.Sites.Website{
    __meta__: #Ecto.Schema.Metadata<:loaded, "websites">,
    id: "dbbb6a15-6bf8-4896-852d-1e9200c87f9e",
    inserted_at: ~N[2021-11-02 11:46:26],
    updated_at: ~N[2021-11-02 11:46:26],
    url: "www.google.pl",
    user: #Ecto.Association.NotLoaded<association :user is not loaded>,
    user_id: "c7e08e8e-9eee-476e-a022-c1150aa7d938"
  }
]

Now you can take the website id and call EMeter.Analytics.get_results/3. In this call, I requested the measurements from the last day with an hourly scale.

iex(18)> EMeter.Analytics.get_results(website_id, 1, "hour")
[
  %{
    avg: #Decimal<266.7272727272727273>,
    timespan: ~N[2021-11-05 10:00:00.000000]
  },
  %{
    avg: #Decimal<244.6833333333333333>,
    timespan: ~N[2021-11-05 11:00:00.000000]
  },
  %{
    avg: #Decimal<297.2333333333333333>,
    timespan: ~N[2021-11-05 12:00:00.000000]
  },
  %{
    avg: #Decimal<319.7500000000000000>,
    timespan: ~N[2021-11-05 13:00:00.000000]
  }
]

Setting up the endpoint

Once the function that prepares the data is ready, we need an endpoint where the data will be served. First, create a wrapper for our function in the EMeter.Sites module. Its purpose is to check whether the user is permitted to access the data and then call the analytics function.

The function is named get_site_analytics/4 and takes website_id, user_id, timespan and scale as parameters. First it checks whether a website with the given id belongs to the specified user. If it does, it launches the result-fetching function; if not, it returns an error tuple.

defmodule EMeter.Sites do
 ...

 def get_site_analytics(website_id, user_id, timespan, scale) do
   Website
   |> where([website], website.id == ^website_id)
   |> where([website], website.user_id == ^user_id)
   |> @repo.exists?()
   |> if do
     EMeter.Analytics.get_results(website_id, timespan, scale)
   else
     {:error, :not_found}
   end
 end

 ...
end

The next step is to modify EMeterWeb.SitesController and add a function that makes use of the previously written code. We name it get_site_analytics/2; it takes conn and params as parameters. We match on user_id from the assigns in conn and on id, timespan and scale from the params. Keep in mind that every value in params is a string, so timespan has to be converted to an integer before filter_by_timespan/2 can do date arithmetic with it.

In this function, we call EMeter.Sites.get_site_analytics/4 inside a case statement and match on the possible results. On failure, the response should contain an error flash message and redirect to the sites path. On success, it should render the show_analytics.html template with the measurements assigned to the conn.

defmodule EMeterWeb.SitesController do
 ...

 def get_site_analytics(%{assigns: %{current_user: %{id: user_id}}} = conn, %{
       "id" => website_id,
       "timespan" => timespan,
       "scale" => scale
     }) do
    # params are strings - convert timespan before it is used in date arithmetic
    timespan = String.to_integer(timespan)

    case Sites.get_site_analytics(website_id, user_id, timespan, scale) do
     {:error, :not_found} ->
       conn
       |> put_flash(:error, "Website not found")
       |> redirect(to: Routes.sites_path(conn, :index))

     measurements ->
       render(conn, "show_analytics.html", measurements: measurements)
   end
 end

 ...

end

The last step is to modify the EMeterWeb.Router module to contain a route pointing to the newly written function.

defmodule EMeterWeb.Router do
 ...

 scope "/", EMeterWeb do
   pipe_through [:browser, :require_authenticated_user]

   resources "/sites", SitesController, except: [:edit, :update]
   get "/get_site_analytics/:id", SitesController, :get_site_analytics

   ...
 end

 ...

end
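
Note that only :id is part of the route - timespan and scale arrive as query string parameters. A request for the data we fetched in iex earlier would therefore look something like this:

/get_site_analytics/dbbb6a15-6bf8-4896-852d-1e9200c87f9e?timespan=1&scale=hour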

Homework

Since we learned about tests in the previous article, it's now up to you to make use of that knowledge. There are three things to be tested: EMeter.Sites.get_site_analytics/4, EMeter.Analytics.get_results/3 and the /get_site_analytics/:id route in EMeterWeb.SitesController. Good luck!
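
To get you started, here is a sketch of one such test for the ownership check. EMeter.DataCase, user_fixture/0 and website_fixture/1 are assumptions based on the usual Phoenix test setup - swap in the fixtures you wrote in the testing article:

defmodule EMeter.SitesTest do
 use EMeter.DataCase, async: true

 alias EMeter.Sites

 test "get_site_analytics/4 returns an error for a website the user does not own" do
   # user_fixture/0 and website_fixture/1 are hypothetical helpers
   owner = user_fixture()
   intruder = user_fixture()
   website = website_fixture(owner)

   assert {:error, :not_found} =
            Sites.get_site_analytics(website.id, intruder.id, 1, "hour")
 end
end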

Elixir/Phoenix uptime monitor data serving: A word of conclusion

In this article, we learned how to aggregate and serve data. Our application is really close to its final shape. You can launch it, add some websites and gather their response times. The data is aggregated, grouped and served.

For now, the “hard” part is finished - most of our business logic is done. In the next articles, we are going to focus more on Elixir templates and views. If you have any questions or want to discuss something, let me know.

Any kind of feedback is appreciated!
