This is part 2 in a three-part blog series. View part 1 here.
We previously left off with our server able to handle static content, but that is about all. In order to store and retrieve confessions, our app needs to interact with a database. That’s where Diesel comes to our aid!
⚠ In order for Diesel to interact with a database, a database instance needs to already exist. Make sure you have access to a Postgres instance (local or cloud based, both work) before moving forward.
Housekeeping
1. We begin by installing diesel_cli, a tool that helps us manage the database. As we only use Diesel with Postgres, we use the --features flag to specify that:
cargo install diesel_cli --no-default-features --features postgres
2. In the root folder of the project, create a .env file. At the top of the file, add the DATABASE_URL property that Diesel will use to get the connection details of your Postgres instance:
DATABASE_URL=postgres://<user>:<password>@<ip>/confessions
3. In the project root folder, run diesel setup. Diesel will create a new database (confessions), as well as a set of empty migrations.
Using Migrations
With the database set up, it’s time to create the confessions table. Diesel uses a concept called migrations to track changes made to a database schema. You can think of migrations as a list of actions that you either apply to the database (up.sql) or revert (down.sql).
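The up/down idea can be sketched in plain Rust (a toy model with a hypothetical Migration type, not how Diesel works internally):

```rust
use std::collections::HashSet;

// Toy "migration": an action that can be applied to, or reverted from,
// a schema (here just a set of table names). Purely illustrative.
struct Migration {
    up: fn(&mut HashSet<String>),
    down: fn(&mut HashSet<String>),
}

fn main() {
    let create_confessions = Migration {
        up: |schema| {
            schema.insert("confessions".to_string()); // up.sql: CREATE TABLE
        },
        down: |schema| {
            schema.remove("confessions"); // down.sql: DROP TABLE
        },
    };

    let mut schema: HashSet<String> = HashSet::new();

    (create_confessions.up)(&mut schema); // like `diesel migration run`
    assert!(schema.contains("confessions"));

    (create_confessions.down)(&mut schema); // like reverting the migration
    assert!(!schema.contains("confessions"));
}
```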
1. Generate a new migration set for the confessions table by running the following command at the root of the project:
diesel migration generate confessions_table
This creates a new folder inside the migrations folder that holds the new migration set (up.sql/down.sql) for the confessions table.
2. To create the new table, cd to the new migration folder and add the following to the up.sql file:
-- Your SQL goes here
CREATE TABLE confessions (
id SERIAL PRIMARY KEY,
confession VARCHAR NOT NULL
);
3. In down.sql we specify how to revert the migration (i.e., dropping the confessions table):
-- This file should undo anything in `up.sql`
DROP TABLE confessions;
4. Apply the new migration by running the following command:
diesel migration run
💡 To revert the last migration, run diesel migration revert. To revert and immediately re-apply it (handy while iterating on a migration), run diesel migration redo.
5. cd to the src folder to find a new file called schema.rs. This file contains the table definition created by Diesel, which enables us to work with the database in a type-safe way.
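For the table above, the generated schema.rs should look roughly like this (Diesel 1.4’s table! macro; the exact output on your machine may differ slightly):

```
// src/schema.rs — generated by Diesel, don't edit by hand
table! {
    confessions (id) {
        id -> Int4,
        confession -> Varchar,
    }
}
```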
With the housekeeping behind us, we proceed to establish a connection between our Rocket instance and Diesel.
Getting Connected
The first rule of working with a database is connecting to a database. A common way to connect an application to a database is a connection pool — a data structure that maintains a set of active database connections (a pool of connections) that the application can draw from whenever it needs one.
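The idea behind a pool can be sketched with nothing but the standard library (a toy model; real pools, such as the one rocket_contrib manages for us, also handle timeouts, health checks, and concurrent waiters):

```rust
use std::collections::VecDeque;
use std::sync::Mutex;

// Stand-in for a real database connection.
struct Connection {
    id: u32,
}

// Toy pool: a set of pre-opened connections handed out and returned.
struct Pool {
    idle: Mutex<VecDeque<Connection>>,
}

impl Pool {
    fn new(size: u32) -> Self {
        let idle = (0..size).map(|id| Connection { id }).collect();
        Pool { idle: Mutex::new(idle) }
    }

    // Borrow a connection; None if the pool is exhausted.
    fn get(&self) -> Option<Connection> {
        self.idle.lock().unwrap().pop_front()
    }

    // Hand the connection back once done with it.
    fn put(&self, conn: Connection) {
        self.idle.lock().unwrap().push_back(conn);
    }
}

fn main() {
    let pool = Pool::new(2);
    let c1 = pool.get().unwrap();
    let _c2 = pool.get().unwrap();
    assert!(pool.get().is_none()); // exhausted: both connections are in use

    pool.put(c1); // return one connection
    assert_eq!(pool.get().unwrap().id, 0); // it is available again
}
```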
Rocket, with its rocket_contrib crate (a crate that adds functionality commonly used by Rocket applications), allows us to easily set up a connection pool to our database using an ORM of our choice. In our case, that’s going to be Diesel.
1. We begin by adding three dependencies to our Cargo.toml file: diesel, serde, and rocket_contrib:
[dependencies]
...
diesel = { version = "1.4.5", features = ["postgres"] }
serde = { version = "1.0.123", features = ["derive"] }
[dependencies.rocket_contrib]
git = "https://github.com/SergioBenitez/Rocket"
version = "0.5.0-dev"
default-features = false
features = ["json", "diesel_postgres_pool"]
🔭 As we only need certain features from the above crates, we specify them using the features property. In our case we only need Postgres support, so we specify that in the features list for both diesel and rocket_contrib.
2. Add the following import statements to your main.rs file:
#[macro_use]
extern crate diesel;
use diesel::prelude::*;
use diesel::pg::PgConnection;
use rocket_contrib::databases::database;
use rocket_contrib::json::Json;
use serde::{Deserialize, Serialize};
3. Next, we configure the connection settings for our database. Create a new file named Rocket.toml in the root folder of the project with the following content:
[global.databases]
confessions_db = { url = "postgres://<user>:<password>@<ip>/confessions" }
At this point you are probably thinking to yourself: “Johnny, WTF? This is exactly the same connection string we configured earlier in the .env file. Can’t we just use that environment variable and be done with it?”
Of course you can! But I’ll touch on how to do that a little bit later. For now, let’s roll with Rocket.toml.
4. Open your main.rs file and add a new tuple struct called DBPool:
#[database("confessions_db")]
pub struct DBPool(PgConnection);
The database attribute binds a previously configured database to a poolable type in our application. It accepts the name of the database to bind as a single string parameter, which must match a database key configured in Rocket.toml. The macro generates all the code needed on the decorated type to let us retrieve a connection from the database pool later on, or fail with an error.
5. Lastly, we need to attach the database to our Rocket instance. We do that using the attach method of the Rocket instance, which takes a fairing (think of it like a middleware) and attaches it to the request flow. Append the attach call to the Rocket instance as follows:
#[launch]
fn rocket() -> Rocket {
    rocket::ignite()
        .mount("/", routes![root, static_files])
        .attach(DBPool::fairing())
}
🍲 Before we proceed, I’d like to take a quick (completely optional) detour and talk about how to use the connection string from the .env file instead of duplicating it in Rocket.toml. If you don’t feel like configuring the database programmatically, feel free to skip over to the next section — Working With Models.
1. If created earlier, delete the Rocket.toml file from the root folder of the project.
2. dotenv is a crate that makes it super easy to work with environment variables from a .env file. Add the dotenv crate as a dependency in your Cargo.toml file:
[dependencies]
...
dotenv = "0.15.0"
3. In main.rs, refactor your rocket function as follows:
#[launch]
fn rocket() -> Rocket {
    dotenv().ok();

    let db_url = env::var("DATABASE_URL").unwrap();
    let db: Map<_, Value> = map! {
        "url" => db_url.into(),
        "pool_size" => 10.into()
    };
    let figment = rocket::Config::figment().merge(("databases", map!["confessions_db" => db]));

    rocket::custom(figment)
        .mount("/", routes![root, static_files])
        .attach(DBPool::fairing())
}
🔬 So what do we have here?
- Line 3: We call the dotenv function (from the dotenv crate) to load the variables found in the root folder’s .env file into Rust’s environment variables.
- Line 5: Using Rust’s standard library, we load the value of the DATABASE_URL variable.
- Lines 6–9: We define a Map that holds two keys — url for the connection string and pool_size for the size of the connection pool.
- Line 10: We create a Rocket config using Figment and add our database name (confessions_db) as a key to the databases collection. This closely resembles the Rocket.toml file, and for good reason — it’s basically the same thing, just done programmatically instead of in a TOML-formatted file.
- Line 12: Instead of initializing the Rocket instance with ignite, we use the custom method, passing in the Figment configuration object created on line 10.

For this to compile, you’ll also need a few extra imports at the top of main.rs: dotenv::dotenv, std::env, and the map! macro plus the Map and Value types re-exported by Rocket (at the time of writing, use rocket::figment::{util::map, value::{Map, Value}};, though the exact path may vary between 0.5-dev versions).
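What dotenv does on line 3 is conceptually simple: it parses KEY=VALUE lines from the .env file and exports them into the process environment, where std::env::var picks them up. A minimal sketch of just the parsing step (the real crate also handles comments, quoting, and variables that are already set):

```rust
use std::collections::HashMap;

// Toy version of the parsing step dotenv performs on a .env file.
// (The real crate then exports each pair into the process environment.)
fn parse_env(contents: &str) -> HashMap<String, String> {
    contents
        .lines()
        .filter_map(|line| line.split_once('=')) // keep KEY=VALUE lines only
        .map(|(k, v)| (k.trim().to_string(), v.trim().to_string()))
        .collect()
}

fn main() {
    let vars = parse_env("DATABASE_URL=postgres://user:pass@localhost/confessions\n");
    assert_eq!(
        vars["DATABASE_URL"],
        "postgres://user:pass@localhost/confessions"
    );
}
```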
—
Before moving on to creating the Database models, let’s build the project to make sure everything compiles as expected.
🏗️ When I tried to compile the project on my MacBook, I got a compilation error from Diesel stating that I’m missing libpq. If you happen to get the same, follow these steps:
1. Install libpq using Homebrew: brew install libpq
2. In the project root folder create a new folder named .cargo and inside of it create a new file called config with the following content:
# for Apple Silicon Macs
[target.aarch64-apple-darwin]
rustflags = ["-L", "/opt/homebrew/opt/libpq/lib"]
# for Intel Macs
[target.x86_64-apple-darwin]
rustflags = ["-L", "/usr/local/opt/libpq/lib"]
3. Run cargo build and enjoy.
Working With Models
To represent our database table in a type-safe way, we need to create a model (struct) that represents it. Think of a model as the link connecting your database table with your Rust code.
1. In your src folder, create a new file called models.rs. This file will be the home for the models we use in our project.
2. We begin with the Confession model, which is used when querying the database. Add the Confession struct to models.rs:
use crate::schema::confessions;
use serde::Serialize;

#[derive(Queryable, Serialize)]
pub struct Confession {
    pub id: i32,
    pub confession: String,
}
Our struct looks identical to the Postgres table schema we created earlier (you can peek into schema.rs for a reminder of how it looks). But what is this Queryable marker on top of it? It is a Diesel derive that marks this struct as a READABLE result from the database. Under the hood, it generates the code needed to load a result from a SQL query.
📢 The order of the fields in the model matters! Make sure to define them in the same order as the table definition in schema.rs.
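Conceptually, Queryable generates a mapping from a database row (a tuple of columns, in order) to your struct, which is why the field order matters. A hand-written stand-in for what the derive produces (a simplification with a local copy of the type, not the actual generated code):

```rust
// Local stand-in for the Confession model.
struct Confession {
    id: i32,
    confession: String,
}

// Roughly what #[derive(Queryable)] provides: build the struct from a
// row tuple, column by column, in order. Swap the fields around and the
// wrong columns land in the wrong places (or fail to type-check).
impl From<(i32, String)> for Confession {
    fn from(row: (i32, String)) -> Self {
        Confession { id: row.0, confession: row.1 }
    }
}

fn main() {
    let row = (1, String::from("I still use tabs"));
    let c = Confession::from(row);
    assert_eq!(c.id, 1);
    assert_eq!(c.confession, "I still use tabs");
}
```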
3. To save a confession to the database, we don’t need to specify the id property, since it’s auto-incremented on the database side. For this reason, we will create an additional model in our models.rs file called NewConfession:
#[derive(Insertable)]
#[table_name = "confessions"]
pub struct NewConfession<'a> {
    pub confession: &'a str,
}
We annotate this new model with the Insertable derive so it can be used to INSERT data into our database. In addition, we add the table_name attribute to specify which table this model inserts data into.
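Note that NewConfession holds a &'a str rather than an owned String: it borrows the confession text from wherever it already lives, so inserting never requires a copy. The same borrowing pattern in plain Rust:

```rust
// Same shape as the NewConfession model: the struct holds a reference
// to text owned elsewhere, valid for lifetime 'a.
struct NewConfession<'a> {
    confession: &'a str,
}

fn main() {
    let content = String::from("I never test in production... honest!");

    // Borrow instead of clone; `content` remains usable afterwards.
    let new = NewConfession { confession: &content };

    assert_eq!(new.confession, content);
    assert_eq!(content.len(), new.confession.len());
}
```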
4. Lastly, we add the schema and models modules to main.rs:
mod models;
mod schema;
Handling API Requests
It’s time to add a new route handler to our Rocket instance to handle POST requests containing new confessions.
1. In main.rs, add a new struct named ConfessionJSON, which represents the JSON data sent to us from the browser:
use crate::models::{Confession, NewConfession};
#[derive(Deserialize)]
struct ConfessionJSON {
    content: String,
}
2. Add a new struct named NewConfessionResponse, which represents the JSON response we send back to the browser upon adding a new confession:
#[derive(Serialize)]
struct NewConfessionResponse {
    confession: Confession,
}
3. Add a new POST route that will handle requests to /confession:
mod error;

use crate::error::CustomError;
use rocket::response::status::Created;

#[post("/confession", format = "json", data = "<confession>")]
async fn post_confession(
    conn: DBPool,
    confession: Json<ConfessionJSON>,
) -> Result<Created<Json<NewConfessionResponse>>, CustomError> {
    // We will fill in the body in step 4.
}
🔬 So what do we have here?
- Line 6: We define the route using three attributes:
  - post — the HTTP verb this route is bound to.
  - format — the required content type of the request. In our case we are going to use application/json. Any POST request to /confession that does not have a content type of application/json will NOT be routed to the post_confession handler.
  - data — the name of the variable the request body will be bound to. In this example, I named the variable confession (surrounded by < and >, which is a must), but you can name it anything you like, as long as you use the same name in the handler (line 7).
- Lines 7–10: Here we define the handler for the /confession route. We pass confession as an argument (the same variable from the data attribute above) and set its type to ConfessionJSON wrapped in rocket_contrib’s Json type. Serde will deserialize the request body’s JSON payload into a Rust struct (ConfessionJSON), giving us a type-safe way to access it. In addition to confession, we get access to the database connection pool we created earlier, thanks to attaching it to our Rocket instance.
- Line 10: The handler returns a Result containing either JSON with an HTTP status of 201 (Created) or an error (using a custom error type that we will write next).
4. Add the implementation for the post_confession handler:
#[post("/confession", format = "json", data = "<confession>")]
async fn post_confession(
    conn: DBPool,
    confession: Json<ConfessionJSON>,
) -> Result<Created<Json<NewConfessionResponse>>, CustomError> {
    let new_confession: Confession = conn
        .run(move |c| {
            diesel::insert_into(schema::confessions::table)
                .values(NewConfession {
                    confession: &confession.content,
                })
                .get_result(c)
        })
        .await?;

    let response = NewConfessionResponse {
        confession: new_confession,
    };

    Ok(Created::new("/confession").body(Json(response)))
}
Now this handler might seem scary (👻) in its current form, so let’s break it into smaller chunks:
- Lines 6–14: We create a new confession by calling the run method on our connection pool and using Diesel’s insert_into function, which takes the table as its argument. Then, using the values method, we pass a struct (of type NewConfession, as that is our Insertable struct) with the data that needs to be saved. Finally, we call await on the run method, as it is an asynchronous function.
- Lines 16–18: We create a new NewConfessionResponse with the result of the insert query (the new_confession variable).
- Line 20: We return a Result with the newly created confession, wrapped with Created to return a status code of 201.
5. If post_confession fails for whatever reason, it returns a CustomError, which we need to create next. It relies on the failure crate, so add failure = "0.1" to the [dependencies] section of Cargo.toml. Then, inside the src folder, create a new file called error.rs with the following content:
use failure::Fail;
use rocket::http::{ContentType, Status};
use rocket::response::{Responder, Response, Result};
use rocket::Request;
use std::io::Cursor;

#[derive(Debug, Fail)]
pub enum CustomError {
    #[fail(display = "Database Error {}", _0)]
    DatabaseErr(diesel::result::Error),
}

impl From<diesel::result::Error> for CustomError {
    fn from(e: diesel::result::Error) -> Self {
        CustomError::DatabaseErr(e)
    }
}

impl<'r> Responder<'r, 'static> for CustomError {
    fn respond_to(self, _: &'r Request<'_>) -> Result<'static> {
        let body = format!("Diesel error: {}", self);
        let res = Response::build()
            .status(Status::InternalServerError)
            .header(ContentType::Plain)
            .sized_body(body.len(), Cursor::new(body))
            .finalize();
        Ok(res)
    }
}
I won’t go into much detail on what’s happening here, but the main takeaways from this file are:
- We create an enum to hold different error types and decorate it with failure’s Fail derive (lines 7–11).
- We implement the From trait so we can convert from the Diesel error type (lines 13–17).
- Rocket requires the response of a handler (be it an error or a valid response) to implement the Responder trait. We implement this trait on our CustomError to display an error (of diesel::result::Error) to the caller (lines 19–28).
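That From implementation is what lets the ? in post_confession work: ? converts the underlying error into a CustomError automatically. Here is the same mechanism with stand-in types (not the real diesel error, just illustrative local ones):

```rust
use std::fmt;

// Stand-in for diesel::result::Error.
#[derive(Debug)]
struct DieselError(String);

// Stand-in for our CustomError enum.
#[derive(Debug)]
enum CustomError {
    DatabaseErr(DieselError),
}

impl fmt::Display for CustomError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            CustomError::DatabaseErr(e) => write!(f, "Database Error {:?}", e),
        }
    }
}

// Identical in shape to the impl in error.rs.
impl From<DieselError> for CustomError {
    fn from(e: DieselError) -> Self {
        CustomError::DatabaseErr(e)
    }
}

// A fallible "query" returning the low-level error type.
fn query() -> Result<i32, DieselError> {
    Err(DieselError("connection lost".into()))
}

// The `?` below performs the From conversion for us.
fn handler() -> Result<i32, CustomError> {
    let value = query()?;
    Ok(value)
}

fn main() {
    match handler() {
        Err(CustomError::DatabaseErr(e)) => assert_eq!(e.0, "connection lost"),
        Ok(_) => unreachable!(),
    }
}
```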
6. Our post_confession handler is now complete 🎉. Let’s mount it to our Rocket instance’s routes with a new base of /api:
#[launch]
fn rocket() -> Rocket {
    rocket::ignite()
        .mount("/", routes![root, static_files])
        .mount("/api", routes![post_confession])
        .attach(DBPool::fairing())
}
7. We are finally ready to test our new API! Run the app with cargo run, and in a different terminal run the following curl:
curl -X POST http://localhost:8000/api/confession -H "Content-Type: application/json" -d '{"content": "I am in love with the girl next door" }'
If all went well, you should get back a JSON response with the confession and its new ID, something like {"confession":{"id":1,"confession":"I am in love with the girl next door"}}.
That was quite a ride, wasn’t it? You’d be happy to know (or not) that adding the GET route — for getting a random confession out of Postgres — is a much simpler task:
1. Add the new get_confession handler to main.rs:
no_arg_sql_function!(RANDOM, (), "Represents the sql RANDOM() function");

#[get("/confession", format = "json")]
async fn get_confession(conn: DBPool) -> Result<Json<Confession>, CustomError> {
    let confession: Confession = conn
        .run(|c| {
            schema::confessions::table
                .order(RANDOM)
                .limit(1)
                .first::<Confession>(c)
        })
        .await?;

    Ok(Json(confession))
}
Nothing really exciting is happening here. We get a connection from the pool (line 5), use the confessions table (line 7) to query for a single random confession (line 1 defines the SQL RANDOM function; lines 8–10 build the query, roughly SELECT * FROM confessions ORDER BY RANDOM() LIMIT 1), and eventually return a JSON of the confession (line 14).
2. Mount the new get_confession handler to our Rocket instance’s routes:
#[launch]
fn rocket() -> Rocket {
    rocket::ignite()
        .mount("/", routes![root, static_files])
        .mount("/api", routes![post_confession, get_confession])
        .attach(DBPool::fairing())
}
3. Launch 🚀 with cargo run, and in another terminal window run this lovely curl:
curl http://localhost:8000/api/confession -H "Content-Type: application/json"
Now, what is it that we got back? A random confession from Postgres that’s what!
And with that, our API is complete. We have a Rocket web server running with two API endpoints (post and get confessions) and an additional route to handle static content. Let’s move on to the final task for our website — adding the presentation layer.