Structuring my applications

One of the biggest struggles for me, as an app developer, is coming up with an architecture that I'm happy with. It's something I wish other developers talked about more often. I thoroughly enjoyed Kris Wallsmith's SymfonyCon talk. It's very raw and real, and it never comes across as him talking down to anyone. Do I agree with everything he says? No, but that's not a bad thing. It's very insightful and I really enjoy taking a peek behind the curtain to see how other people do things.

This is my attempt at doing just that. What I'm going to say isn't unique to me; it's me stealing a bunch of good ideas from several developers smarter than I am. I want to give a big thanks to Mathias Verraes. He's got it figured out and happily helps me sort things out in the DDDinPHP forum. His blog is also chock-full of awesome posts.

Preface

Let me preface this by saying that I use Symfony. The things I will talk about can be applied to any framework though. Please don't let that scare you away!

Packages

A while back I posted a rant on /r/php about the Laravel community going on and on about hexagonal/command-bus architectures. It seemed completely moronic to me, since a command bus is basically a crippled event dispatcher. Oh, how wrong I was. I want to take a moment and apologize to Shawn McCool, Jeffrey Way, and Ross Tuck for that. They responded gracefully to my ignorant rant. Needless to say, I'm now a big proponent of using a command bus.

If you're disciplined about using a command bus, it's a fantastic way to only allow a single entry point into your entire application. I'm using Matthias Noback's Simple Bus package for my command bus. It's framework agnostic, but has packages to integrate with Symfony and Doctrine ORM which I make use of.

Next, I pull in Benjamin Eberlei's Assert package. This package is incredibly simple, yet so powerful. It's just a ton of static methods that you can use to check the validity of data, throwing exceptions when an assertion fails. It's a simple, straightforward way to ensure incoming data is valid, which keeps your domain in a valid state. After installing that package, I create my own AssertionFailedException and subclass the Assertion class to use my custom exception. I also write my own static assertion methods in this class, specific to my domain.
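
Here's roughly what that ends up looking like. This is a sketch; the real class just accumulates domain-specific assertions over time, and the ::id() method below is the one the Asset entity uses later in this post.

namespace Acme\AwesomeProject\Model\Validation;

use Acme\AwesomeProject\Model\Exception\AssertionFailedException;
use Assert\Assertion;

class Assert extends Assertion
{
    // failed assertions throw our exception instead of the package's
    protected static $exceptionClass = AssertionFailedException::class;

    /**
     * Domain-specific assertion: ids in this app are UUID strings.
     *
     * @param mixed $value
     * @param string|null $message
     */
    public static function id($value, $message = null)
    {
        static::uuid($value, $message ?: "Invalid id.");
    }
}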

Digging in

After I install my framework and the packages above, I create a src/Acme/AwesomeProject/Model directory. This is where the meat of my application lives. Code in here is meant to be completely unaware of any http framework.

I also create a complementary infrastructure directory. Symfony uses bundles for this. Mine would be found in src/Acme/AwesomeProject/Infrastructure/AppBundle. This is where my Symfony implementation lives.

Here's what typically lives in my Model directory. I'll go over each subdirectory.

Model Directory

Command/
Entity/
Event/
Exception/
Handler/
Provider/
Repository/
Security/
Service/
Tests/
Validation/
ValueObject/

Entity

This is where I start. I define my persistable entities in this directory. This is also where my first trade-off is. If I were really hardcore about being completely decoupled, I would define interfaces for my entities in here instead of concrete classes. That would, hypothetically, allow me to be ORM agnostic. It's too much of a hassle to be worth it to me. Because I'm using Doctrine ORM, a data-mapper ORM, there is very little trace of Doctrine in my entities anyway.

Although I don't like it, I'm also implementing Symfony's security UserInterface on my User entity. I don't like it because I want this code to have as little to do with any framework as possible. I could get around it if I cared enough, but it's just a minor annoyance.

I take special care when defining my entities' constructors. I inject only what is required for that entity to be valid, not all possible fields. I use setters for the optional fields.

namespace Acme\AwesomeProject\Model\Entity;

use Acme\AwesomeProject\Model\Event\AssetWasEntered;
use Acme\AwesomeProject\Model\Validation\Assert;
use DateTime;
use SimpleBus\Message\Recorder\ContainsRecordedMessages;
use SimpleBus\Message\Recorder\PrivateMessageRecorderCapabilities;

class Asset implements ContainsRecordedMessages
{
    use PrivateMessageRecorderCapabilities;

    // fields

    public function __construct($id, $name, Category $category)
    {
        Assert::id($id, "Invalid asset id.");
        Assert::string($name, "Asset name must be a string.");

        $this->id = $id;
        $this->createdAt = new DateTime();
        $this->name = $name;
        $this->category = $category;

        $this->record(new AssetWasEntered($id));
    }

    // methods
}

There are several things to talk about with the code snippet above.

One of the great things about using Simple Bus, the command bus package, is its Doctrine integration. You can configure it to use a middleware so all command handlers get wrapped in a database transaction. Great for keeping the integrity of your data.
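
The bridge provides that middleware for you through configuration, but it's worth demystifying. Here's roughly what transaction-wrapping middleware boils down to; a sketch using Doctrine's transactional() helper, not Simple Bus's actual class.

namespace Acme\AwesomeProject\Infrastructure\AppBundle\Middleware;

use Doctrine\ORM\EntityManagerInterface;
use SimpleBus\Message\Bus\Middleware\MessageBusMiddleware;
use SimpleBus\Message\Message;

class WrapInTransactionMiddleware implements MessageBusMiddleware
{
    private $entityManager;

    public function __construct(EntityManagerInterface $entityManager)
    {
        $this->entityManager = $entityManager;
    }

    public function handle(Message $command, callable $next)
    {
        // everything the handler does is committed, or rolled back together
        $this->entityManager->transactional(function () use ($command, $next) {
            $next($command);
        });
    }
}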

There's also a middleware you can use that will dispatch events you've recorded in your entities. As you can see above, I'm implementing a Simple Bus interface and taking advantage of a trait to implement it. That gives me access to the ::record($event) method. The middleware hooks into Doctrine so that after an entity is persisted, it dispatches all events recorded in your entities.

Now, anywhere I new up this entity and it gets persisted, I can be sure this event will be dispatched. I could also record an event if the asset gets renamed. The event could contain the asset id, the old name, and the new name. Maybe I use it for auditing or sending an email alert.

I also pass in the id. I use Ben Ramsey's uuid package to generate these. Using UUIDs is so much nicer than using your database's auto-incremented values. You can dispatch events with the id before the entity is actually persisted, importing relational data into your system is much easier, and if you use UUIDs in your urls they don't reveal how big...or small your app is. :p
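
Generating one is a one-liner with the uuid package. A quick sketch; $category is a stand-in for an entity fetched elsewhere.

use Ramsey\Uuid\Uuid;

// the id is known before anything touches the database,
// so events recorded in the constructor can safely carry it
$id = Uuid::uuid4()->toString();

$asset = new Asset($id, 'Forklift', $category);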

Events and Event Listeners

I only define event classes and record them when I need them, though. I could spend all day thinking up all of the events that could potentially be useful in the future. That's premature clutter.

When it comes to what gets included as fields in my event objects, I only use scalar data, not objects. You'll notice above I only passed the id into the AssetWasEntered event and not the Asset instance. I want to make it dead simple to audit these events. Converting these events into json and storing that in a database is a very powerful, easy thing to do. It's also cheap to refetch these entities by id. Most likely I will have already gotten and persisted the entity in a command handler. If that's the case, it's already being managed by Doctrine and asking for it again won't require executing sql and hydrating a new object. It will simply give me the same instance it's already managing.

It's important to note that these events get dispatched after the command is handled, which is wrapped in a database transaction. Consider this scenario: you have some crazy business rule where, if the asset's name starts with the letter "B", another field and a related entity need to be updated. Rather than satisfy this requirement directly in an event listener, I write another command and command handler, named appropriately for that particular requirement. I use the event listener to new up that command and the command bus to handle it. The listener is just glue, as shown below. Now I have that bit of data manipulation named very explicitly in code, and I can reuse it elsewhere if needed. Event listeners are very similar to a framework's controllers. They're a thin layer that passes application commands into the bus.
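
Here's the shape of that glue; the listener, command, and event names are hypothetical, but the pattern is always the same.

namespace Acme\AwesomeProject\Model\Listener;

use Acme\AwesomeProject\Model\Command\UpdateRelatedEntityCommand;
use Acme\AwesomeProject\Model\Event\AssetWasRenamed;
use SimpleBus\Message\Bus\MessageBus;

class WhenAssetWasRenamed
{
    private $commandBus;

    public function __construct(MessageBus $commandBus)
    {
        $this->commandBus = $commandBus;
    }

    public function notify(AssetWasRenamed $event)
    {
        // the business rule itself lives in the command's handler,
        // where it's named explicitly and reusable
        $this->commandBus->handle(
            new UpdateRelatedEntityCommand($event->getAssetId())
        );
    }
}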

I ran into a tricky issue that I had to deal with: delete events. They're not like creation events; you can't just raise them in the __destruct() method. No one really has control over when that gets executed (unless you call it explicitly).

I have a business requirement that dictates an email needs to be sent when a Task for a Project gets deleted. I had a couple of ideas. The first was to dispatch an event in my command handler after I called $tasks->remove($task). I also thought of creating a public method on the task, $task->markForDeletion(), which would record the event in the entity. Both of these are flawed, though. What happens in another command handler when I just delete the tasks' owning project? My ORM will delete the project and all of its tasks, never giving me a chance to dispatch events for each task.

I finally came to the realization that there was only one way to reliably ensure that delete events for my entities occur: I have to hook directly into my ORM. I have an event listener tied to my ORM that listens for all entity deletions and fires the appropriate event for each entity type.
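
Here's a simplified sketch of that hook. TaskWasDeleted is hypothetical, and the wiring that dispatches the recorded events after a successful flush is left out.

namespace Acme\AwesomeProject\Infrastructure\AppBundle\EventListener;

use Acme\AwesomeProject\Model\Entity\Task;
use Acme\AwesomeProject\Model\Event\TaskWasDeleted;
use Doctrine\ORM\Event\LifecycleEventArgs;

// registered against Doctrine's preRemove lifecycle event
class EntityDeletionListener
{
    private $recordedEvents = [];

    public function preRemove(LifecycleEventArgs $args)
    {
        $entity = $args->getEntity();

        // record the appropriate event for each entity type we care about
        if ($entity instanceof Task) {
            $this->recordedEvents[] = new TaskWasDeleted($entity->getId());
        }
    }

    // the recorded events get dispatched once the flush succeeds
    public function releaseEvents()
    {
        $events = $this->recordedEvents;
        $this->recordedEvents = [];

        return $events;
    }
}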

Commands and Handlers

This is where the business logic lives. A command has a single handler. Commands convey user intent. Events are by-products of those intents. The commands are almost identical to event objects. I don't pass objects into them, strictly scalar data. My http framework code is responsible for mapping http requests into command objects. I do, however, often convert that raw data into value objects.

namespace Acme\AwesomeProject\Model\Command;

use Acme\AwesomeProject\Model\ValueObject\PersonName as Name;
use Acme\AwesomeProject\Model\ValueObject\Location;
use SimpleBus\Message\Message;

class TrackPersonCommand implements Message
{
    // fields

    public function __construct(
        $userId, 
        $firstName,
        $lastName, 
        $address,
        $city,
        $state,
        $zipCode
    ) {
        // assignments
    }

    public function getName()
    {
        return new Name(
            $this->firstName, 
            $this->lastName
        );
    }   

    public function getLocation()
    {
        return new Location(
            $this->address, 
            $this->city, 
            $this->state, 
            $this->zipCode
        );
    }
}

Remember that Assert package? In the constructors of these value objects, PersonName and Location, I'm using assertions to ensure that data is valid. Now it's super easy for my command handlers to work with validated data.
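
For example, PersonName might look something like this sketch.

namespace Acme\AwesomeProject\Model\ValueObject;

use Acme\AwesomeProject\Model\Validation\Assert;

class PersonName
{
    private $firstName;
    private $lastName;

    public function __construct($firstName, $lastName)
    {
        // invalid data can never make it into a PersonName instance
        Assert::string($firstName, "First name must be a string.");
        Assert::string($lastName, "Last name must be a string.");

        $this->firstName = $firstName;
        $this->lastName = $lastName;
    }

    public function getFirstName()
    {
        return $this->firstName;
    }

    public function getLastName()
    {
        return $this->lastName;
    }
}

And here's the handler that consumes those value objects.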

namespace Acme\AwesomeProject\Model\Handler;

use Acme\AwesomeProject\Model\Command\TrackPersonCommand;
use Acme\AwesomeProject\Model\Entity\Person;
use Acme\AwesomeProject\Model\Repository\PersonRepository;
use SimpleBus\Message\Handler\MessageHandler;
use SimpleBus\Message\Message;

class TrackPersonHandler implements MessageHandler
{
    // fields

    public function __construct(PersonRepository $people)
    {
        // assignment
    }

    /**
     * @param TrackPersonCommand|Message $command
     */
    public function handle(Message $command)
    {
        $person = new Person(
            $command->getPersonId(),
            $command->getName(),
            $command->getLocation()
        );

        $this->people->track($person);
    }
}

Look how easy that handler is to read. It's amazingly clear what's going on, and we can be sure the data is valid because of our assertions.

What about validation that requires the database? Maybe the people we're tracking also require a unique nickname. We have a couple of options.

We could define a middleware class that wraps our command bus, checks if the command is an instance of TrackPersonCommand, and if it is, attempts to get a person by that nickname, i.e. $people->findByNickName($command->getNickName()). If the result is not null, throw a validation exception.

Or, we could simply put that check directly in the handler. I go with the latter.
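
The relevant part of the handler would look something like this. It assumes the repository grew a findByNickName() method that returns null when no match is found, and that the command carries a getNickName() getter; both are illustrative.

// inside TrackPersonHandler::handle(), before creating the Person
if ($this->people->findByNickName($command->getNickName()) !== null) {
    throw new ValidationException("That nickname is already taken.");
}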

I also use commands as DTOs for retrieving entities. For example, I would have a GetPersonCommand and handler. I was tempted to just interact with my person repository directly in my http framework's controller. It seemed like overkill to create a command and handler for that...until I realized I would be losing out on the middleware. I wouldn't have a command to serialize for auditing, or caching, or authorization. Even though I stated that I strictly keep scalar data inside of my commands, I lied. I put setters and getters on commands for entity retrieval. That way, when I create the command object in my controller, I can pass it to the command bus to be handled. If no exceptions are thrown, I can just use the command's getter to get the entity for display. My controllers don't need to be aware of repositories.

namespace Acme\AwesomeProject\Infrastructure\AppBundle\Controller;

use Acme\AwesomeProject\Model\Command\GetPersonCommand;
use Symfony\Component\HttpFoundation\Response;

class PersonController extends ApiController
{
    public function getPersonAction($personId)
    {
        $command = new GetPersonCommand($personId);
        $this->getCommandBus()->handle($command);

        $person = $command->getPerson();

        return $this
            ->setStatusCode(Response::HTTP_OK)
            ->setData(['person' => $person])
            ->respond();
    }
}

Exceptions

These are exceptions that are unique to my app's domain. Some examples: EntityNotFoundException, ValidationException, AccessDeniedException. All of these exceptions extend my own DomainException. In your framework's exception-handling code you can check whether the exception is an instance of DomainException. If it is, show the exception's message and set the response code to the exception's code. Of course, this means your exceptions' codes must map to valid http status codes. If the exception isn't one of your own, return a 500 with a generic "Internal server error" message.

namespace Acme\AwesomeProject\Model\Exception;

class AccessDeniedException extends DomainException
{
    public function __construct($message = 'Access Denied', \Exception $previous = null)
    {
        parent::__construct($message, 403, $previous);
    }
}
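
In Symfony, that check can live in a kernel.exception listener. Here's a sketch; a json api is assumed.

namespace Acme\AwesomeProject\Infrastructure\AppBundle\EventListener;

use Acme\AwesomeProject\Model\Exception\DomainException;
use Symfony\Component\HttpFoundation\JsonResponse;
use Symfony\Component\HttpKernel\Event\GetResponseForExceptionEvent;

class DomainExceptionListener
{
    public function onKernelException(GetResponseForExceptionEvent $event)
    {
        $exception = $event->getException();

        if ($exception instanceof DomainException) {
            // domain exceptions carry http-compatible codes and safe messages
            $response = new JsonResponse(['error' => $exception->getMessage()], $exception->getCode());
        } else {
            $response = new JsonResponse(['error' => 'Internal server error.'], 500);
        }

        $event->setResponse($response);
    }
}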

Provider

This directory currently only holds a single interface, CurrentUserProvider. It has a single method, $provider->getUser(). My framework's implementation of this interface lives outside of the Model directory.
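
The whole interface amounts to this.

namespace Acme\AwesomeProject\Model\Provider;

use Acme\AwesomeProject\Model\Entity\User;

interface CurrentUserProvider
{
    /**
     * @return User
     */
    public function getUser();
}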

Repository

This directory holds interfaces for my repositories. It contains no implementations. My Doctrine implementations live outside of the Model directory.

namespace Acme\AwesomeProject\Model\Repository;

use Acme\AwesomeProject\Model\Entity\User;
use Acme\AwesomeProject\Model\Exception\UserNotFoundException;

interface UserRepository
{
    /**
     * @param string $id
     * @return User
     * @throws UserNotFoundException
     */
    public function find($id);

    /**
     * @param string $email
     * @return User
     * @throws UserNotFoundException
     */
    public function findByEmail($email);

    /**
     * @param string $token
     * @return User
     * @throws UserNotFoundException
     */
    public function findByConfirmationToken($token);

    /**
     * @param User $user
     */
    public function add(User $user);

    /**
     * @param User $user
     */
    public function remove(User $user);
}

Security

It took me a while to figure out a clean way to incorporate authorization into my application without referencing an outside security component. Symfony provides the concept of security voters in its security component, but that would couple my authorization directly to an outside security component. That bugged me.

Because I'm using a command bus architecture and have a single point of entry, middleware made perfect sense. I created a Middleware directory inside the Security directory. This is where I'm enforcing authorization.

I've defined a base abstract AuthMiddleware class which all of my auth middlewares will extend.

namespace Acme\AwesomeProject\Model\Security\Middleware;

use Acme\AwesomeProject\Model\Entity\User;
use Acme\AwesomeProject\Model\Exception\AccessDeniedException;
use Acme\AwesomeProject\Model\Provider\CurrentUserProvider;
use SimpleBus\Message\Bus\Middleware\MessageBusMiddleware;
use SimpleBus\Message\Message;

abstract class AuthMiddleware implements MessageBusMiddleware
{
    /**
     * @var CurrentUserProvider
     */
    protected $userProvider;

    /**
     * @param CurrentUserProvider $userProvider
     */
    public function __construct(CurrentUserProvider $userProvider)
    {
        $this->userProvider = $userProvider;
    }

    /**
     * {@inheritdoc}
     */
    public function handle(Message $command, callable $next)
    {
        if ($this->applies($command)) {
            $this->beforeHandle($command);
            $next($command);
            $this->afterHandle($command);
        } else {
            $next($command);
        }
    }

    /**
     * Do your auth check before the command is handled.
     *
     * @param Message $command
     * @throws \Exception
     */
    protected function beforeHandle(Message $command)
    {
        // no-op
    }

    /**
     * Do your auth check after the command is handled.
     *
     * @param Message $command
     * @throws \Exception
     */
    protected function afterHandle(Message $command)
    {
        // no-op
    }

    /**
     * @param string $msg
     * @throws AccessDeniedException
     */
    protected function denyAccess($msg = null)
    {
        throw new AccessDeniedException($msg ?: "Access denied.");
    }

    /**
     * @return User
     */
    protected function getUser()
    {
        return $this->userProvider->getUser();
    }

    /**
     * @param Message $command
     * @return bool
     */
    abstract protected function applies(Message $command);
}

This gives me a nice little framework for implementing auth middleware.

namespace Acme\AwesomeProject\Model\Security\Middleware;

use Acme\AwesomeProject\Model\Command\RelocatePersonCommand;
use Acme\AwesomeProject\Model\Provider\CurrentUserProvider;
use Acme\AwesomeProject\Model\Repository\PersonRepository;
use SimpleBus\Message\Message;

class RelocatePersonCommandMiddleware extends AuthMiddleware
{
    //fields

    public function __construct(CurrentUserProvider $userProvider, PersonRepository $people)
    {
        // assignment
    }

    /**
     * @param RelocatePersonCommand|Message $command
     */
    protected function beforeHandle(Message $command)
    {
        $person = $this->people->find($command->getPersonId());

        if ($person->getCreatedBy() !== $this->getUser()) {
            $this->denyAccess();
        }
    }

    /**
     * {@inheritdoc}
     */
    protected function applies(Message $command)
    {
        return get_class($command) === RelocatePersonCommand::class;
    }
}

Service

This is where I put general services: interfaces and utility services. For example, I have a UserMailer interface defined. In my framework code, outside of the Model directory, I have a TwigSwiftMailerUserMailer implementation.

Tests

This is where I put my unit tests. These are tests that do not require a database connection, web server, etc. They are strictly unit tests, not acceptance or functional tests.

Validation

This is where my subclass of the Assertion class lives.

ValueObject

This is where I put objects that don't require ids and are strictly for transferring data. Some examples are PersonName, Location, and CategorizedInvoiceSummary. That last one is interesting. Sometimes I have endpoints that show dashboard views. They contain counts of certain entities.

namespace Acme\AwesomeProject\Model\ValueObject;

class CategorizedInvoiceSummary
{
    //fields

    public function __construct(
        $userId, 
        $categoryId,
        $totalInvoices,
        $overdueInvoices,
        $paidInvoices
    ) {
        // assignment
    }

    public function getUserId()
    {
        return $this->userId;
    }

    public function getCategoryId()
    {
        return $this->categoryId;
    }

    // etc...
}

Maybe my application is some sort of ecommerce platform. The business needs to be able to view a listing of invoices by category. I would have a StatRepository with something like $stats->getCategorizedInvoiceSummaries($user), which would return a collection of the objects above.

If you're using Doctrine ORM, your entities can have value objects as fields and you can map those to appropriate columns. This cleans up your entity code quite a bit. It's a great feature.
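
Here's a sketch of what that mapping looks like with Doctrine 2.5's embeddables; the Location fields shown are assumptions.

namespace Acme\AwesomeProject\Model\ValueObject;

use Doctrine\ORM\Mapping as ORM;

/**
 * @ORM\Embeddable
 */
class Location
{
    /** @ORM\Column(type="string") */
    private $address;

    /** @ORM\Column(type="string") */
    private $city;

    // etc...
}

Then, on the owning entity, a single annotated field pulls those columns in (prefixed by default, e.g. location_city).

    /**
     * @ORM\Embedded(class="Acme\AwesomeProject\Model\ValueObject\Location")
     */
    private $location;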

In summary

Other developers should be able to peruse my Model directory, and it should be abundantly clear what the application can do. They won't be bogged down mentally with the implementation details. That is a beautiful thing.

I don't work on this Model directory in isolation. I'm bouncing from it to framework implementation code as I go. In part 2 of this series I will go over some framework specific code and issues I ran into and how I dealt with those. Read on!

Tags: Symfony, PHP

Exploring ember-data

Ember-data was a big struggle for me when I first started to learn Ember.js. Let's explore ember-data through some living and breathing examples using JsonStub and ember-data's RESTAdapter.

Get some records

Usually, the first thing we need to be able to do is retrieve a list of records. Let's get a list of project models that make use of every attribute type ember-data has to offer. Click on the Projects link below to go to the projects route and fetch the models. You can then click on a project to visit it.

Here's what the json being returned looks like. Pretty straightforward stuff.

{
    "projects": [
        {
            "id": 1,
            "title": "Develop Awesome Feature",
            "participantCount": 5,
            "created": "2014-11-12T15:30:00",
            "public": true
        },
        {
            "id": 2,
            "title": "Squash Annoying Bug",
            "participantCount": 2,
            "created": "2014-11-15T08:45:00",
            "public": false
        }
    ]
}
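
The matching model definition could look like this; a globals-style sketch from that era of Ember, covering all four of ember-data's built-in attribute types.

// the project model
App.Project = DS.Model.extend({
    title: DS.attr('string'),
    participantCount: DS.attr('number'),
    created: DS.attr('date'),
    public: DS.attr('boolean')
});
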
Tip! The bins too small? Just click on the JS Bin link in the top left of a bin to open it up in a new tab. Once you've done that, you can see the *actual* http requests being made by opening your browser's developer console.

Tags: Javascript, Ember, Ember-data

How I set up my server

As time has passed I've grown more and more comfortable developing applications. I've formulated my own opinions about architecture, design patterns, packages, testing, etc. I finally feel like I have a fairly decent grasp on developing modern php applications.

However, I've recently come to the realization that I, as a web application developer, lack skills in one huge area. That area would be server management. This is my brain dump of opinions, what I've learned, and tools I've found that have made me much more confident when working with servers.

In the beginning

As a Vagrant noob at the time, I was thrilled to death when sites like PuPHPet popped up. I could just use a web GUI to pick out the pieces of my stack and voilà, my stack was up and running in a VM that emulates my production box. It was awesome...until I needed to tweak something. Then I was back where I started, scared of the command line and unsure of how to accomplish what I needed to. That was a bit of a problem.

According to the internets, I needed to learn how to use a provisioning tool like Puppet, Chef, Ansible, Salt, etc. This was how real developers got things done on their servers! After mucking around with Puppet (and hating my life) I was so frustrated. Puppet is hard. I wasn't getting anywhere, and I was wasting time attempting to learn this tool when I could have been developing.

My provisioning revelation

I finally came to the conclusion that I was trying to run before I could even crawl. I needed to figure out how to do things on the server manually, via the command line, before attempting to use a provisioning tool, which abstracts all of that away.

That's about the time I found Servers for Hackers and Vaprobash. Servers for Hackers is an email newsletter/blog/book for developers to get more familiar with their servers. Vaprobash is a set of provisioning shell scripts to use with Vagrant to set up a VM. No fancy provisioning tool here. It is such a good resource to look over to see how to install and configure things by hand. @fideloper is doing an awesome job with both of those. Thanks Chris!

Poring over both of those was such a gigantic leap forward in making me feel comfortable in the command line. I would use Vagrant to get a fresh VM up with nothing installed. I would then reference Vaprobash and other resources I found to install and configure software from the command line. If I screwed something up beyond repair I wasn't worried at all. I'd just destroy the VM and start over. I learned SO much from doing that.

I got to the point where installing and configuring Nginx, PHP-FPM, Postgres, Redis, etc wasn't a big deal. Yes, it took some hours getting familiar with them but I now know exactly how to work with them. I was able to get a real server up and running on DigitalOcean. I even learned how to use both Capifony and Rocketeer to deploy my code.

Reliability with Monit

You'd think at this point I'd be feeling pretty damn good. Well, I did and I didn't. What happens if any of my services decide to stop? I'd be in a bit of a pickle if nginx just stopped or my php-fpm processes got killed. The other part of that insecurity was wondering how I would even know if one of those services stopped. Finding out via emails from my app's users didn't sit well with me at all.

After doing a bit of searching I came across Monit. Monit is a "utility for managing and monitoring Unix systems". It's exactly what I needed to ease my nerves. It will sit there and poll services every so often to determine whether each service is running. If one isn't, it will attempt to restart the service automatically. It was also dead simple to install and start monitoring services. I set it up to monitor nginx, php-fpm, postgres, and redis. You can even configure it to send you email notifications when services stop, when restarts succeed, when the PIDs of services change, and more. I referenced this page and configured it to use my mailgun account to send me emails. I felt so much more at ease after learning about and using Monit.

Here's how simple Monit is to configure. This keeps nginx running.

# /etc/monit/conf.d/nginx.conf

check process nginx with pidfile /var/run/nginx.pid
    start program = "/etc/init.d/nginx start"
    stop program = "/etc/init.d/nginx stop"

There's a ton of examples illustrating how to get started monitoring various services.

Tip! To use the monit status command you need to have configured the web service. You can just set the "use address" and "allow" address to "localhost". See this gist.
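
That configuration amounts to something like this.

# /etc/monit/monitrc

# enable monit's web service so the monit status command works
set httpd port 2812 and
    use address localhost
    allow localhost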

Visualizing health with Graphite and StatsD

Everything was running smoothly at that point in time but I still had no idea how to monitor the health of my server and application. I didn't even know what "healthy" meant. How much RAM and CPU usage is normal? How close are my disks to being filled up? How much memory is php using in a typical http request?

I needed a way to keep track of these things. Luckily, I already knew about Graphite and StatsD. Graphite is a python web application. You can send it any kind of stat or metric you can think of over http, it will store it, and then you can visualize those stats by building graphs. Graphite is also incredibly cool because you don't need to do any setup for each statistic you want to send. You just start sending the stat and you're good to go. There's no configuration on a per-stat basis.

Instead of sending stats directly to graphite, you can send them via UDP to StatsD, a nodejs daemon built by the folks at Etsy. Because of the nature of UDP, when your web application sends stats to your StatsD daemon, it doesn't wait for a result. This is fantastic. It means you can send as many stats and track as many things as you want without worrying about your code slowing down. It's brilliant.
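
The protocol is as primitive as it gets. From the shell, sending a stat looks like this; 8125 is StatsD's default UDP port.

# increment a counter
echo "myapp.logins:1|c" | nc -u -w0 localhost 8125

# record a timing in milliseconds
echo "myapp.response_time:320|ms" | nc -u -w0 localhost 8125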

After becoming comfortable in the command line, I came up with a provisioning script a while ago to install Graphite and StatsD in a local VM to play around with. However, trying to follow the same steps on a newer flavor of linux led to a miserable failure. Luckily, I found a guide on installing Graphite and StatsD on the version of linux I was using (Ubuntu 14.04). Following it was so much easier than what I went through when I was coming up with my provisioning script.

I got all of this up on a 512MB DigitalOcean droplet and used Monit to ensure the required services stayed up...except they weren't staying up. I found that some services kept dying because the box was running out of RAM. I had been meaning to learn how to use DigitalOcean's snapshots, and this was a perfect time. I created a 1GB droplet and have been good ever since. Spending $10/month to be able to track statistics for all of my projects is money well spent, in my opinion.

Collecting stats with collectd

In the Graphite installation guide you'll notice a section on collectd. Collectd is a daemon that collects metrics from your server every so often. You install this on your application server. With the plugins it ships with, I configured it to gather the following stats: RAM usage, CPU usage, disk space, nginx load, and system load. I also made use of the postgres plugin to query my database and report the counts of certain tables. I'm reporting how many users, and other records specific to my application, exist. Collectd also comes with a plugin for reporting to Graphite. You just tell it the ip, port, and a few other settings, and just like that I was reporting the health of my server.
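
The Graphite half of my collectd config looks something like this; the write_graphite plugin does the reporting, and the host is wherever your Graphite box lives.

# /etc/collectd/collectd.conf

<Plugin write_graphite>
    <Node "graphite">
        Host "{my-graphite-ip}"
        Port "2003"
        Protocol "tcp"
        Prefix "collectd."
    </Node>
</Plugin>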

I ran into a few issues when I was configuring collectd, though. When reporting on disk size I needed to figure out which drives I had available. I used the df command to see that I had a /dev/vda drive. I configured the plugin according to the docs but wasn't getting any stats. It took me a while to figure out that I had configured it for the wrong type of drive. I had specified "ext3" when it should have been "ext4". You can use the blkid command to show what kind of drive you have, e.g. blkid /dev/vda.

The other issue I ran into was trying to configure the postgres plugin to gather count metrics for my tables. I finally fooled around with it long enough and figured out I needed to be using the "gauge" metric type in the plugin.

# /etc/collectd/collectd.conf

# other plugin configuration...

<Plugin postgresql>
    <Query users>
        Statement "SELECT count(*) as count from users;"
        <Result>
            Type gauge
            InstancePrefix "users"
            ValuesFrom "count"
        </Result>
    </Query>
    <Database your_database_name>
        Host "localhost"
        Port 5432
        User "your_database_user"
        Password "your_database_password"

        SSLMode "prefer"

        Query users
    </Database>
</Plugin>

You can define as many Query blocks as you want, just remember to reference them in the Database block. You can even use several databases. All you need to do is define another Database block.

Because collectd is constantly sending stats to my Graphite instance, I can create graphs to visualize how my server is performing over time. I now have a baseline. I now know what "healthy" means. If graphs start spiking, I know there's a problem. I can also pat myself on the back when I see my database counts growing, because that means people are using my app! :)

Slick graphs with Grafana

So hopefully I've sold you on Graphite, but it gets even better. Your graphs can be even slicker. Allow me to introduce you to Grafana, an AngularJS frontend for Graphite. You define your graphs in Grafana and it uses Graphite's REST api to gather the metrics to render them. It stores your graph definitions in elasticsearch. That's the only requirement Grafana has, aside from a web server and a Graphite instance to get data from.

I've created two dashboards so far. One to visualize the health of my server and one for seeing the activity of my application. They're glorious.


I'm tracking RAM, CPU, Disk Space, Nginx requests, php-fpm memory used per request, and system load.


In my application dashboard, I'm tracking how many requests my app is getting, user count, response time for each request, and counts of my other application database records. Although it's not depicted in the images above, I'm also sending a "release" stat in my deployment process. Once I graph those as vertical lines, I'll be able to tell if a release had performance impacts.

Regular database backups to S3

The last bit of assurance I'll touch on is database backups. I've signed up for a free Amazon Web Services account to use S3 for free storage. I installed the aws command line tool and am using it in a cron job to back up my database nightly. Here's the contents of my backup shell script.

#!/usr/bin/env bash

# /root/scripts/cron/backup_database.sh

# remember to chmod 755 this script

# dump database to the postgres user's home directory
sudo -u postgres pg_dump my_database_name > /var/lib/postgresql/my_database_name.sql

# send to s3
aws s3 cp /var/lib/postgresql/my_database_name.sql s3://myappname/backups/my_database_name.sql

# let me know by sending a stat
echo "myapp.db.backup:1|c" | nc -u -w0 {my-statsd-ip} {my-statsd-port}

I then added this bit to my crontab.

# /etc/crontab

# other entries...

30 2 * * * root . /root/.bash_profile; /root/scripts/cron/backup_database.sh > /dev/null 2>&1

So what's happening here? The first part, 30 2 * * *, specifies when the script should run. This will run the shell script every night at 2:30 am. The second part, root, specifies which user the script should run as. The next bit, . /root/.bash_profile;, says to source that file for environment variables. When installing the aws cli tool, I put my aws key id and secret in that file and exported them so they would be available to the root user. The next part is the actual script to run. The last part, > /dev/null 2>&1, specifies where the output of the script should go. This just sends the output into a black hole, because there is no output, and I wouldn't care about it even if there were.

That's all!

That wraps it up so far! I feel like I've come such a long way when it comes to server admin stuff. I'm much more confident in my web applications staying alive. The tools I've learned about are incredibly cool and useful. I hope you've learned a bit and if not maybe you have some tips for me. I'd love to hear them!

Tags: Linux