How I set up my server

As time has passed I've grown more and more comfortable developing applications. I've formulated my own opinions about architecture, design patterns, packages, testing, etc. I finally feel like I have a fairly decent grasp on developing modern PHP applications.

However, I've recently come to the realization that I, as a web application developer, lack skills in one huge area. That area would be server management. This is my brain dump of opinions, what I've learned, and tools I've found that have made me much more confident when working with servers.

In the beginning

As a Vagrant noob at the time, I was thrilled to death when sites like PuPHPet popped up. I could just use a web GUI to pick out the pieces of my stack and voilà, my stack was up and running in a VM that emulated my production box. It was awesome... until I needed to tweak something. Then I was back where I started, scared of the command line and unsure of how to accomplish what I needed to. That was a bit of a problem.

According to the internets, I needed to learn how to use a provisioning tool like Puppet, Chef, Ansible, Salt, etc. This was how real developers got things done on their servers! After mucking around with Puppet (and hating my life) I was so frustrated. Puppet is hard. I wasn't getting anywhere and I was wasting time attempting to learn this tool when I could have been developing.

My provisioning revelation

I finally came to the conclusion that I was trying to run before I could even crawl. I needed to figure out how to do things on the server manually via the command line before attempting to use a provisioning tool, which abstracts all of that away.

That's about the time I found Servers for Hackers and Vaprobash. Servers for Hackers is an email newsletter/blog/book for developers to get more familiar with their servers. Vaprobash is a set of provisioning shell scripts to use with Vagrant to set up a VM. No fancy provisioning tool here. It is such a good resource to look over to see how to install and configure things by hand. @fideloper is doing an awesome job with both of those. Thanks Chris!

Poring over both of those was such a gigantic leap forward in making me feel comfortable in the command line. I would use Vagrant to get a fresh VM up with nothing installed. I would then reference Vaprobash and other resources I found to install and configure software from the command line. If I screwed something up beyond repair I wasn't worried at all. I'd just destroy the VM and start over. I learned SO much from doing that.

I got to the point where installing and configuring Nginx, PHP-FPM, Postgres, Redis, etc. wasn't a big deal. Yes, it took some hours getting familiar with them, but now I know exactly how to work with them. I was able to get a real server up and running on DigitalOcean. I even learned how to use both Capifony and Rocketeer to deploy my code.

Reliability with Monit

You'd think at this point I'd be feeling pretty damn good. Well, I did and I didn't. What happens if any of my services decide to stop? I'd be in a bit of a pickle if nginx just stopped or php-fpm got killed. The other part of that insecurity was wondering how I would even know if one of those services stopped. Finding out via emails from my app's users didn't sit well with me at all.

After doing a bit of searching I came across Monit. Monit is a "utility for managing and monitoring Unix systems". It's exactly what I needed to ease my nerves. It will sit there and poll services every so often to determine whether each service is running. If one isn't, it will attempt to restart the service automatically. It was also dead simple to install and start monitoring services. I set it up to monitor nginx, php-fpm, postgres, and redis. You can even configure it to send you email notifications when a service stops, when a restart succeeds, when a service's PID changes, and more. I referenced this page and configured it to use my Mailgun account to send me emails. I felt so much more at ease after learning about and using Monit.

Here's how simple Monit is to configure. This keeps nginx running.

# /etc/monit/conf.d/nginx.conf

check process nginx with pidfile /var/run/nginx.pid
    start program = "/etc/init.d/nginx start"
    stop program = "/etc/init.d/nginx stop"

There are a ton of examples illustrating how to get started monitoring various services.

Tip! To use the monit status command you need to have configured Monit's web service. You can just set both the "use address" and "allow" addresses to "localhost". See this gist.
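
A minimal version of that configuration, assuming Monit's default port of 2812, looks like this:

# /etc/monit/monitrc

set httpd port 2812 and
    use address localhost  # only listen locally
    allow localhost        # only allow connections from localhost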

Visualizing health with Graphite and StatsD

Everything was running smoothly at that point, but I still had no idea how to monitor the health of my server and application. I didn't even know what "healthy" meant. How much RAM and CPU usage is normal? How close are my disks to being filled up? How much memory is PHP using in a typical HTTP request?

I needed a way to keep track of these things. Luckily, I already knew about Graphite and StatsD. Graphite is a Python web application. You can send it any kind of stat or metric you can think of over the network, it will store it, and then you can visualize those stats by building graphs. Graphite is also incredibly cool because you don't need to do any setup for each statistic you want to send. You just start sending that stat and you're good to go. There's no configuration on a per-stat basis.

Instead of sending stats directly to Graphite, you can send them via UDP to StatsD, a Node.js daemon built by the folks at Etsy. Because of the nature of UDP, when your web application sends stats to your StatsD daemon, it doesn't wait for a response. This is fantastic. It means you can send as many stats and track as many things as you want without worrying about your code slowing down. It's brilliant.
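
To give a feel for how cheap this is on the application side, here's a minimal sketch of firing a stat from PHP. The host and stat name are placeholders; 8125 is StatsD's default port.

<?php

// open a UDP "socket" to StatsD (fire-and-forget; nothing blocks)
$socket = @fsockopen('udp://my-statsd-ip', 8125, $errno, $errstr);

if ($socket !== false) {
    // StatsD's line format is "name:value|type"; "ms" marks a timing stat
    fwrite($socket, 'myapp.response_time:250|ms');
    fclose($socket);
}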

After becoming comfortable in the command line, I came up with a provisioning script a while ago to install Graphite and StatsD in a local VM to play around with. However, trying to follow the same steps on a newer flavor of Linux led to a miserable failure. Luckily I found a guide on installing Graphite and StatsD on the version of Linux I was using (Ubuntu 14.04). Following it was so much easier than what I went through when I was coming up with my provisioning script.

I got all of this up on a 512MB DigitalOcean droplet and was using Monit to ensure the required services stayed up... except they weren't. I found that some services kept dying because they were running out of RAM. I had been meaning to learn how to use DigitalOcean's snapshots and this was a perfect time. I created a 1GB droplet and have been good ever since. Spending $10/month to be able to track statistics for all of my projects is money well spent in my opinion.

Collecting stats with collectd

In the Graphite installation guide you'll notice a section on collectd. Collectd is a daemon that collects metrics from your server every so often. You install it on your application server. Using the plugins it ships with, I configured it to gather the following stats: RAM usage, CPU usage, disk space, nginx load, and system load. I also made use of the postgres plugin to query my database and report the counts of certain tables. I'm reporting how many users and other application-specific records exist. Collectd also comes with a plugin for Graphite. You just tell it the IP, port, and a few other settings, and just like that I was reporting the health of my server.
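
For reference, pointing collectd at Graphite looks something like this. This is a sketch using the write_graphite plugin that ships with collectd; the host is a placeholder.

# /etc/collectd/collectd.conf

<Plugin write_graphite>
    <Node "graphite">
        Host "my-graphite-ip"   # placeholder: your Graphite server
        Port "2003"             # carbon's default plaintext port
        Prefix "collectd."
    </Node>
</Plugin>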

I ran into a few issues when I was configuring collectd though. When reporting on disk space I needed to figure out which drives I had available. I used the df command to see that I had a /dev/vda drive. I configured the plugin according to the docs but wasn't getting any stats. It took me a while to figure out that I had configured it for the wrong filesystem type. I had specified "ext3" when it should have been "ext4". You can use the blkid command to show what kind of filesystem a drive has, e.g. blkid /dev/vda.
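
With the right filesystem type, the df plugin config ends up looking something like this (a sketch; your device and filesystem type may differ):

# /etc/collectd/collectd.conf

<Plugin df>
    Device "/dev/vda"   # found via df
    FSType "ext4"       # found via blkid /dev/vda
</Plugin>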

The other issue I ran into was trying to configure the postgres plugin to gather count metrics for my tables. After fooling around with it long enough, I figured out I needed to be using the "gauge" metric type in the plugin.

# /etc/collectd/collectd.conf

# other plugin configuration...

<Plugin postgresql>
    <Query users>
        Statement "SELECT count(*) as count from users;"
        <Result>
            Type gauge
            InstancePrefix "users"
            ValuesFrom "count"
        </Result>
    </Query>
    <Database your_database_name>
        Host "localhost"
        Port 5432
        User "your_database_user"
        Password "your_database_password"

        SSLMode "prefer"

        Query users
    </Database>
</Plugin>

You can define as many Query blocks as you want; just remember to reference them in the Database block. You can even use several databases. All you need to do is define another Database block.

Because collectd is constantly sending stats to my Graphite instance, I can create graphs to visualize how my server is performing over time. I now have a baseline. I now know what "healthy" means. If graphs start spiking I know there's a problem. I can also pat myself on the back when I see my database counts are growing, because that means people are using my app! :)

Slick graphs with Grafana

So hopefully I've sold you on Graphite, but it gets even better. Your graphs can be even slicker. Allow me to introduce you to Grafana, an AngularJS frontend for Graphite. You can define your graphs in Grafana and it will use Graphite's REST API to gather the metrics to render them. It stores your graph definitions in Elasticsearch. That's the only requirement Grafana has aside from a web server and a Graphite instance to get data from.

I've created two dashboards so far. One to visualize the health of my server and one for seeing the activity of my application. They're glorious.


I'm tracking RAM, CPU, Disk Space, Nginx requests, php-fpm memory used per request, and system load.


In my application dashboard, I'm tracking how many requests my app is getting, user count, response time for each request, and counts of other application database records. Although it's not depicted in the images above, I'm also sending a "release" stat in my deployment process. Once I graph those as vertical lines I'll be able to tell if a release had performance impacts.
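
A release stat like that can be as simple as a counter fired from the deploy script, along these lines (a sketch; the stat name and addresses are placeholders):

# e.g. the last step of a deploy script
echo "myapp.release:1|c" | nc -u -w0 {my-statsd-ip} {my-statsd-port}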

Regular database backups to S3

The last bit of assurance I'll touch on is database backups. I've signed up for a free Amazon Web Services account to use S3 for free storage. I installed the aws command line tool and am using it in a cron job to back up my database nightly. Here are the contents of my backup shell script.

#!/usr/bin/env bash

# /root/scripts/cron/backup_database.sh

# remember to chmod 755 this script

# dump database to the postgres user's home directory
sudo -u postgres pg_dump my_database_name > /var/lib/postgresql/my_database_name.sql

# send to s3
aws s3 cp /var/lib/postgresql/my_database_name.sql s3://myappname/backups/my_database_name.sql

# let me know by sending a stat
echo "myapp.db.backup:1|c" | nc -u -w0 {my-statsd-ip} {my-statsd-port}

I then added this bit to my crontab.

# /etc/crontab

# other entries...

30 2 * * * root . /root/.bash_profile; /root/scripts/cron/backup_database.sh > /dev/null 2>&1

So what's happening here? The first part, 30 2 * * *, specifies when the script should run. This will run the shell script every night at 2:30 am. The second part, root, specifies which user the script should run as. The next bit, . /root/.bash_profile;, sources that file so its environment variables are available. When installing the aws CLI tool, I put my AWS key id and secret in that file and exported them so they would be available to the root user. The next part is the actual script to run. The last part, > /dev/null 2>&1, specifies where the output of the script should go. This just sends the output into a black hole, because there is no output and I wouldn't care about it even if there were.

That's all!

That wraps it up so far! I feel like I've come such a long way when it comes to server admin stuff. I'm much more confident in my web applications staying alive. The tools I've learned about are incredibly cool and useful. I hope you've learned a bit and if not maybe you have some tips for me. I'd love to hear them!

Tags: Linux

Behat And Selenium In Vagrant

Even with testing frameworks like PHPUnit or PHPSpec, I've still felt like something was missing. Unit tests made me sleep a little better at night, knowing my objects were interacting the way they should be...but my tests still didn't give me the dreams I had hoped for. I felt that even though I could unit test the balls off my application, I didn't have anything in place to actually test what users would experience. Does the app actually function in the browser? Do all the forms work as expected? Do all the redirects work and go to the right places? Will the user see certain validation messages in different scenarios? Enter Behat.

Behat is a testing framework that allows you to test what your end users will experience. It's the perfect complement to PHPUnit and/or PHPSpec. I'll be honest, Behat made me nervous. There are quite a few different components that go into this testing framework: Gherkin, Mink, Goutte, Selenium2, and getting it all to work with a framework. It's pretty daunting, but so worth it. If you don't know anything about Behat I suggest watching Knp University's video. It's a great overview of what Behat can do for you.

If you're a Laracasts member, Jeffrey Way also has a good video on Behat. He doesn't go into testing with Selenium though.

This is not a Behat tutorial

This post isn't meant to sell you on the idea of Behat, nor is it a tutorial on how to use it. There are several other tutorials and videos for that. This post is about getting Behat up and running with Symfony in a Vagrant box. Even though I'm using Symfony, I'm fairly certain the steps can be modified slightly to accommodate any framework.

Prerequisites

I'm using a super simple Vagrant box that I've manually installed Apache, PHP, Composer, Postgres, and Symfony on. Here's my Vagrantfile. I've also added a line to my Mac's /etc/hosts file to point my project's URL to my Vagrant box's IP.

Check out Vaprobash and/or PuPHPet for some VM provisioning if you're not used to setting up a VM. Both are awesome resources.

Install via composer

The first thing we need to do is pull in Behat and some extensions via Composer.

{
    "require": {
        "other-dependencies": "*",
        "behat/symfony2-extension": "*",
        "behat/mink-goutte-driver": "*",
        "behat/mink-selenium2-driver": "1.2.*@dev",
        "behat/mink-extension": "*",
        "behat/mink-browserkit-driver": "*"
    }
}


Install the dependencies with composer update.

Create a "test" entry point

Next, we need to create an entry point that ensures all requests into the application run in a "test" environment.

<?php // MySymfonyProject/web/app_test.php

use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\Debug\Debug;

$loader = require_once __DIR__.'/../app/bootstrap.php.cache';
Debug::enable();

require_once __DIR__.'/../app/AppKernel.php';

$kernel = new AppKernel('test', true);
$kernel->loadClassCache();
$request = Request::createFromGlobals();
$response = $kernel->handle($request);
$response->send();
$kernel->terminate($request, $response);


Now we can visit our app in the browser in the "test" environment by going to something like http://myproject.dev/app_test.php/some_route. You should also create a test database to use with this new environment and configure Symfony to use it during testing.
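
If you're using Doctrine, that can be as simple as overriding the database name in the test environment's config. A sketch; "my_database_test" is whatever you named your test database.

# MySymfonyProject/app/config/config_test.yml

# other test configuration...

doctrine:
    dbal:
        dbname: my_database_test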

Create the behat.yml file

It's time to create Behat's config file. Create the following file in the root of your app.

# MySymfonyProject/behat.yml
default:
    extensions:
        Behat\MinkExtension\Extension:
            default_session: symfony2
            goutte: ~
            base_url: 'http://myproject.dev/app_test.php'

        Behat\Symfony2Extension\Extension:
            mink_driver: true
            kernel:
                env: test


We've configured Behat to use the Goutte driver. Goutte is a web scraping library created by the creator of Symfony. It allows you to visit URLs, click links, fill in forms, and more, all with PHP.

Init a bundle

Assuming you already have a bundle you're ready to test, you can initialize it for Behat testing with the following command: bin/behat --init @YourBundleName. This will create a Features directory in your bundle as well as a Features/Context/FeatureContext.php file which holds the context class. You'll want to change the FeatureContext class to extend Behat\MinkExtension\Context\MinkContext.
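
After that change, the class is nearly empty (a sketch; the namespace will match your own bundle):

<?php // MySymfonyProject/src/YourProject/YourBundle/Features/Context/FeatureContext.php

namespace YourProject\YourBundle\Features\Context;

use Behat\MinkExtension\Context\MinkContext;

// Extending MinkContext gives us Mink's built-in step definitions
// (visiting pages, pressing buttons, asserting text, etc.) for free.
class FeatureContext extends MinkContext
{
}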

Now, you can start putting your .feature files in this directory. You can test the features in the bundle with the following command, bin/behat @YourBundleName.

What about JavaScript?

Everything we've done so far works great for pages that don't require JavaScript. Goutte is great for this, but it has no means to test pages with JavaScript. To test pages with JavaScript you actually need a browser and something to drive the browser to do your testing. This is where Selenium comes into play. In conjunction with Behat, Selenium can actually pop open a browser and commandeer it to perform your tests. It's pretty remarkable. That assumes you've got everything installed on your host machine though, because that's where your browser lives. We're using Vagrant, and Vagrant doesn't really have any sort of display for a browser. Obviously we can't use Selenium inside a Vagrant box...right? WRONG!

It's about to get exciting.

Set up the VM to run Selenium

Before we do any of this, let's update the list of available packages, sudo apt-get update.

Let's create a directory especially for Selenium, mkdir /var/selenium. cd into that directory and download Selenium with, wget http://selenium-release.storage.googleapis.com/2.40/selenium-server-standalone-2.40.0.jar.

We've got Selenium now, awesome! Except it's a .jar and we don't have java installed, not awesome. Oh well, let's get some java! sudo apt-get install openjdk-7-jre-headless.

We need a browser now and believe it or not, we can install Firefox in our VM. Do so with, sudo apt-get install firefox.

Because we're testing browser interaction and browsers require a display, we need some way to accommodate that in our VM. We can use Xvfb for this. Xvfb is a display server that performs graphical operations in memory, all without actually displaying anything. Perfect for our VM. Install it with, sudo apt-get install xvfb.

Phew...we're done installing stuff. Woo hoo!

Reconfigure

Now that we have all the pieces in place, we need to reconfigure Behat to take advantage of our new goodies. Update your behat.yml file like below:

# MySymfonyProject/behat.yml
default:
    extensions:
        Behat\MinkExtension\Extension:
            default_session: symfony2
            goutte: ~
            base_url: 'http://myproject.dev/app_test.php'
            javascript_session: selenium2
            browser_name: firefox
            selenium2: ~

        Behat\Symfony2Extension\Extension:
            mink_driver: true
            kernel:
                env: test


Create a test feature

To actually see if this works, create a route to a controller action that uses this template. All the page does is display a button and use jQuery to attach a click handler that, when clicked, populates a div with "It works!".
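
If you don't have such a page handy, a template along these lines will do (hypothetical markup; it assumes jQuery is already loaded on the page):

{# MySymfonyProject/src/YourProject/YourBundle/Resources/views/selenium_test.html.twig #}

<button id="test-button">Button</button>
<div id="result"></div>

<script>
    // populate the div when the button is pressed
    $('#test-button').click(function () {
        $('#result').text('It works!');
    });
</script>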

Let's create a feature to test this.

# MySymfonyProject/src/YourProject/YourBundle/Features/selenium_test.feature
Feature: Selenium works
  In order to see if selenium works
  As a visitor
  I need to be able to use javascript

  @javascript
  Scenario: Testing selenium
    Given I am on "/your/route"
    And I press "Button"
    Then I should see "It works!"


Note the @javascript annotation. This, along with our new Behat configuration, tells Behat to use Selenium for this scenario. All scenarios without this annotation will continue to use Goutte.

Run it!

Before we run Behat, we need to start up Selenium. You can do that with DISPLAY=:1 xvfb-run java -jar /var/selenium/selenium-server-standalone-2.40.0.jar. I suggest creating an alias for this in a .bash_aliases file. You should eventually see something like below.

Mar 18, 2014 3:52:04 AM org.openqa.grid.selenium.GridLauncher main
INFO: Launching a standalone server
03:52:04.213 INFO - Java: Oracle Corporation 24.45-b08
03:52:04.214 INFO - OS: Linux 3.2.0-23-generic amd64
03:52:04.231 INFO - v2.40.0, with Core v2.40.0. Built from revision fbe29a9
03:52:04.356 INFO - Default driver org.openqa.selenium.ie.InternetExplorerDriver registration is skipped: registration capabilities Capabilities [{platform=WINDOWS, ensureCleanSession=true, browserName=internet explorer, version=}] does not match with current platform: LINUX
03:52:04.419 INFO - RemoteWebDriver instances should connect to: http://127.0.0.1:4444/wd/hub
03:52:04.421 INFO - Version Jetty/5.1.x
03:52:04.422 INFO - Started HttpContext[/selenium-server/driver,/selenium-server/driver]
03:52:04.423 INFO - Started HttpContext[/selenium-server,/selenium-server]
03:52:04.424 INFO - Started HttpContext[/,/]
03:52:27.029 INFO - Started org.openqa.jetty.jetty.servlet.ServletHandler@67db296e
03:52:27.030 INFO - Started HttpContext[/wd,/wd]
03:52:27.038 INFO - Started SocketListener on 0.0.0.0:4444
03:52:27.044 INFO - Started org.openqa.jetty.jetty.Server@709dd800


This takes a few seconds, so wait until you see that last line before moving on.

Now that Selenium is running, open up a new console tab in iTerm or PHPStorm and ssh back into your Vagrant box to the root of your Symfony project.

What are you waiting for!? Run Behat! bin/behat @YourBundleName

Using JavaScript in a Behat test


Your tests should pass. Pat yourself on the back!

This is a super simple example, purely for testing whether or not all the pieces are working together. However, I'm sure you can dream up some scenarios where you need to be able to dynamically add some fields to your forms and test the submission, validation, creation, updating, etc. That's now possible with this setup.

Start testing what your users are actually going to use on your development/testing VM that uses the same stack that your production code will be running on. Now you can sleep even better at night! Enjoy!

Tips

You can kill Selenium with Control + C in its terminal tab on a Mac.

I highly recommend you peruse Sylius's Behat context classes and features. I've found them immensely useful to learn how to test javascript and use Symfony's services behind the scenes.

Tags: PHP, Advanced, Symfony

What To Return From Repositories

Repositories are swell. It's a great idea to have a central place to retrieve entities (models) from. They're even better if you're interfacing them and have different implementations, like an EloquentPersonRepository. It's awesome to hide your ORM behind an interface. Your calling code likely doesn't need to know the ORM exists... but the repositories will still return <insert-your-ORM-here> models to the client code. Isn't the client code supposed to be unaware of the ORM? It doesn't make sense for your client code to do something like $person->siblings()->attach($brother) when attach() is an Eloquent method. What do you do then? Have your repositories cast everything to arrays? Convert them to instances of stdClass?

Eloquent is a very nice, clean ORM. It's super easy to work with. I think for many developers it may be their very first ORM, which is great! I say that because more than once I have seen questions come up like the following:

"Regarding repositories, the biggest issue I have with them is that their output data also needs to be abstracted in some way. It defeats the purpose if DbOrderRepository::getAll() returns an array of Eloquent objects but FileOrderRepository::getAll() returns an array of stdclass instances or associative arrays. What's the best way of preventing this? It seems like we would need framework-agnostic 'OrderDetails' and 'OrderCollection' classes but that strikes me as being overkill, and potentially a little confusing." - Laracast member

Eloquent was my introduction to ORMs and I had the exact same question. Let me illustrate a scenario with some code. I'll use the above example.

Note: I've left out some classes and interfaces in the code samples for brevity.

Our Order model

class Order extends Eloquent
{
    protected $table = 'orders';

    public function user()
    {
        return $this->belongsTo('User');
    }
}

Our Order repository interface

interface OrderRepository
{
    public function findById($id);

    public function findByOrderNumber($orderNumber);
}

A repository implementation

class EloquentOrderRepository implements OrderRepository
{
    protected $orders;

    public function __construct(Order $orders)
    {
        $this->orders = $orders;
    }

    public function findById($id)
    {
        return $this->orders->find($id);
    }

    public function findByOrderNumber($orderNumber)
    {
        return $this->orders->whereOrderNumber($orderNumber)->first();
    }
}

That's lookin' good. Let's create a controller that has an action to transfer an Order to a User.

Our Controller

class OrderController extends BaseController
{
    protected $users;
    protected $orders;
    protected $mailer;

    public function __construct(
        UserRepository $users,
        OrderRepository $orders,
        OrderMailer $mailer
    ) {
        $this->users = $users;
        $this->orders = $orders;
        $this->mailer = $mailer;
    }

    public function transferOrder($userId, $orderId)
    {
        $user = $this->users->findById($userId);
        $order = $this->orders->findById($orderId);

        $order->user()->associate($user);
        $order->save();

        $this->mailer->sendTransferNotification($user->email, $order->orderNumber);
    }
}

Do you see the problem? Part of the reason we created repositories was to hide Eloquent, yet we're using Eloquent's associate() method on an Eloquent BelongsTo object. We're also using Eloquent's magic __get() and __set() methods for the mailer. Our controller obviously knows about our ORM. Hmm...

Solution

Everyone seems to be telling you to interface all the things, yet somehow models have been flying under the radar. It's been causing confusion about what in the world your repositories should return.

Eloquent has made it super easy to work with models through the use of the magic methods, __get() and __set(). Interfacing your Eloquent models doesn't even enter your mind! Damn you Taylor for making it so easy! (kidding!)

If you really want to abstract the ORM, you must interface your models! You'll also want to stop using the save() method on your models outside of your repositories. This is only seen in ActiveRecord implementations.

Let's refactor.

Interface all the things!

interface Order
{
    public function setUser(User $user);

    public function getUser();

    public function setOrderNumber($orderNumber);

    public function getOrderNumber();
}

interface OrderRepository
{
    public function findById($id);

    public function findByOrderNumber($orderNumber);

    public function save(Order $order);
}

class EloquentOrder extends Eloquent implements Order
{
    protected $table = 'orders';

    public function setUser(User $user)
    {
        $this->user()->associate($user);
    }

    public function getUser()
    {
        return $this->user;
    }

    public function setOrderNumber($orderNumber)
    {
        $this->orderNumber = $orderNumber;
    }

    public function getOrderNumber()
    {
        return $this->orderNumber;
    }

    private function user()
    {
        return $this->belongsTo('User');
    }
}

Our new controller

class OrderController extends BaseController
{
    protected $users;
    protected $orders;
    protected $dispatcher;

    public function __construct(
        UserRepository $users, 
        OrderRepository $orders,
        Dispatcher $dispatcher
    ) {
        $this->users = $users;
        $this->orders = $orders;
        $this->dispatcher = $dispatcher;
    }

    public function transferOrder($userId, $orderId)
    {
        $user = $this->users->findById($userId);
        $order = $this->orders->findById($orderId);

        $order->setUser($user);

        $this->orders->save($order);

        $this->dispatcher->fire(OrderEvents::ORDER_TRANSFERED, [$user, $order]);
    }
}

Now our controller really has no idea about Eloquent or any ORM. Interfacing models gives us a couple of nice benefits apart from abstraction. Mocking models in tests becomes trivial, and if you're using an IDE like PHPStorm, you get some super helpful code completion! You won't always have to remember whether you need to attach() or associate() a model to another model. Hide that stuff behind your interfaced method.
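
For example, mocking against the Order interface in a PHPUnit test might look like this (a sketch using PHPUnit's mock builder):

// in a PHPUnit test case: mock the interfaces, not the Eloquent classes
$user  = $this->getMock('User');
$order = $this->getMock('Order');

// assert that the code under test wires the two together
$order->expects($this->once())
      ->method('setUser')
      ->with($user);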

You might also have noticed that we made the EloquentOrder::user() method private. This is Eloquent-related. Your client code can't use it anymore! That's good.

You probably also noticed I added a dispatcher and am firing an event in the controller. You should probably use the mailer in an event listener, which can make use of your new User and Order interfaces.

Considerations

Things aren't all rosy though. Sometimes things get a bit awkward when accommodating both ORM implementations. For example, say you went the other way and did $user->addOrder($order). In Eloquent that would update and save the model right away; not so in Doctrine. You would still need to use the UserRepository::save() method for that change to actually get sent to the database. This is a price you sometimes pay for accommodating both ORM implementations. I suggest treating relationships like normal model attributes and always using the repositories to persist the changes. Eloquent is smart enough to know if the model is actually dirty and whether or not to perform any database calls. For example:

class OrderController extends BaseController
{
    // other code

    public function transferOrder($userId, $orderId)
    {
        $user = $this->users->findById($userId);
        $order = $this->orders->findById($orderId);

        $user->addOrder($order);

        $this->users->save($user); // In EloquentUserRepository, only save if $user->isDirty() is true

        $this->dispatcher->fire(OrderEvents::ORDER_TRANSFERED, [$user, $order]);
    }
}
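
That dirty check from the comment above could live in the repository, something like this (a sketch; it assumes the Eloquent model sitting behind the User interface):

class EloquentUserRepository implements UserRepository
{
    public function save(User $user)
    {
        // only hit the database when Eloquent reports unsaved changes
        if ($user->isDirty()) {
            $user->save();
        }
    }
}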

This brings up my next point...getting to a point where you can swap out your ORM and not alter your client code is a challenge. You have to ditch the ActiveRecord mentality.

Take note! You can treat ActiveRecord models like data mapper models; you can't treat data mapper models like ActiveRecord models though. It just doesn't work.

Is all this necessary to write "good" code? Nope. It all depends on your app and priorities. Do you think you may want to swap in Doctrine2, a data mapper implementation? If so, I'd encourage you to interface your models. If you're fairly confident you won't need to swap out Eloquent, don't bother unless you like the code completion (I do!) and/or testing benefits.

It's tough, and unless you're familiar with multiple ORMs you'll probably have some work to do if/when you swap ORMs.

The best display of this I've found is the FOSUserBundle for Symfony. It can be used with both Propel (an ActiveRecord implementation) and Doctrine2 (a data mapper implementation). Check out the Models, Propel, and Doctrine directories to see what I mean.

All that said, you must decide for yourself if this abstraction is worth it for your project. It isn't always!

Tags: PHP, Intermediate, Laravel