I've previously covered why I use Ubuntu; even at my day job, where I have a Windows laptop, I spend most of my time in Ubuntu.  I do this through virtual machines, and there's method to my madness.

  1. It slows things down
  2. It lets you test service failures
  3. It lets you experiment

It slows things down

This is the oddest one of all of the reasons.  Surely you want things to run as fast as possible, to get more done?! Well yes, but there's a trade-off.  If you're developing in an environment where the database and code are on the same machine, communication between them is going to be near-instant.  Great, but that isn't (or shouldn't be) how things run in production.

Your production environment is likely to have a server for code, a server for the database, a server for the cache, and several other servers and services.  These aren't all on the same machine; some of the time they might not even be in the same city.

By running those different services on separate virtual machines, you introduce some network delay (albeit small if they all share one host), as each request has to go out to a physical or virtual switch and back to fulfil the request, then return to the origin.

This delay gives you an idea of the latency added by the networking in a real data centre, and response times closer to what users will actually see.  It also means that, come demo time with management, they are going to see some delays, however slight, as things make their way through a "network".  This sets their expectations closer to reality for when things go live.
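Even before splitting services onto separate virtual machines, you can get a feel for the effect by adding an artificial delay in front of calls to a local service.  A minimal sketch, where the decorator name, the delay value, and `query_database` are all my own illustrative choices, not anything from a real library:

```python
import time
from functools import wraps

def with_latency(delay_s):
    """Decorator that adds a fixed delay before a call, simulating
    the round trip to a service on another (virtual) machine."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            time.sleep(delay_s)  # stand-in for the network hop
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@with_latency(0.005)  # 5 ms: a plausible same-data-centre round trip
def query_database(sql):
    # Placeholder for a real database call.
    return f"result of: {sql}"

start = time.perf_counter()
result = query_database("SELECT 1")
elapsed = time.perf_counter() - start
print(f"{result!r} took {elapsed * 1000:.1f} ms")
```

The same idea scales up: sprinkle a few milliseconds in front of every cross-service call and your local timings start to resemble production ones.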

It lets you test service failures

Yes, this can be done on a single machine, but by having a separate machine for each service, you can take one down and see how your application reacts.  I'll use a Redis cluster as an example.  If production runs multiple Redis nodes, you can't test how things react when a single node fails while your development setup is only a single instance on one machine.

Yes, you can stop the service so it can't communicate, but that doesn't exercise a failover.  If your application talks to different instances of Redis, you can't fail one without failing the others at the same time when it's all one service running locally.
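The behaviour you're trying to exercise looks something like this.  A toy sketch with fake nodes standing in for Redis instances (the class, the `ping` method, and the node names are all invented for illustration; a real client library handles this for you):

```python
class FakeNode:
    """Stand-in for a Redis instance; with VMs, each of these would
    be a connection to a separate machine you can power off."""
    def __init__(self, name, up=True):
        self.name = name
        self.up = up

    def ping(self):
        if not self.up:
            raise ConnectionError(f"{self.name} is down")
        return True

def first_healthy(nodes):
    """Return the first node that answers a ping, mimicking a client
    failing over when its primary stops responding."""
    for node in nodes:
        try:
            node.ping()
            return node
        except ConnectionError:
            continue
    raise ConnectionError("all nodes are down")

primary = FakeNode("redis-1")
replica = FakeNode("redis-2")

assert first_healthy([primary, replica]) is primary
primary.up = False  # with VMs, this is you pulling the plug on one box
assert first_healthy([primary, replica]) is replica
```

With everything on one machine, the `primary.up = False` line is all you can simulate; with separate virtual machines, you can actually halt the box and watch the real client, timeouts and all, make that same decision.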

It lets you experiment

Ever wondered what those performance tuning techniques might actually do for you?  Or whether the latest update or release might benefit you?  What about something which you have to replace because it's no longer supported?

Running these all on one machine can cause issues when updating, especially if you need to support a live configuration as well as a development environment with different versions of services.

Put these into separate virtual machines and you can swap them out, change them, upgrade them, reconfigure them, and generally check things over without breaking the stability of your machine.

Want to try some configuration changes? Clone the service, apply the changes and test.  If it doesn't work, bin the clone and try again.  You don't lose the stability you had, as you can just spin that up again and continue.  Need to upgrade a major version of something? Create a new machine or upgrade a clone of an existing one.  You can easily bin the results if they don't work.

Ultimately, working this way gives you a level of security and safety to try multiple things.  It's like having a data centre on your own machine (assuming you have sufficient resources to do so).

What about containers?

The same ideas hold.  You can run containers with lower overheads than several virtual machines, and the principle is the same: you get the ability to swap components in and out without worrying about breaking your whole system and your ability to work, and it's easy to roll back to a stable configuration.
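As a sketch of what that separation might look like with containers, here is a minimal Compose-style file.  The service names and image tags are illustrative only, not a recommendation:

```yaml
# Each service runs in its own container, so each can be stopped,
# swapped, or upgraded independently - the container version of the
# one-VM-per-service setup above.
services:
  app:
    build: .
    depends_on: [db, cache]
  db:
    image: postgres:16
  cache:
    image: redis:7
```

Stopping the `cache` container while the others keep running gives you the same failure-testing ability described earlier, with far less overhead than full virtual machines.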

Ideally, any development environment will be as close to a production environment as possible.  Doing so will save a lot of headaches come actual deployment to either test or production environments.