Getting out of the datacenter business

This year I really started to feel the effects of service-creep in my company.

We’re a relatively small organization with a very high volume of customers and workload relative to our headcount, and because of the industry we operate in (transportation regulation and logistics) there are no “off the shelf” line-of-business solutions we can buy, so we make, eat, and serve our own dog food.

Because of that, historically we’ve hosted our own solutions and our canned user tools in the building:

  • Hypervisors
    • Replication, backups, networking
    • VDI
  • Databases – Production and Development builds
    • SQL Server
    • MySQL
    • Postgres
  • 3CX Phone System
  • Web Servers – Production and Development builds
    • IIS
    • Apache2
    • Nginx
  • Application Servers
    • Active Directory
    • SharePoint
    • Mattermost
    • Exchange
  • File Servers

As a day-to-day business requirement, all of this needs to run, be performant, security-hardened, and highly available. All of it falls to the same engineer, who is also managing the software development team. Oh, and he’s also authored or co-authored several of the custom applications, so he needs to be regularly patching and sniff-testing all that code.

You either scale in, or out.

Around June this year, I hit the ceiling of my silo. It’s hard for engineers to ask for help sometimes, I think, especially when they’ve been “the guy” for years. During some internal meetings between the technical team and the operations team, we started discussing what the future of the company looks like. The normal stuff was argued over: payroll, operating costs, uptime, etc.

A very pointed question was asked – “Is it time for another Jimmy?”

I didn’t have an answer.

So like any good modern IT professional, I Googled the question. I read a lot of case studies and best practices for scaling your internal IT team, with topics ranging from interview questions to credential-sharing tools and documentation. All of it seemed inappropriate for the size of company we are; frankly, all of it seemed prohibitively expensive.

If you don’t like the answer, ask a different question.

What would it look like if I scaled down my duties instead of scaling up my team? How do companies operate at huge scale? Do they just have huge teams? No, they leverage other companies’ expertise and, more importantly for my use case, other companies’ payrolls.

As it turns out, the cloud is not just for massive health care enterprises and tech startups. You can pretty easily “lift and shift” most of your applications with the right training and planning.

We don’t make money managing servers, maybe we should stop managing servers.

Pick a cloud, any cloud.

Evaluating your cloud partner is going to be a mix of cost analysis, internal needs, internal SLOs, and in some cases even personal brand preferences. That’s a bit out of scope for this post, so to skip ahead a little: we settled on Google Cloud.

The double bubble.

When you first start your migration skyward, it’s important to plan for a bit of double paying. Initially you’ll still have all your local resources available to host your services, and that’s going to make it feel like you’re paying twice for the same service. You kind of are: you’ll likely have the same power requirements, the same maintenance tasks, and the same network requirements locally.

This changes as your migration continues forward: the more you lift and shift, the less you have to manage, and the less you have to manage, the more you can turn off onsite. This was and is supremely motivating for me. I’m already noticing changes in my workday; I’m finding time to get caught up on documentation, and whole hours of the day where I’m doing nothing but planning the next application move.

Hybrid roles for hybrid futures.

One of the first services I turned on was a site-to-site VPN interconnect between Google and my building. We’re going to have to be a hybrid deployment for the foreseeable future, running most of our applications in the cloud and keeping our Active Directory and file servers on premises. This also lets me decouple the front-end services from the back-end services and lift them independently of each other.
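
For reference, a minimal sketch of what standing up a classic Cloud VPN tunnel with gcloud can look like is below. The resource names, region, shared secret, peer address, and on-prem subnet here are all made up for illustration; the matching IPsec config on your on-prem firewall will depend entirely on the vendor.

# Reserve a static IP and create the cloud-side VPN gateway (illustrative names and region).
gcloud compute addresses create office-vpn-ip --region=us-central1
gcloud compute target-vpn-gateways create office-vpn-gw --network=default --region=us-central1

# Forwarding rules so ESP and IKE traffic reach the gateway.
gcloud compute forwarding-rules create office-vpn-esp --region=us-central1 \
  --ip-protocol=ESP --address=office-vpn-ip --target-vpn-gateway=office-vpn-gw
gcloud compute forwarding-rules create office-vpn-udp500 --region=us-central1 \
  --ip-protocol=UDP --ports=500 --address=office-vpn-ip --target-vpn-gateway=office-vpn-gw
gcloud compute forwarding-rules create office-vpn-udp4500 --region=us-central1 \
  --ip-protocol=UDP --ports=4500 --address=office-vpn-ip --target-vpn-gateway=office-vpn-gw

# The tunnel itself, pointed at the public IP of the on-prem firewall.
gcloud compute vpn-tunnels create office-tunnel-1 --region=us-central1 \
  --peer-address=203.0.113.10 --shared-secret=REPLACE_ME --ike-version=2 \
  --target-vpn-gateway=office-vpn-gw \
  --local-traffic-selector=0.0.0.0/0 --remote-traffic-selector=192.168.1.0/24

# Route traffic destined for the office subnet through the tunnel.
gcloud compute routes create route-to-office --network=default \
  --destination-range=192.168.1.0/24 \
  --next-hop-vpn-tunnel=office-tunnel-1 --next-hop-vpn-tunnel-region=us-central1

After that, firewall rules on both ends still have to allow whatever traffic you actually care about (SMB, LDAP, and so on).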

The next elephant I am going to start eating is moving SQL Server to Cloud SQL; the SQL Server offering supposedly comes out of beta in 2020. Being able to get out of SQL Server administration will be a dream, especially considering our DB size is over 100 GB.
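
When the time comes, I expect the move to look roughly like the sketch below. The instance name, sizing, bucket path, and database name are placeholders, and the exact flags may shift by the time the SQL Server offering is generally available.

# Create a Cloud SQL for SQL Server instance (illustrative name and sizing).
gcloud sql instances create prod-mssql \
  --database-version=SQLSERVER_2017_STANDARD \
  --cpu=4 --memory=16GB --region=us-central1 \
  --root-password=REPLACE_ME

# Import an existing database from a .bak file staged in a Cloud Storage bucket.
# The instance's service account needs read access to the bucket first.
gcloud sql import bak prod-mssql gs://my-migration-bucket/app-db.bak --database=AppDb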

Much like the pursuit of uptime, reclaiming my workweek is an ongoing project; it will probably never be perfect.
I will continue to post updates on my company’s progress, but I can’t see a future where we come back to on-prem. The peace of mind alone has been huge for us.

Setting up Let’s Encrypt and ConnectWise ScreenConnect

As a side effect of recently becoming a Google Certified Associate Cloud Engineer, I’ve been making an effort to migrate and consolidate all of my company’s various web services into one space.

I couldn’t find a tutorial on how to do this, so today we’re going to talk about ConnectWise Control, also known as ScreenConnect.

Installing ScreenConnect is pretty simple; ConnectWise offers basically fire-and-forget scripts to do so, so the installation itself is outside the scope of this tutorial.
For our example, our instance is running Debian 9 in Google Compute Engine with HTTP and HTTPS traffic enabled.
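
If you’re starting from scratch, spinning up a comparable VM looks something like the following. The instance name, zone, and machine type are placeholders; the http-server and https-server tags simply attach the same default firewall rules the “Allow HTTP/HTTPS traffic” checkboxes do in the console.

# Create a small Debian 9 VM with the default HTTP/HTTPS firewall tags (illustrative values).
gcloud compute instances create screenconnect-01 \
  --zone=us-central1-a --machine-type=n1-standard-1 \
  --image-family=debian-9 --image-project=debian-cloud \
  --tags=http-server,https-server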

  • Step 1: Install ScreenConnect on Debian.
  • Step 2: Set up your DNS records to resolve something.yourdomain.com to your new ScreenConnect install (a Cloud DNS example follows below).

As a note, something.yourdomain.com will not actually serve ScreenConnect yet; by default, you can find/use your install at something.yourdomain.com:8040.
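
If your zone happens to live in Cloud DNS, adding that record can look roughly like this; the zone name and IP address are placeholders, and an A record at any other DNS provider works just as well.

# Add an A record for the proxy in an existing Cloud DNS zone (illustrative zone and IP).
gcloud dns record-sets transaction start --zone=yourdomain-zone
gcloud dns record-sets transaction add --zone=yourdomain-zone \
  --name="something.yourdomain.com." --ttl=300 --type=A "203.0.113.20"
gcloud dns record-sets transaction execute --zone=yourdomain-zone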

Here’s where the magic happens: we’re going to use nginx as a reverse proxy in front of something.yourdomain.com:8040.

  • Step 3: Install certbot, certbot’s nginx plugin, and nginx.

This is done by running the following commands

sudo apt install certbot
sudo apt install python-certbot-nginx
sudo apt install nginx
  • Step 4: Adjust nginx’s default site. We’re going to delete it, add a new blank one, and then paste in our config.

To do this run the following commands

sudo rm /etc/nginx/sites-available/default
sudo touch /etc/nginx/sites-available/default
sudo nano /etc/nginx/sites-available/default

Nano will open an in-terminal text editor. Paste in the following config, replacing something.yourdomain.com with your actual DNS record. Note that this block only listens on port 80 for now; certbot will add the SSL configuration for us in the final step.

server {
  listen 80;
  server_name something.yourdomain.com;
  server_tokens off;

  location / {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-NginX-Proxy true;
    proxy_pass http://127.0.0.1:8040;
    proxy_redirect off;
  }
}

Pressing CTRL+X and then Y when prompted will exit nano and save your changes.
Once your default file is updated, reload the nginx service by running

sudo systemctl reload nginx

The last step is to tell certbot to go out, get an SSL certificate from Let’s Encrypt, and set it up for something.yourdomain.com. Certbot will add the 443 listener and certificate paths to the server block we just created, and can optionally redirect HTTP to HTTPS for you.

sudo certbot --nginx -d something.yourdomain.com

That’s it! All done. Navigate to https://something.yourdomain.com to see a fully functioning ScreenConnect setup with SSL.
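
If you want a quick sanity check from the terminal before opening a browser, something like this works (the exact headers you see back will vary):

# Confirm nginx is answering over HTTPS and proxying to ScreenConnect.
curl -I https://something.yourdomain.com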