Deploying Kanboard.

Kanboard is a self-hosted pseudo-Trello clone that supports a lot of interesting features; most notable for me are the Mattermost integration and the Active Directory / LDAP integration.

In this post we’re going to walk through my process of setting up Kanboard on a Debian 9 Google Cloud Compute Engine instance, but this should work on most Debian VMs. It assumes your VM can talk to your AD controller and that you have an incoming webhook set up for Mattermost.

First we want to install Apache and PHP by running the following commands.

#check for updates 
apt update
#go get apache and a whole bunch of PHP stuff.
apt install -y apache2 libapache2-mod-php7.0 php7.0-cli php7.0-mbstring \
    php7.0-sqlite3 php7.0-opcache php7.0-json php7.0-mysql php7.0-pgsql \
    php7.0-ldap php7.0-gd php7.0-xml
#make sure Apache starts at boot (add postgresql here too if you install it locally)
systemctl enable apache2
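Before going further, it can be worth a quick sanity check that the PHP extensions Kanboard relies on actually loaded; this is just a sketch, filtering `php -m` for the modules installed above:

```shell
# list loaded PHP modules and keep the ones Kanboard relies on
php -m | grep -Ei 'ldap|mbstring|sqlite3|gd|xml'
```

If any of those names are missing from the output, recheck the apt install step.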

Next we’ll go out and grab Kanboard 1.2.5, unpack it, and move it to the appropriate directory so Apache can serve it. Then we’ll adjust the permissions on the “data” folder so that Kanboard can actually write to it.

Run the following commands.

#specific version you want
version=1.2.5
#download that version
wget https://github.com/kanboard/kanboard/archive/v$version.tar.gz
#unpack to /var/www
tar xzvf v$version.tar.gz -C /var/www/
#fix /data permissions
chown -R www-data:www-data /var/www/kanboard-$version/data
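Because the unpacked folder is versioned (kanboard-1.2.5), one way to get the stable http://yourserver/kanboard URL used below is to symlink it into Apache’s docroot. This is a sketch assuming the Debian default docroot of /var/www/html; adjust the paths if yours differ:

```shell
# expose the versioned folder at a stable /kanboard path under the docroot
version=1.2.5
ln -s /var/www/kanboard-$version /var/www/html/kanboard
```

Upgrading later is then just unpacking the new version and repointing the symlink.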

After a first install of software I like to reboot the box just to be sure Apache auto-starts and the services come up in the appropriate order. This is an optional step, and really a training scar from working on so many Windows deployments.

Next is to actually log into your Kanboard installation. By default it’s going to be at http://yourserveraddressorip/kanboard
Log in with username: admin and password: admin, then
immediately go and change this password.

Navigate to the plugins screen. If you get a warning about not being able to install plugins from the web interface, you’ll need to edit the config.php file in your Kanboard directory (/var/www/kanboard-1.2.5 in our case) to include:

// Enable/Disable plugin installer
define('PLUGIN_INSTALLER', true);

After making that change, if you still receive the warning you’re likely missing the php-zip module; installing that is outside the scope of this post.

While we’re in the config.php file, we’re going to go ahead and make the changes necessary for AD/LDAP authentication.
We’re going to use proxy mode.

// Enable LDAP authentication (false by default)
define('LDAP_AUTH', true);
// Tell it what kind of LDAP bind to use and how to connect
define('LDAP_BIND_TYPE', 'proxy');
define('LDAP_USERNAME', 'administrator@yourdomain.local');
define('LDAP_PASSWORD', 'this accounts domain password');
// LDAP server hostname 
define('LDAP_SERVER', 'hostname.yourdomain.local');
// LDAP properties 
define('LDAP_USER_BASE_DN', 'CN=Users,DC=yourdomain,DC=local'); 
define('LDAP_USER_FILTER', '(&(objectClass=user)(sAMAccountName=%s))');

LDAP properties can be confusing. The user filter should pretty much be left alone for Active Directory; LDAP_USER_BASE_DN will vary depending on your folder structure in AD.
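If the bind isn’t working, it can help to test the same credentials and filter from the VM itself using ldapsearch (from the ldap-utils package). This is a sketch: the hostname, bind account, base DN, and the jsmith account below are placeholders, so substitute your own values, and LDAP_BIND_PW is assumed to hold the proxy account’s password.

```shell
# build the same filter Kanboard uses, substituting a real account name
filter=$(printf '(&(objectClass=user)(sAMAccountName=%s))' jsmith)
# bind as the proxy account and search the same base DN config.php points at
ldapsearch -x -H ldap://hostname.yourdomain.local \
  -D 'administrator@yourdomain.local' -w "$LDAP_BIND_PW" \
  -b 'CN=Users,DC=yourdomain,DC=local' "$filter"
```

If ldapsearch returns the user object here but Kanboard still fails, the problem is in config.php rather than on the AD side.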

Save those changes to config.php and consider restarting Apache (systemctl restart apache2) to make sure the app has loaded the new config.

Next we’re going to go back to the plugins screen and install the Mattermost plugin. This plugin is particularly useful to my company because we’re going to use Kanboard as an almost-CRM tool, with notifications to the team when a task is moved from one point in the pipeline to another. Currently we use email for this, and it’s a communication nightmare.

Basically you click install, and then give it your Mattermost webhook.

Once it’s installed, per project you can pick a Mattermost channel you’d like Kanboard to post in by giving it the channel id in the notifications section of the project configuration.
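Before wiring it into a project, it’s worth confirming the webhook itself works. This is a sketch; the webhook URL below is a placeholder for the incoming webhook you created in Mattermost:

```shell
# post a test message through the incoming webhook
payload='{"text": "Kanboard integration test"}'
curl -s -X POST -H 'Content-Type: application/json' \
  -d "$payload" \
  'https://mattermost.yourdomain.local/hooks/your-webhook-id'
```

If the message shows up in the channel, any remaining problems are on the Kanboard side.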

That’s it! We’re done! You have a brand new Kanboard deployment, using LDAP so your user base doesn’t have to remember any new credentials, and with Mattermost integration to help better gel together your project management and communication tools.

Getting out of the datacenter business

This year I really started to feel the effects of service-creep in my company.

We’re a relatively small organization with a very high customer and workload volume relative to our headcount, and due to the industry we operate in (transportation regulation and logistics) there are no “off the shelf” line-of-business solutions we can buy, so we make, eat, and serve our own dog food.

Because of that, historically we’ve hosted our own solutions and our canned user tools in the building.

  • Hypervisors
    • Replication, backups, networking
    • VDI
  • Databases – Production and Development builds
    • SQL Server
MySQL
    • Postgres
  • 3cx Phone System
  • Web Servers – Production and Development builds
    • IIS
    • Apache2
    • Nginx
  • Application Servers
    • Active Directory
SharePoint
    • Mattermost
    • Exchange
  • File Servers

As a day-to-day business requirement, all of this needs to run, be performant, security hardened, and highly available. All of it needs to be done by the same engineer, while also managing the software development team. Oh! And he’s also authored / co-authored several of the custom applications and needs to be regularly patching and sniff-testing all that code.

You either scale in, or out.

Around June this year, I hit the ceiling of my silo. It’s hard for engineers to ask for help sometimes, I think, especially when they’ve been “the guy” for years. During some internal meetings between the technical team and the operations team, we started discussing what the future of the company looks like. The normal stuff was argued over: payroll, operating cost, uptime, etc.

A very pointed question was asked – “Is it time for another Jimmy?”

I didn’t have an answer.

So like any good modern IT professional, I Googled the question. I read a lot of case studies and best practices for scaling your internal IT team; topics ranged from interview questions to credential-sharing tools to documentation. All of it seemed inappropriate for a company our size, and frankly, all of it seemed prohibitively expensive.

If you don’t like the answer, ask a different question.

What would it look like if I scaled down my duties, instead of scaling up my team? How do companies operate at huge scale? Do they just have huge teams? They leverage other companies’ expertise and, more importantly for my use case, other companies’ payrolls.

As it turns out, the cloud is not just for massive health care enterprises and tech startups. You can pretty easily “lift and shift” most of your applications with the right training and planning.

We don’t make money managing servers, maybe we should stop managing servers.

Pick a cloud, any cloud.

Evaluating your cloud partner is going to be a mix of cost analysis, internal needs, internal SLOs, and in some cases even personal brand preferences. That’s a bit out of scope for this post, so to skip ahead a little: we settled on Google Cloud.

The double bubble.

When you first start your migration skyward, it’s important to plan for a bit of double paying. Initially you’ll still have all your local resources available to host your services, and that’s going to make it feel like you’re paying twice for the same service. You kind of are: you’ll likely have the same power requirements, the same maintenance tasks, and the same network requirements locally.

This changes as your migration continues: the more you lift and shift, the less you have to manage, and the less you have to manage, the more you can turn off onsite. This was, and is, supremely motivating for me. I’m already noticing changes in my workday; I’m finding time to get caught up on documentation, and whole hours of the day where I’m doing nothing but planning the next application move.

Hybrid roles for hybrid futures.

One of the first services I turned on was a site-to-site VPN interconnect between Google and my building. We’re going to be a hybrid deployment for the foreseeable future, running most of our applications in the cloud and running our Active Directory and file servers on premises. This also lets me decouple the front-end services from the back-end services and lift them independently of each other.

The next elephant I’m going to start eating is moving SQL Server to Cloud SQL; the MSSQL offering supposedly comes out of beta in 2020. Being able to get out of SQL Server administration will be a dream, especially considering our DB size is over 100 GB.

Much like the pursuit of uptime, reclaiming my workweek is an ongoing project; it will probably never be perfect.
I will continue to post updates on my company’s progress, but I can’t see a future where we come back to on-prem. The peace of mind alone has been huge for us.

Setting up Let’s Encrypt and ConnectWise ScreenConnect

As a side effect of recently becoming a Google Certified Associate Cloud Engineer, I’ve been making an effort to migrate and consolidate all of my company’s various web services into one space.

I couldn’t find a tutorial on how to do this, so today we’re going to talk about ConnectWise Control, a.k.a. ScreenConnect.

Installing ScreenConnect is pretty simple; ConnectWise offers basically fire-and-forget scripts to do so, and the installation itself is outside the scope of this tutorial.
For our example, our instance is running Debian 9 in Google Compute Engine with HTTP and HTTPS traffic enabled.

  • Step 1: Install ScreenConnect on Debian.
  • Step 2: Set up your DNS records to resolve something.yourdomain.com to your new ScreenConnect install.

As a note, something.yourdomain.com will not actually serve ScreenConnect directly; by default you can find/use your install at something.yourdomain.com:8040.
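If that :8040 URL doesn’t load, a quick way to tell whether ScreenConnect is actually listening (versus a firewall or DNS problem) is to check the listening sockets on the VM; a minimal sketch:

```shell
# list listening TCP sockets and keep any bound to port 8040
ss -tln | awk '$4 ~ /:8040$/'
```

No output here means the ScreenConnect service itself isn’t up; output with no browser access points at the firewall or DNS instead.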

Here’s where the magic happens: we’re going to use nginx as a reverse proxy for something.yourdomain.com:8040.

  • Step 3: Install certbot, certbot’s nginx plugin, and nginx.

This is done by running the following commands

sudo apt install certbot
sudo apt install python-certbot-nginx
sudo apt install nginx
  • Step 4: Adjust nginx’s default site. We’re going to delete it, add a new blank one, and then paste in our config.

To do this run the following commands

sudo rm /etc/nginx/sites-available/default
sudo touch /etc/nginx/sites-available/default
sudo nano /etc/nginx/sites-available/default

Nano will open an in-terminal text editor. Paste in the following configuration, replacing something.yourdomain.com with your actual DNS name.

server {
  listen 443 ssl default_server;
  server_name something.yourdomain.com;
  server_tokens off;

  ssl_certificate /etc/letsencrypt/live/something.yourdomain.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/something.yourdomain.com/privkey.pem;

  location / {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-NginX-Proxy true;
    proxy_pass http://127.0.0.1:8040;
    proxy_redirect off;
  }
}

Pressing CTRL+X and then Y when prompted will exit nano and save your changes.
Once your default file is updated, test the config and reload the nginx service by running

sudo nginx -t
sudo systemctl reload nginx

The last step is to tell certbot to go out, get a certificate from Let’s Encrypt, and set it up for something.yourdomain.com

sudo certbot --nginx -d something.yourdomain.com

That’s it! All done. Navigate to https://something.yourdomain.com to see a fully functioning ScreenConnect setup with SSL.