Deploying Kanboard.

Kanboard is a self-hosted, Trello-style kanban tool that supports a lot of interesting features; the most notable for me are Mattermost integration and Active Directory / LDAP integration.

In this post we’re going to walk through my process of setting up Kanboard on a Debian 9 Google Cloud Compute Engine instance, but this should work on most Debian VMs. It assumes your VM can talk to your AD controller and you have an incoming webhook set up for Mattermost.

First we want to install Apache and PHP by running the following commands.

#check for updates 
apt update
#go get apache and a whole bunch of PHP stuff.
apt install -y apache2 libapache2-mod-php7.0 php7.0-cli php7.0-mbstring \
    php7.0-sqlite3 php7.0-opcache php7.0-json php7.0-mysql php7.0-pgsql \
    php7.0-ldap php7.0-gd php7.0-xml
#make sure Apache starts on boot (add postgresql here too if you install the database locally)
systemctl enable apache2

Next we’ll go out and grab Kanboard 1.2.5, unpack it into a directory Apache can serve, and then adjust the permissions on the “data” folder so that Kanboard can actually write to it.

Run the following commands.

#specific version you want
version=1.2.5
#download that version
wget https://github.com/kanboard/kanboard/archive/v$version.tar.gz
#unpack to /var/www
tar xzvf v$version.tar.gz -C /var/www/
#fix /data permissions
chown -R www-data:www-data /var/www/kanboard-$version/data
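One note: the tarball unpacks to a versioned folder (/var/www/kanboard-1.2.5), while the rest of this post refers to /var/www/kanboard and a /kanboard URL. A symlink squares that up; the /var/www/html docroot below is Debian’s Apache default and an assumption on my part, so adjust if your vhost differs.

```shell
# assumption: Apache is serving the stock Debian docroot at /var/www/html
version=1.2.5
# stable path for the config.php edits later in this post
ln -s /var/www/kanboard-$version /var/www/kanboard
# makes the app reachable at http://yourserver/kanboard
ln -s /var/www/kanboard-$version /var/www/html/kanboard
```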

After a first install of software I like to reboot the box just to be sure Apache auto-starts and the services come up in the appropriate order. This is an optional step, and really a training scar from working on so many Windows deployments.

Next is to actually log into your Kanboard installation; by default it’s going to be at http://yourserveraddressorip/kanboard
Log in using username: admin password: admin
Immediately go and change this password.

Navigate to the plugins screen. If you get a warning about not being able to install plugins from the web interface, you’ll need to edit the config.php file in /var/www/kanboard to reflect:

// Enable/Disable plugin installer
define('PLUGIN_INSTALLER', true);

After making that change, if you still receive the warning you’re likely missing the php-zip module; installing that is outside the scope of this post.

While we’re in the config.php file, we’re going to go ahead and make the changes necessary for AD/LDAP authentication.
We’re going to use proxy mode.

// Enable LDAP authentication (false by default)
define('LDAP_AUTH', true);
// Tell it what kind of LDAP bind to use and how to connect
define('LDAP_BIND_TYPE', 'proxy');
define('LDAP_USERNAME', 'administrator@yourdomain.local');
define('LDAP_PASSWORD', 'this account\'s domain password');
// LDAP server hostname
define('LDAP_SERVER', 'hostname.yourdomain.local');
// LDAP properties
define('LDAP_USER_BASE_DN', 'CN=Users,DC=yourdomain,DC=local');
define('LDAP_USER_FILTER', '(&(objectClass=user)(sAMAccountName=%s))');

LDAP properties can be confusing. The user filter should pretty much be left alone for Active Directory; LDAP_USER_BASE_DN is going to vary depending on your folder structure in AD.
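If you want to sanity-check the base DN and filter before touching Kanboard, ldapsearch can run the same query from the box. This is a sketch using the placeholder values from the config above; swap in a real sAMAccountName for jdoe.

```shell
# ldap-utils provides the ldapsearch client
sudo apt install -y ldap-utils
# bind as the proxy account (-W prompts for its password) and look up one user
ldapsearch -x -H ldap://hostname.yourdomain.local \
  -D 'administrator@yourdomain.local' -W \
  -b 'CN=Users,DC=yourdomain,DC=local' \
  '(&(objectClass=user)(sAMAccountName=jdoe))' sAMAccountName
```

If that returns the expected user entry, the same DN and filter should work in config.php.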

Save those changes to config.php, and consider restarting Apache to make sure the app has loaded the new config.
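On Debian that restart is a one-liner, assuming the stock apache2 service name:

```shell
sudo systemctl restart apache2
```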

Next we’re going to go back to the plugins screen and install the Mattermost plugin. This plugin is particularly useful to my company because we’re going to use Kanboard as an almost-CRM tool, with notifications to the team when a task is moved from one point in the pipeline to another; currently we use email for this and it’s a communication nightmare.

Basically you click Install, and then give it your Mattermost webhook.

Once it’s installed, you can pick, per project, a Mattermost channel you’d like Kanboard to post in by giving it the channel ID in the notifications section of the project configuration.

That’s it! We’re done! You have a brand new Kanboard deployment, using LDAP so your user base doesn’t have to remember any new credentials, and with Mattermost integration to help better gel together your project management and communication tools.

Getting out of the datacenter business

This year I really started to feel the effects of service-creep in my company.

We’re a relatively small organization with a very high customer and workload volume, and due to the industry we operate in (transportation regulation and logistics) there are no “off the shelf” line-of-business solutions we can buy, so we make, eat, and serve our own dog food.

Because of that, historically we’ve hosted our own solutions and our canned user tools in the building.

  • Hypervisors
    • Replication, backups, networking
    • VDI
  • Databases – Production and Development builds
    • SQL Server
    • MySQL
    • Postgres
  • 3cx Phone System
  • Web Servers – Production and Development builds
    • IIS
    • Apache2
    • Nginx
  • Application Servers
    • Active Directory
    • Sharepoint
    • Mattermost
    • Exchange
  • File Servers

As a day-to-day business requirement, all of this needs to run, be performant, security-hardened, and highly available. All of this needs to be done by the same engineer, while also managing the software development team. Oh! And he’s also authored / co-authored several of the custom applications and needs to be regularly patching and sniff-testing all that code.

You either scale in, or out.

Around June this year, I hit the ceiling of my silo. It’s hard for engineers to ask for help sometimes, I think. Especially when they’ve been “the guy” for years. During some internal meetings between the technical team and operations team we started discussing what the future of the company looks like. The normal stuff was argued over, payroll, operating cost, uptime, etc.

A very pointed question was asked – “Is it time for another Jimmy?”

I didn’t have an answer.

So, like any good modern IT professional, I Googled the question. I read a lot of case studies and best practices for scaling your internal IT team, on topics ranging from interview questions to credential-sharing tools and documentation. All of it seemed inappropriate for a company our size; frankly, all of it seemed prohibitively expensive.

If you don’t like the answer, ask a different question.

What would it look like if I scaled down my duties, instead of scaling up my team? How do companies operate at huge scale? Do they just have huge teams? No: they leverage other companies’ expertise and, more importantly for my use case, other companies’ payrolls.

As it turns out, the cloud is not just for massive health care enterprises and tech startups. You can pretty easily “lift and shift” most of your applications with the right training and planning.

We don’t make money managing servers, maybe we should stop managing servers.

Pick a cloud, any cloud.

Evaluating your cloud partner is going to be a mix of cost analysis, internal needs, internal SLOs, and in some cases even personal brand preferences. That’s a bit out of scope for this post, so to skip ahead a little: we settled on Google Cloud.

The double bubble.

When you first start your migration skyward, it’s important to plan for a bit of double paying. Initially you’ll still have all your local resources available to host your services, and that’s going to make it feel like you’re paying twice for the same service. You kind of are: you’ll likely have the same power requirements, the same maintenance tasks, and the same network requirements locally.

This changes as your migration continues forward: the more you lift and shift, the less you have to manage, and the less you have to manage, the more stuff you can turn off onsite. This was, and is, supremely motivating for me. I’m already noticing changes in my workday; I’m finding time to get caught up on documentation, and whole hours of the day where I’m doing nothing but planning the next application move.

Hybrid roles for hybrid futures.

One of the first services I turned on was a site-to-site VPN interconnect between Google and my building; we’re going to have to be a hybrid deployment for the foreseeable future, running most of our applications in the cloud and running our Active Directory and file servers on premises. This also lets me decouple the front-end services from the back-end services, and lift them independently of each other.

The next elephant I am going to start eating is moving SQL Server to Cloud SQL; the MSSQL offering supposedly comes out of beta in 2020. Being able to get out of SQL Server administration will be a dream, especially considering our DB size is over 100 GB.

Much like the pursuit of uptime, reclaiming my workweek is an ongoing project; it will probably never be perfect.
I will continue to post updates on my company’s progress, but I can’t see a future where we come back to on-prem; the peace of mind alone has been huge for us.

Setting up Let’s Encrypt and ConnectWise ScreenConnect

As a side effect of recently becoming a Google Certified Associate Cloud Engineer, I’ve been making an effort to migrate and consolidate all of my company’s various web services into one space.

I couldn’t find a tutorial on how to do this, so today we’re going to talk about ConnectWise Control, better known as ScreenConnect.

Installing ScreenConnect is pretty simple; ConnectWise offers basically fire-and-forget scripts to do so, and the installation itself is outside the scope of this tutorial.
For our example, our instance is running Debian 9 in Google Compute Engine with HTTP and HTTPS traffic enabled.

  • Step 1: Install ScreenConnect on Debian.
  • Step 2: Set up your DNS records to resolve something.yourdomain.com to your new ScreenConnect install.

As a note, something.yourdomain.com will not actually serve ScreenConnect directly; by default you can find/use your install at something.yourdomain.com:8040.
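Before wiring up the proxy, it’s worth confirming both halves of that: that DNS resolves, and that ScreenConnect is answering on 8040. A quick sketch using the placeholder hostname (dig ships in the dnsutils package):

```shell
# should print your instance's external IP
dig +short something.yourdomain.com
# should print an HTTP status line from the ScreenConnect web server
curl -sI http://something.yourdomain.com:8040 | head -n 1
```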

Here’s where the magic happens: we’re going to use nginx as a reverse proxy for something.yourdomain.com:8040.

  • Step 3: Install certbot, certbot’s nginx plugin, and nginx.

This is done by running the following commands

sudo apt install -y certbot python-certbot-nginx nginx
  • Step 4: Adjust nginx’s default site. We’re going to delete it, add a new blank one, and then paste in our config.

To do this, run the following commands

sudo rm /etc/nginx/sites-available/default
sudo touch /etc/nginx/sites-available/default
sudo nano /etc/nginx/sites-available/default

Nano will open an in-terminal text editor; paste in the following config, replacing something.yourdomain.com with what your actual DNS records are.

server {
    listen 443 ssl default_server;
    server_name something.yourdomain.com;
    server_tokens off;

    ssl_certificate /etc/letsencrypt/live/something.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/something.yourdomain.com/privkey.pem;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://127.0.0.1:8040;
        proxy_redirect off;
    }
}

Pressing CTRL+X and then Y when prompted will exit nano and save your changes.
One caveat: nginx will refuse to load this config until the certificate files it references actually exist, so if the reload below fails, jump ahead and run the certbot command first, then come back and reload.
Once your default file is updated, reload the nginx service by running

sudo systemctl reload nginx
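Any time you edit that file, it’s also worth letting nginx validate the syntax before you reload, so a typo doesn’t take the site down:

```shell
# test the config first; only reload if it parses cleanly
sudo nginx -t && sudo systemctl reload nginx
```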

The last step is to tell certbot to go out, get a certificate from Let’s Encrypt, and set it up for something.yourdomain.com

sudo certbot --nginx -d something.yourdomain.com

That’s it! All done. Navigate to https://something.yourdomain.com to see a fully functioning ScreenConnect setup with SSL.
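One follow-up worth doing: Let’s Encrypt certificates expire after 90 days. The Debian certbot package sets up automatic renewal for you, and you can confirm renewal will actually succeed with a dry run:

```shell
# simulates a renewal against the Let's Encrypt staging servers
sudo certbot renew --dry-run
```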

YunoHost might be the spiritual successor to Microsoft Small Business Server.

Microsoft Small Business Server (SBS) was the product everyone needed in the early 2000s and before. It was their “business in a box” offering that provided every service a small business (under 50 users) could want: email, calendar, file sharing, directory services, DNS, DHCP, web hosting. It was truly the go-to deployment for most companies.

I administered 3 different versions of SBS, and did a fresh deployment of 1 before Microsoft killed it off in favor of their new hybrid-cloud solution of Windows Server Essentials paired with Office 365.

To be clear, SBS was not a perfect product, but in my experience it was a fine, affordable, one-time-expense solution that worked for a great number of companies in my area.

So, the SBS pitch went something like this: “I can give you the ability to log in anywhere in your office, have professional email using your .com name, and share files between computers, all for a low entry cost of (WhateverTheBoxCosts) dollars.”

Easy to sell, you either want the stuff, or you don’t.

The Microsoft Essentials pitch now goes something like: “I can give you the ability to log in anywhere in your office and share files between your computers for a low entry cost of (WhateverTheBoxCosts) dollars, and if you want professional email and cloud storage, it’s between 8 and 15 dollars per month, per user, forever.”

Less appealing, right? You still have a significant upfront cost, and now there is also baked-in overhead, forever, that only swells as your company grows.

Enter YunoHost

YunoHost describes itself as “a server operating system aiming to make self-hosting accessible to as many people as possible, without taking away from the quality and reliability of the software.”

In practice, at least for my use case, YunoHost was an incredibly easy-to-deploy, rock-solid email, cloud storage, and web hosting platform. In fact, you’re reading this on a YunoHost box right now!

Deployment is very easy; they have well-written guides for deploying on hobbyist platforms like the Raspberry Pi, certain ARM boards, or business-class Debian Linux servers.

Installing on a Debian VPS can be as easy as one command, and then filling in the forms as prompted.

curl https://install.yunohost.org | bash
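If piping curl straight into bash makes you nervous (it probably should, a little), you can download the script first, read it, and then run it:

```shell
# fetch the installer, inspect it, then run it as root
curl -o install_yunohost https://install.yunohost.org
less install_yunohost
sudo bash install_yunohost
```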

That’s it, really. This platform is dead simple to deploy and is as easy to manage as any other Debian-based appliance. This is the new business in a box: it supports SSO via LDAP, IMAP mail, two webmail clients, DokuWiki, Nextcloud, XMPP-based chat services, and WordPress.

This is a platform I would like to see more clients use. Here’s the pitch.

“I can give you the ability to log in using 1 set of credentials (directory services), regardless of what computer or device you’re at. You can have as many professional email addresses and users as you have storage for, cloud storage under the same limitation, group messaging, and total control of your data, all for the price of the hardware it runs on.”

Compelling, right?

Moving to XCP-ng in my lab, possibly the datacenter.

I’d like to start with a preface: I have been a Hyper-V advocate/evangelist since 2008 R2. In my main job, all of our infrastructure runs on Hyper-V Server 2016; we’re a full Microsoft-stack house: IIS, SQL Server, AD, the whole whack. I have been drinking the Windows Kool-Aid since I started my IT career. Until very recently, it had been my experience that open source did great on the web, and that’s where it stayed for me, my clients, and my company.

In my testing/toying with XCP-ng, I am finding it very difficult to make a strong case for choosing Hyper-V over XCP-ng. In some ways they have feature parity, in others I’m finding XCP-ng to be superior.

How did we get here?

It started with a video from Lawrence Systems / PC Pickup. From my understanding, they are an IT consultancy in the Detroit area whose technical skillset focuses strongly on open-source deployments for core infrastructure (think pfSense, FreeNAS, etc.).

Unsurprisingly they have several videos and tutorials evangelizing XCP-ng and made a compelling enough case for me to deploy it on my homelab blow-up/tear-down/testing cluster.

First Impressions

Simple Installer

Unlike some Linux-based installers, XCP-ng has a very minimal, easy-to-understand, and well-documented install process. For me it was a matter of pressing Enter a few times and letting it do its thing.

Lower OS Overhead

This, honestly, was expected: because XCP-ng is Linux-based, the core OS runs very lean. Hyper-V suggests a minimum of 4 GB of RAM for the host, plus whatever else you’d want to allocate for the guests. XCP-ng suggests a minimum of 2 GB per host, but in my use I’ve never seen it consume more than 700 MB.

Management Options

In Hyper-V you basically need a Windows 10 client with RSAT tools, joined to the same domain as your host, to manage your hosts remotely. I understand that it is technically possible to do this without joining a domain, but it’s not practically sound or really scalable.

With XCP-ng I was able to manage the hosts individually or together in a single interface using a tool packaged with the hypervisor called “XCP-ng Center,” similar to Citrix’s XenCenter. This tool allows for individual user logins, per host, independent of your local machine’s credentials.

Additionally, you can install a wonderful open-source tool from the maintainers of XCP-ng called Xen Orchestra. You can try the paid, precompiled appliance by running a few shell commands on an XCP-ng host of your choice, download it and import it via XCP-ng Center, or compile it from source on your own for a fully libre experience. Deploying XO is really where an XCP-ng deployment starts to shine; think of it as an appliance similar to the UniFi Controller software. It allows for centralized, web-based management of all of your hosts, plus a ton more private-cloud features like user-based resource allocation, letting users self-service deploy VMs on your hosts without IT intervention.

Storage Flexibility

Initially I deployed it using local SRs (storage repositories). After I got comfortable using XenMotion and some of the other features, I wanted to start using XCP-ng’s version of clustering, “pooling.” To do this you need shared storage between all of your hosts, and this is where another very delightful discovery was made.

XCP-ng supports using SMB shares as SRs for VDIs, i.e. you can use any typical SMB file server as shared storage between hosts. I understand that there are much more performant options for this, but the availability of the feature makes XCP-ng almost a no-brainer for home labbers. Of course, XCP-ng also supports NFS and iSCSI for shared storage, as you’d expect for an actual production environment.
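For reference, attaching an SMB share from the host CLI looks roughly like the following. The server, share, and credentials are placeholders, and treat the device-config keys as an assumption on my part; verify them against the xe sr-create documentation for your XCP-ng version.

```shell
# sketch: create a shared SMB SR (device-config keys are assumptions, verify first)
xe sr-create name-label="SMB storage" shared=true type=smb content-type=user \
  device-config:server=//fileserver.example.local/vmstore \
  device-config:username=svc-xcp device-config:password=changeme
```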

Final Thoughts

In short, XCP-ng has really changed the way I look at my current virtualization deployments, and moving forward with smaller clients I think I’m going to run XCP-ng instead of Hyper-V, purely for the web management availability and lower overhead.

I want to deploy it on my core setup here at work, but we already have existing infrastructure built around Hyper-V, and I can’t really justify making a change for change’s sake…yet.

A Mullet Deployment

Windows in the front, Linux in the back.

I’ve been working on a pretty interesting environment and I thought you might like to hear about it; I would also love to hear what you think in the comments! I’m contracting with a non-profit charity organization that is just getting started. Currently there are three users, including the founder; they each have their own personal laptops, two running Windows 7 and one running Windows 8.1. They have pretty standard office needs, and they contacted me through a referral to see what I could do for them on their budget (which is tight).

After meeting with the founder we hit our first snag: she’s very cloudphobic, borderline fanatical about the fact that she wants to control all of the organization’s data in-house. That struck me as odd, but hey, every office is different, right? Our only other challenges are that the budget really does not allow for nice hardware, and their 501(c) status is still pending. What that translates to is that we are going to have a hard time getting equipment.

From the discovery meeting I learned that this organization requires:

  • Active Directory
  • Network storage
  • Business class email and calendar
  • VPN access
  • Web server
  • WordPress website

I also learned that our challenges are:

  • We do not have 501(c) status yet (this could take months), which means we do not benefit from companies’ non-profit pricing schedules, and it will be harder to receive donated equipment.
  • The founder requires that everything be stored locally; she wants nothing in the cloud.

We couldn’t use Windows Server 2012 Essentials because of the email requirement, and we certainly could not afford full Server 2012 plus Exchange. I ended up going with Zentyal 3.3, a Linux-based small business server that provides *close enough* equivalents that I thought would be a good fit considering all of our needs versus all of our challenges. (Added bonus: it’s free!) I purchased an HP ProLiant G7 N54L MicroServer, an additional 500 GB HDD, and 4 GB of RAM, which put us around $500 total for server hardware. For networking I just went with the router / built-in switch that the ISP provided.

Surprisingly, it all went pretty well.

Everything was very simple to set up; it reminded me of Small Business Server 2008 in a lot of ways. The Zentyal GUI just walks you through it all, and the entire build-out took me maybe 4 hours of billable time. The only custom thing I had to do was install WordPress, which is a simple thing to do on Linux, but it required me to change the management interface to listen on port 444 instead of 443. The entire build cost the client just under $3,000.00, including the website I built out for them.

So what’s the catch?

Zentyal is not all there yet. The domain acts like a Server 2000 domain, which is not necessarily a bad thing, but if you get into a situation where you need to scale up or add a Windows server it could become a problem. OpenChange is still being proven and I’m genuinely unsure how it will perform over the long haul; Outlook 2010 seemed to think it was an Exchange server, so I have high hopes! Samba 4 is not a Windows file server, which could limit our ability to use Windows-native network applications (Access, QuickBooks, etc.). There are also the obvious red flags: the primary web server is also the primary domain controller and mail server. All of the eggs are in one basket with no redundancy; maybe as funding increases and they receive their 501(c) status we can revisit this project.

How would you have handled it?

I would love to hear about some other approaches from other geeks. What would you have changed? Would you have taken this project at all?

Who am I?

Hi there. My friends call me Jay! I’m nearly 30 and I work in IT.

I’m a technical leader and project manager with experience in data driven companies. I have worked both as a system administrator and software developer, and have several mobile and web applications in production today. 

Until recently I owned and operated my own consultancy, where I served as the technical decision maker / adviser for several small businesses throughout the Frankfort area. My business’s main focus was web development and customized workflow solutions.

Lately I serve as Technical Director for my wife’s mixed media/graphic design company, and IT Director for a major transportation company.