DeployBot – Simplified DevOps – A Kubernetes SlackBot

Why are production auto-deployments a bad idea?

Most companies' DevOps teams have set up multiple environments: dev, test, stage, and production. If yours hasn't, you should be doing that today!
These environments reduce risk and help ensure your engineering team ships quality code.
You might even have hired a DevOps developer who set up a CI/CD pipeline, helped Dockerize your apps, and runs them in a Kubernetes cluster.
This is a very common setup these days; it lets developers make a code change and have it ready to test in the cloud in a matter of minutes.
This automation is a blessing in most situations: when a developer wants to test a new feature, when someone needs to QA or approve the feature, and so on.

DevOps – DeployBot

I don't like the idea of auto-deploying to production. Often we want to control how and when we deploy to production.
Maybe we want to group a bunch of features together, maybe we want to deploy new features only on Mondays, etc.
Enter DeployBot. This Slack chatbot can deploy your application when you send a message on a secure Slack channel. As simple as that.

DeployBot Slack Configuration

The first step is to set up the Slack app and channel for the bot.
Please note that this configuration is correct as of March 2020, so future changes to the Slack API might require adjustments:

  • First go to your Slack apps page https://api.slack.com/apps?new_classic_app=1 and sign in.
  • Make sure you create a Classic App, as in the link attached above. Give it your DeployBot name (something like DeployBot) and choose the workspace you would like it to run in.
  • On the App Configuration screen, under Features and Functionality, select Bots.
  • Add a Legacy Bot and give it a name and a username, for example DeployBot and deploy_bot.
  • Click on OAuth & Permissions and scroll down to "Scopes".
  • DO NOT CLICK UPDATE SCOPES!
  • Use "Add an OAuth Scope" to add the following permissions: app_mentions:read, chat:write, im:write, incoming-webhook.
  • After that, click "Install App to Workspace" (on the OAuth & Permissions page) and install your Slack bot app.
  • Take note of the OAuth token and the bot token (we will need them later).
  • Then go to your Slack client and click on Apps (at the bottom of the left-hand contact list). You should see your bot there.
  • Click on it and hover over the bot's name; make a note of the URL, as the last part is your bot's user ID.
    https://hexanow.slack.com/team/U012GGEU0HJ would mean that U012GGEU0HJ is your bot's user ID!

Now we are ready to configure the bot and install it into the workspace.

Bot Configuration and Installation

Now we need to set up the DeployBot installation in your organization.
This might require some simple DevOps skills, and perhaps some very basic Python skills.

  • Clone the project:

    git clone git@github.com:DoryZi/SlackKubeDeployBot.git

  • Edit your setup.yaml

    This file contains all your definitions. First we need to update the secrets section.
    In this section, update the AWS secret, your SlackBot OAuth token from the Slack installation above, and the Kubernetes cluster tokens (KUBE_TOKEN and KUBE_TOKEN2).
    The Kubernetes tokens are not mandatory: the bot will first try in-cluster authorization. If that fails, or if your SlackBot is running in a different cluster from the apps it deploys,
    you can use the auth tokens. The first one is the default auth token for your cluster; the second one is used if you need to override it for a particular app,
    for example if you have two different clusters you wish to run DeployBot against. You can also easily extend this setup to support multiple clusters. If you need to hire a Python developer to help you set this up, feel free to reach out to us.
    You need to base64 encode your secrets:
    echo "" | base64 -w 0

    More info on Kubernetes secrets

  • Configure environment variables and App Information

    This step involves setting up your app configuration, which is contained in the ConfigMap part of setup.yaml.
    The env variables below are pretty simple and straightforward; here is how the app config works.
    For each app you wish DeployBot to recognize you must add an APP_CONFIG entry.
    Each entry consists of the fields below (an illustrative example follows the list):

    • "app-name" – the name of your app; this is also the base name used when looking for new builds in ECR.
    • "deployment" – the name of the Kubernetes deployment running this application.
    • "container-name" – the name of the container in that Kubernetes deployment.
    • "cluster-token" – optional – a cluster token, if this deployment runs in a different cluster from the default one. This is taken from KUBE_TOKEN2.
    • "cluster-endpoint" – optional – a second cluster endpoint, if this deployment runs in a different cluster from the default one. This is taken from KUBE_ENDPOINT2.
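
    For illustration only, a single entry might look something like the JSON sketch below. This is my own example based on the field list above; the exact structure and key names are defined in the repo's setup.yaml, so check that file before copying anything.

    {
      "app-name": "my-api",
      "deployment": "my-api-deployment",
      "container-name": "my-api-container",
      "cluster-token": "<token from KUBE_TOKEN2, only if this app lives in a second cluster>",
      "cluster-endpoint": "<endpoint from KUBE_ENDPOINT2, only if this app lives in a second cluster>"
    }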

  • Apply your configuration

    kubectl apply -f setup.yaml

After all of this, your bot is ready to be used.
Feel free to create a secure channel, or message the bot directly.
It can run a few commands:
you can check for a new image in the registry (compared to what your deployment is running), check the currently running image, or deploy the latest image to production.

Extending – a DevOps engineer or Python Developer task

This app uses the Kubernetes API and the AWS API. It can easily be extended to run on GCP; however, that would require some Python and DevOps skills.
We hope this helps, and that you find DeployBot useful.

Software Engineer Screening

Finding good developers and software engineers is always hard. One of the pain points I find is the interview process. My main gripe is that the correlation between interview performance and job performance is very weak: candidates who seem amazing during the interview and screening process often turn out to be duds, while those who seemed average turn out to be stars. It's just so hard to tell who the right candidate to hire is.

As such I'd like to outline some points that can help you make the right decision and reduce the margin of error. It's important to mention that no matter how good your interview and screening process is, when you hire a full-stack or any other type of software engineer, you will never know for 100%. The only way to know if someone is a good fit is to work with them.
So if you can take someone on for one week as a contractor to try them out, that is the best option!

In this post today I’ll talk about the key points and methods that can help you mitigate some of the risk when you hire a software developer.

Structured Interview and Process

One of the key mistakes people make when hiring a full-stack developer, or any software engineer for that matter, is not having a repeatable and structured process. I cannot emphasize this enough: you need a repeatable, consistent, well-thought-out hiring process. That means you need to:

  • Think of your phone screen questions and prepare them.
  • You need to outline what technical questions you’ll ask in your coding or tech assessment and why.
  • Prepare what general HR questions you’ll ask the candidate.
  • You’ll need to figure out how many people would need to talk to the person.
  • Be able to communicate the process to the candidate early on.

These points are very important. Without them you're really taking a guess based on feelings and emotions, and you have no way to measure all candidates fairly. Here is an excellent article from the New York Times on the topic.

Technical test should simulate work

The technical test or screen serves as a quick filter to find people who potentially have the skills to be successful engineers at your company. It's only "potentially" because most of these tests are not a reflection of what real work is like. As such, I urge you to construct your tech test with that in mind: try your best to simulate work and test relevant skills. Also, when you're hiring a full-stack developer you're looking for people with both back-end and front-end experience, so keep that in mind too. Here are a few ideas of what your tests / screens can include:

  • Code a simple algo problem – choose something simple, not too complex and let the person run through it. See that the person can code. So many can’t code despite their resumes looking mighty impressive!

  • Go through a bug and trace the problem – set up either a theoretical or practical exercise (meaning a bug they have to figure out) and see how the candidate solves the issue, or at least tries to.

  • Test some front-end abilities – Code, HTML and CSS, something simple. You can even setup a react project and ask the candidate to go through it, and try to make changes, add layouts etc.

  • Ask about their work history and focus on the technical aspects, then drill down and see how much they really understand. Hiring a Node.js developer, React.js developer, or any JavaScript developer? Ask about prototypal inheritance: what it is and how it works. If you are about to hire a Python developer, ask what algorithm .sort() uses and how it works, etc.

  • Ask some architecture questions and see how they think. Cover things like memory limitations, time complexity limitations, and handling large-scale throughput; see how the person thinks and how they approach different situations.

This should give you a clearer indication of someone's abilities; either way, use a consistent and repeatable process. It's not always critical that they solve everything; it's important that they understand the problem, write clean code, and understand what they're doing. After all, at work they will have a lot more time to work on problems than in a 60-minute test.

Focus on their ability to deliver business results

Good engineers are able to bang out code that works. However, a great full-stack developer will also understand the business goal behind what they are developing. A great software engineer not only requires minimal supervision, but also builds software with business needs in mind, and can enhance or extend the software where they suspect it will be needed. Imagine you're building a shopping app with a quick search that lets you scan all the products in your shop. You've asked your developer to add a few more product categories to the drop-down. A good developer would just add those. A great one might notice that this list keeps growing and suggest pagination, or limiting the number of results returned, or adding debounce (sending the search only when you finish typing), etc. They would improve on the feature, or at least create additional TODOs and discuss them with you. Either way, they would raise those points. You can structure questions to see whether a candidate would help in this way!

This should give you some important points to note when you’re looking to hire a full-stack developer, or any other type of senior software engineer. I hope this has helped.

Till the next time!

Software Engineer Interview Mistakes – Why "Homework" Tasks Are Silly

Recruitment is hard and Interviews are complex

Recruitment is a complex and difficult matter, and interviews are far from perfect and very exhausting for companies and candidates alike.

Engineers looking for new opportunities have a great dislike for the process. It includes lots of calls, interviews, tests, and more. It’s a common feeling for many that they just took on a second job: looking for work.

Companies, on the other hand, don’t have an easier time with it. They have to sift through loads of resumes, read an insane amount of emails, and answer tons of calls. All in order to decide who they will actually interview in person. There are so many candidates that sound and look the part while many are barely qualified to make coffee in real life.

Common mistakes in interviews

I’ve been involved in hundreds—if not thousands—of these processes and worn multiple hats while doing so, and I’d like to make a few observations and important notes to candidates and companies alike.

First you have to remember: Great interviews don’t mean great hires! Both sides have to remember this, as it’s a critical point! There are many things that you will not know:

  • Will that person be hard-working?
  • Will they not give up when confronted with hard tasks?
  • Will they be able to find creative solutions?
  • Will they be a good coder or not?

There are many other things you won’t know; you’ll only know if the person has potential and how well they do at interviews!

When we interview, as candidates and as companies, we get very excited about certain opportunities (great cultural fit, amazing performance on interview tasks, everyone seems so nice, the unexpected feeling of a strong work connection, etc.). No matter how logical, measured, or obscure your personal reasoning about that candidate or company is, you won't really know what it means to work together until you actually work together.

None of the big guys do it

Google, Facebook, Twitter, and many of the big guys could easily send homework tasks to all candidates, but they don't! They spend a day or more with a candidate, run through code together, and get a sense of what that person is like. So why are you trying to re-invent the wheel? You're not going to write your own front-end framework, you'll use React or Angular, so why not also recruit the way they do?

Why homework tasks are silly and what should we do?

Way too many companies send people take-home tasks or, worse yet, some silly HackerRank test that grades people on solving a problem in a very time-limited manner.

I'm not sure who in The Valley started this and made everyone follow this detached-from-reality practice.

You're well-funded? That's no reason to assign a technical homework task. Feel free to offer it, as some candidates like it, but your best bet is to spend time with the person, solve a problem, code together, etc.

Since we agreed good interviews != good hires, then why not do your best to simulate the environment of solving a real task at work? Isn’t that what you’d want that person to do anyhow?
Run through some code problem together and get a sense for what it is like to work together.
You’ll see how a person thinks, how he/she tackles hard problems, and gain much more insight than you would from a random test or take-home task.

What is the logic behind sending some obscure test or asking someone to build software for you for free? Are you trying to miss out on good candidates? Should someone who is busy spend half a day, a day, or even more writing free code to prove that they are worthy of employment? Maybe that's okay for recent graduates, but what about people with 5–10 years' experience or more? What other profession in the world does that?

I’m a big believer in fairness, and if you ask someone to invest time then be willing to invest the same time yourself as well. While it will be more time-consuming, you will both have the chance to work through a task together and you’ll get a good sense for working with each other.

When homework tasks make sense and how to give them?

Personally, I say only if you're willing to pay that person for their time and show that you value it. Say you're a starving startup: pick a small task, offer it as a stand-alone project, and, assuming the code is good, the candidate signs over the rights and you might even use it. Then pay them for their time, except, of course, if the code is bad and they do not pass.

In the next part I’ll talk about more interviewing tips and suggestions. Stay tuned!
D.

Cost Effective Docker Jobs on Google Cloud

Recently, I wanted to run some jobs using Docker images. I'm a huge advocate of using Docker, so naturally I was going to build a Docker image, run my Python scripts, and then schedule said job to run on a configurable schedule. Doing so on AWS is pretty easy using Lambda and Step Functions; however, since this wasn't a paid gig and I wasn't able to get someone to foot the bill, enter Google Cloud!

Google Cloud Platform (GCP) is, in a way, the new kid on the block. AWS has a long history with its cloud platform and excellent customer support, whereas Google's customer service is a bit like Bigfoot: you've heard of it, some people say they've seen it, but it doesn't really exist. However, Google is still an amazing tech company: they release early and they improve their products to make them awesome (e.g. Android). And best of all, they offer $300 in free credits. So I decided to go for Google; how bad could it be?
In this post, I'll talk about how I set up Google Cloud to work for me. It took blood, sweat, and tears, but I got it working. I schedule a job to run periodically: it spins up a cluster of instances, runs the job, and shuts the cluster down! Not only is that cool (ya, I'm a geek), it's also quite cost-effective.

I will outline what I did, and even share my code with you.
Here goes:

Step 1 – Build docker image and push to Google Cloud private registry

The first step was the easiest and most trivial. It is pretty much the same as AWS.

Create a build docker image

Let’s start with creating a build image. GitLab CI allows you to use your own image as your build machine. If you’re using a different CI, I leave it to you to adjust this for your own system.

This is the Dockerfile for the build machine. It is based on a Docker image, pulls in pip, and installs gcloud.

Then I push this build image to Docker Hub. If you haven't done this before, you need to:
1) Sign up to Docker Hub at https://hub.docker.com and remember your username.

2) In the build machine folder, run docker build . -t <your-username>/build-machine
3) Run docker push <your-username>/build-machine to push it.

Create a GCP service account

You have to create a service account, give it access to the registry, and then export the key file as JSON. This is a very simple step. If you're unsure how to do it, just click through IAM & Admin: you need to create a service account, give it the right IAM role, and export the key.

Customize the CI script to push to the private registry

Once this is all done and you have your build machine, we can work on your CI script. I will show you how to do this on GitLab CI, but you can adapt it to your own environment. First create a build environment variable called CLOUDSDK_JSON and paste the contents of the JSON key you created in the previous step as its value. Then add the following .gitlab-ci.yaml file to your project.

Adjust the job-image-name to your job's Docker image name, service_account_name to the service account name you created, and the build image to the image you pushed to Docker Hub. This YAML file is geared towards a Python job, but you can change it to any other language.
I have 3 stages: build, test, and deploy.
I build and test on all branches, but only deploy on master. GitLab CI has a quirk: each step can run on a different machine, so the image from my first build step isn't kept for the deploy phase, which forced me to re-build in the deploy phase.

Once this is done, your CI system should be pushing your image to your Google private registry. Well done!

Step 2 – Running Jobs in a Temp Cluster

Here comes the tricky part. Since jobs only need to run every so often, and only for a limited period, it would be ideal to run them as a Google Cloud Function. However, those are limited to one hour and can only be written in JavaScript (AWS supports multiple languages with Lambda and Step Functions). Since I didn't want to pay for a cluster running full-time, I had to develop my own way to run jobs.

Kubernetes Services

Controlling jobs in a cluster, and the cluster itself, can be achieved using Kubernetes. This is one part of GCP that really shines: it lets you define services, jobs, and pods (collections of containers), and then run them.

To do this, I wrote a Kubernetes Service class in Python that will:
– Spin up / create a cluster.
– Launch docker containers on the cluster.
– Once jobs finish, shut down the cluster.

This is the class and constructor. The full code for this class has more configuration and env variables, and is part of the App Engine cron project. I will include the repo if you want full details on how to achieve this.

This function can control the cluster size. It can spin it up before jobs need to be run, then shut it down afterwards:

The kubernetes_job function creates containers (via an additional function that creates container objects with env variables). Containers are then part of a pod, that pod is part of a job template, and the template is part of a job spec. You can read more about it in the Kubernetes docs.

If you don't want your code to block waiting for the jobs, you can poll for completion; that is what shutdown_cluster_on_jobs_complete is for. It will shut down the cluster once there are no running jobs.

This class controls the entire job scheduling and ensures the jobs execute successfully.
It's part of an App Engine app (however, it can be used independently).
Next we need to have this script scheduled or triggered,
and that is the job of our cron scheduler task.

Cron scheduler appengine service

Sadly, Google doesn't give you an easy way to run code in the cloud; you actually have to write more code to run code (silly, right?).

The concept is that App Engine provides you with a cron web scheduler that calls your own app's endpoints at given intervals.

First, you add cron.yaml to your project and configure which endpoint to hit and at what time interval:

Then we can add handlers to kick the jobs off and to shut them down.

Lastly, we want to add a Settings class to load env-like variables from the datastore:

Note that most of the app depends on the datastore. Sadly, Google doesn't make env variables easy, but you can set them up in the datastore.
For this I added a class called Settings.

Then we just bind the route handler:

This should allow our app to spin up a cluster, launch containers, and then shut down the cluster. In my code, I also added a handler for the shutdown.

Then make sure you have gcloud installed (here is how), deploy the App Engine app using the gcloud deploy command, and you should be good to go (here is how).
While my example runs the same Docker image, just with different operations and different env variables, you can easily adjust this code to suit whatever need you might have.
Here is the full git repo: gcp-optimized-jobs
Hope you find it useful!


JS Testing Survival (mocha, chai, sinon)

This post is a simple guide to JS testing with Mocha, Chai, and Sinon on CircleCI. It will show you how to set up for testing, share some tips for good coverage, and more.
I'll cover some of the practices I use for testing JS code. They are not official best practices, but I use these concepts because I found they make it easier to get easy-to-read tests with full coverage and a very flexible setup.

This post walks through a unit test file to illustrate the different points I found helpful when composing unit test files:

Setup

Mocha is a testing framework for JS that allows you to use any assertion library you'd like; it is very commonly paired with Chai. Chai is an assertion library that works with Mocha; the Chai docs explain how Mocha and Chai work together, how to use them, and more.
One of Chai's strong points is that you can easily extend it using support libraries and plugins. We will use a few of them, so let's first set up our dependencies in our project:
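
Something along these lines should pull everything in (these are the packages listed below; pin versions as you see fit):

npm install --save-dev mocha chai chai-http chai-as-promised co-mocha sinon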

We are installing a few libraries:

  • mocha – the JS testing framework.
  • chai – the Chai assertion library; the docs have a good reference for how to use chai to assert or expect values, plus a plugin directory – a valuable resource!
  • chai-http – a Chai extension that allows us to hit HTTP endpoints during a test.
  • chai-as-promised – adds support for tests / setup that return a promise. This lets us assert / expect what the result of the promise will be. We will see this in action shortly.
  • co-mocha – a Mocha extension that allows us to use generator functions inside Mocha setup / tests. If you skip this step and try to use a generator function, the test will finish without running the yields in the test code correctly. This means you will get twilight-zone-like results, with tests passing when they should fail!
  • sinon – test mocks, spies and stubs for any JS framework. Works really well, and is very extensive.

After we install all the packages, let's create a new file and add all the required libraries to it, as follows:
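
A minimal test-file header might look something like this. The file layout and names here are my own sketch, not the original post's code; it assumes your Express (or other HTTP) server is exported from server.js:

// test/orders.test.js
const chai = require('chai');
const chaiHttp = require('chai-http');
const chaiAsPromised = require('chai-as-promised');
const sinon = require('sinon');
require('co-mocha');   // register co-mocha so tests/hooks can be generator functions (can also be done via mocha --require co-mocha)

chai.use(chaiHttp);
chai.use(chaiAsPromised);

const expect = chai.expect;
const server = require('../server');   // the exported server under test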

In this example I'm testing an Express server, but you can use any type of Node HTTP server (assuming you are testing a server). Just make sure you export the server from your main or server file, so you can require it from your test files.

Grouping tests using ‘describe’

Mocha does a great job of grouping tests. To group tests together under a subject, use the following statement:
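
For example (using a hypothetical "Orders API" subject):

describe('Orders API', function () {
  // individual it() test cases go here
});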

'describe' blocks are also easily nestable, which is great. So the following will also work:
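
describe('Orders API', function () {
  describe('GET /orders', function () {
    // tests for listing orders
  });

  describe('POST /orders', function () {
    // tests for creating orders
  });
});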

This groups them together, and if you're using something like IntelliJ or WebStorm, the output is displayed very nicely in a collapsible window.

Test hooks

Many times when running tests we need to do setup before each test or before each test suite. The way to do that is to use the testing hooks before, after, beforeEach and afterEach:
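
A sketch of the four hooks in place:

describe('Orders API', function () {
  before(function () {
    // runs once, before the first test in this block
  });

  beforeEach(function () {
    // runs before every test
  });

  afterEach(function () {
    // runs after every test
  });

  after(function () {
    // runs once, after the last test in this block
  });
});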

These hooks can also return a promise; the test framework will not continue until the promise is resolved, and will fail if it is rejected:
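
before(function () {
  // mocha waits for the returned promise before running any tests, and fails the suite if it rejects;
  // connectToTestDb is a hypothetical promise-returning setup helper
  return connectToTestDb();
});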

And since we have required co-mocha, our hooks can also run a generator function:
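
A sketch of such a hook (loadFixtures is a hypothetical promise-returning helper):

let stuffINeedInTest;

before(function* () {
  // with co-mocha the hook can yield promises and read their resolved values
  stuffINeedInTest = yield loadFixtures();
});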

I can then use stuffINeedInTest in my test files. You can also do this setup using promises, as shown above.

Hook on root level

Test hooks are awesome, but sometimes we might want some hooks to run not once per test file, but once for all our tests. Mocha does expose root-level hooks, so to achieve that we will create a new hooks file, root-level-hooks.js,
and put our hooks in there with no describe block around them:
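
For example (setupTestDatabase and teardownTestDatabase are hypothetical promise-returning helpers):

// root-level-hooks.js – note: no describe() around these, so they apply to the whole run
before(function () {
  return setupTestDatabase();
});

after(function () {
  return teardownTestDatabase();
});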

Then at the top of each test file we require this file:
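
For example (adjust the relative path to wherever you keep the file):

require('./root-level-hooks');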

This way our hooks will run once for the whole test run. This is the perfect place to load up a test DB, run some root-level setup, authenticate to the system, etc.

External System Mocking

Some systems / modules call other systems internally. For example, think of a function that processes a payment for an order. That function might need to call a payment gateway or, after the order is processed, send the shipping information to another system (for example a logistics system, or upload a file to S3). Unit tests are intended to be stand-alone and not depend on external systems. Therefore we need a way to mock those external systems, so that when the tested code reaches out to them, the test can respond on their behalf.

In our tests we will use Sinon.
Basically we will mock the calls using a test class of mocked calls that reads a response file and sends it back.
This makes the mock straightforward:
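
Here is my own illustration of the idea, not the post's exact code; the URLs and response files are hypothetical:

// http-mock.js
const sinon = require('sinon');
const fs = require('fs');

// A fake axios-like object: get/post are sinon spies wrapping fake implementations,
// so tests can both control the responses and assert which URLs were requested.
const requestMock = {
  get: sinon.spy(function (url) {
    switch (url) {
      case 'https://payments.example.com/status':
        return Promise.resolve({ status: 200, data: JSON.parse(fs.readFileSync('./test/responses/payment-status.json', 'utf8')) });
      default:
        return Promise.reject(new Error('unexpected GET: ' + url));
    }
  }),
  post: sinon.spy(function (url, body) {
    switch (url) {
      case 'https://payments.example.com/charge':
        return Promise.resolve({ status: 200, data: { charged: true } });
      default:
        return Promise.reject(new Error('unexpected POST: ' + url));
    }
  })
};

module.exports = requestMock;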

What we are doing here is creating a mock object. In this case we are mocking axios, as my server code uses it, but we can use the same construct to mock any external system.
Our request mock provides get and post methods, just like the axios library does. I'm using sinon.spy to check which URL is requested by the module code, and a switch statement to handle the different URLs. Our mock can return URLs, JSON, promises, files, or whatever is needed to successfully mock the external system.

I'm then using the before hook to register the mock as the axios module, so when the tested code calls require('axios') it receives my mock and not the node_module that actually does the HTTP request.

Then I use the after hook to disable the mock and return to normal.

Test Cases

Mocha lets us create tests very easily. You use the 'it' keyword to create a test.
Either:
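
it('returns the list of orders', function () {
  // returning the promise lets mocha wait for it to settle
  return chai.request(server)
    .get('/orders')
    .then(res => expect(res).to.have.status(200));
});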

Or using generators
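
it('returns the list of orders', function* () {
  const res = yield chai.request(server).get('/orders');
  expect(res).to.have.status(200);
});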

You can also use the done callback, but I prefer not to use it.
I like to keep code as small as possible, and without any distractions.
However, it's here if you need it:
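
it('returns the list of orders', function (done) {
  chai.request(server)
    .get('/orders')
    .end(function (err, res) {
      expect(res).to.have.status(200);
      done(err);
    });
});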


Each test case is composed of two parts:
1) The test itself
2) The expected result

Test themselves

Since we have added the mock for the external system, we can safely use our test code to hit a function or, if we are testing a REST endpoint, call that endpoint:
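
A sketch, reusing the hypothetical orders endpoint and the requestMock from above:

it('creates an order and charges the customer', function* () {
  const res = yield chai.request(server)
    .post('/orders')
    .send({ productId: 42, quantity: 1 });

  expect(res).to.have.status(200);
  // thanks to the external-system mock, we can also assert the outgoing call it received
  expect(requestMock.post.calledWith('https://payments.example.com/charge')).to.equal(true);
});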

In this example we are testing an endpoint, but calling a function would have been even easier.

Expected Result

The second part involves looking at the results of our test runs, and we will be using Chai to inspect the responses. Chai provides a long list of ways to look at responses using expect, should or assert, whichever you prefer.
I try to use expect, as it doesn't change Object.prototype. Here is a discussion of the differences: expect vs should vs assert.
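
For example, a few typical chai / chai-http assertions:

expect(res).to.have.status(200);
expect(res.body).to.be.an('array');
expect(res.body[0]).to.have.property('id');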

Failing these assertions will trigger the test to fail.
I normally use a test helper class with a few standard ways to test for a correct response and to compare the returned object to the expected object (see the TestUtils section below).

Test for failures

Using promises, I can also quickly test for failures, to ensure our code doesn't just work properly for valid input but also handles invalid input.

I can test to see that code will fail with bad input:
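
For example, using chai-as-promised (createOrder is a hypothetical promise-returning function under test):

it('rejects an order with a negative quantity', function () {
  // returning the assertion lets mocha wait for the rejection check to complete
  return expect(createOrder({ productId: 42, quantity: -1 })).to.be.rejectedWith(/quantity/);
});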

TestUtils class

TestUtils is a utility class that I created with some expected results; it makes it easy to test for missing fields, to iterate over the body for all the fields I expect, or to check for a simple 200 and body.
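
A sketch of such a helper (my own, not the post's original class):

// test-utils.js
const chai = require('chai');
chai.use(require('chai-http'));
const expect = chai.expect;

class TestUtils {
  // assert a successful response with a body
  static expectOkBody(res) {
    expect(res).to.have.status(200);
    expect(res.body).to.exist;
  }

  // assert that every field of `expected` appears with the same value in `actual`
  static expectMatchingFields(actual, expected) {
    Object.keys(expected).forEach(key => {
      expect(actual).to.have.property(key, expected[key]);
    });
  }

  // assert that a request failed because a required field was missing
  static expectMissingFieldError(res, fieldName) {
    expect(res).to.have.status(400);
    expect(res.body.error).to.contain(fieldName);
  }
}

module.exports = TestUtils;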

I then require the TestUtils class in my test file, and I can use it to quickly expect or assert different conditions.

Mocha tests on circle

When using CircleCI, it's great to get the output of the tests into the $CIRCLE_TEST_REPORTS folder, as Circle will then read the output and present you with the results of the tests, rather than you looking through the logs each time to figure out what went right and what went wrong. The CircleCI team has written a whole document about this; see CircleCI Test Artifacts.

Here we will focus on using Mocha and getting the reports parsed. To do so, we need Mocha to output the results in JUnit XML format. This can be achieved easily using mocha-junit-reporter, which lets Mocha run our tests and output the results in the correct format.

So the first step is to install the reporter:
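
npm install --save-dev mocha-junit-reporter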

And to add a test script in package.json that outputs in JUnit format:
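
Something like the following (the junit folder and file name are just examples):

"scripts": {
  "test": "mocha test --recursive --reporter mocha-junit-reporter --reporter-options mochaFile=./junit/test-results.xml"
}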

This outputs the information into the junit folder, both for ESLint (if you are using it) and for Mocha.

Now all that is needed is to create a link between your junit folder and $CIRCLE_TEST_REPORTS, which can be done by editing the circle.yml file and adding the following line to the pre step for test.

If you aren’t using docker, you can also add a symbolic link after the creation of the folder - ln -s $CIRCLE_TEST_REPORTS/junit ~/yourProjectRoot/junit

However, if you are using docker-compose or docker run to execute your tests inside a container, you will also need to add a volume that maps your test output to $CIRCLE_TEST_REPORTS.
For docker-compose:

For docker run you can do the same using the -v flag.
Once that is done, you'll get the report output in Circle after the build finishes.

Good luck!


REST Endpoints Design Pattern

In this post I'll present a suggested design pattern, and an implementation of it using a Node + Express REST API with ES6 classes. Personally, I hate writing the same code again and again. It violates the DRY principle, and I hate to waste my time and my customers' time. Having a C++ background, I love a nice class design.

In today's world of microservices and the web, REST endpoints have become somewhat of a de-facto way to connect services and web applications. There are loads of examples of how to create REST endpoints and servers using Node.js and Express 4.0. SOAP, which was popular a while back, has given way to JSON. New technologies like GraphQL have not made it to the mainstream yet, so for now we are stuck with REST and JSON.

I haven't found a tutorial that discusses how to do this using ES6 classes and a good class design, so that is what we will cover today.

Rather than building REST endpoints over and over, my concept is to have a base router implement base behavior for the REST endpoint, then have derived classes override such behavior if needed.

We create an abstract base class with all the default route handlers as static methods. Those take a request, process it (most likely read / write / delete / update the DB) and return the results. Then setupRoutes is the glue that binds the static methods to the actual routes. In addition, our constructor takes a route name, which will be the route path to be handled.

Derived classes can then either disable certain routes or override routes as needed, while keeping the base behaviour where that is what is wanted (for example when wrapping a service, or doing simple DB operations).

(Diagram: REST endpoint class design.)

Now let's implement this in JavaScript using Node.js, Express and ES6 classes. I'm going to implement this example using MongoDB and Mongoose, but you can use any other DB or service you wish. The Mongoose code in this sample is pretty minimal; it's just for the sake of the example.

Create a new project folder, and call npm init inside it.
Then install express and required libs: npm install express body-parser cors bluebird mongoose change-case require-dir --save

Then I'll create the server.js main file (we won't discuss this in detail, as it's mostly a standard Node/Express server; the one line that's important to note is require('./routes/index')(server, db); as this creates all the routes for our application).

I'm including a single route file, which builds up all our routes. So let's look at that index.js file to see what's going on in there:
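
Here is my own sketch of the glue code described below (not the post's exact file):

// routes/index.js
const requireDir = require('require-dir');

module.exports = (server, db) => {
  // load every route handler class in ./route-handlers; the filename becomes the sub-route
  const handlers = requireDir('./route-handlers');

  Object.keys(handlers).forEach(name => {
    const HandlerClass = handlers[name];
    const handler = new HandlerClass(db);            // pass the DB so the handler can do its thing
    server.use('/' + name, handler.setupRoutes());   // e.g. companies.js -> /companies
  });
};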

I like to use automatic glue code, rather than re-typing or building a static array. This way the system detects new routes and adds them automatically, just by adding a file to a folder.

  1. I'm using require-dir, which will include all route handlers. I wanted each route to handle its own paths, and not the global paths (I like encapsulation), so as a design decision I made the filename the sub-route.
  2. I then create an instance of each route handler class, passing it a reference to the DB (so it can do its thing).
  3. setupRoutes() returns a router, which I then connect to our server. I'm relying on server.use of the Express router to bind routes to the base URL. If you adopt this implementation you can always use your own structure.

Next let's look at the base-router-handler, which is the base for all route handlers. It contains most of the code for any endpoint:

I wanted to use generators, as I like their async/await-like structure. So I wrote a co-wrapper file that handles errors and wraps the generator route handlers correctly, including wrapping them in a promise. I don't want to go into depth explaining it, as it's not the point of this post, but you can see the file in the git repo.

Next we create the base constructor, which takes the route name and a reference to the DB. It creates the binding to a collection / table / service / anything else you want. It also calls the middleware setup; if you wish to bind your own route-based middleware, you can override this function in derived classes.

Next I go through and create static route handlers for each route. As you can see the route handlers are pretty simple: take JSON in, perform some DB operation and return the result. In other examples you might have more complex behaviour. The nice thing is that the base creates default behaviour, but by overriding the static methods in derived classes we can do whatever we wish.
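
To make the idea concrete, here is a simplified sketch of such a base class. It uses async/await instead of the post's co-wrapped generators and placeholder DB calls (model.find / model.create / model.deleteOne), so treat it as an illustration of the pattern rather than the original code:

// routes/base-router-handler.js
const express = require('express');

class BaseRouterHandler {
  constructor(routeName, db) {
    this.routeName = routeName;
    // a handle to whatever collection / table / service this route wraps (adapt to your DB layer)
    this.model = db[routeName];
    this.setupMiddleware();
  }

  setupMiddleware() {
    // override in a derived class to add route-specific middleware
  }

  setupRoutes() {
    // the glue: bind the static handlers to the actual routes
    const router = express.Router();
    const handle = fn => (req, res, next) => fn.call(this.constructor, this.model, req, res).catch(next);
    router.get('/', handle(this.constructor.getAll));
    router.post('/', handle(this.constructor.create));
    router.delete('/:id', handle(this.constructor.remove));
    return router;
  }

  static async notImplemented(model, req, res) {
    res.status(501).json({ error: 'not implemented' });
  }

  static async getAll(model, req, res) {
    res.json(await model.find({}));           // placeholder DB read (e.g. a Mongoose model)
  }

  static async create(model, req, res) {
    res.json(await model.create(req.body));   // placeholder DB write
  }

  static async remove(model, req, res) {
    res.json(await model.deleteOne({ _id: req.params.id }));
  }
}

module.exports = BaseRouterHandler;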

Once the base class is ready we can create a real route that will do something!
Let's create a 'route-handlers' folder inside the 'routes' folder and add a file called companies.js.
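
Again a sketch, building on the base class above rather than reproducing the post's code:

// routes/route-handlers/companies.js
const BaseRouterHandler = require('../base-router-handler');

class Companies extends BaseRouterHandler {
  constructor(db) {
    super('companies', db);   // 'companies' becomes the route name / DB binding
  }

  // disable deletes for this resource by routing them to the base notImplemented handler
  static remove(model, req, res) {
    return BaseRouterHandler.notImplemented(model, req, res);
  }

  // override the default list behaviour, e.g. to filter out inactive companies
  static async getAll(model, req, res) {
    res.json(await model.find({ active: true }));
  }
}

module.exports = Companies;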

First, look at how easy it was to create a new route. We didn't even need to write this much code: we could just create the constructor and be done with it, if we wanted the same behaviour as the base class.

I did want to show, though, how easy it is to override the code without much work. The base class provided us with a default notImplemented handler, which makes it easy to disable routes.

Even adding a route is easy: just add a handler implementation of your own. This makes it easy to write just your functionality and not have to re-write the same code over and over.

That’s all for now!

Hope you enjoyed this, or found it useful.


Watch out for globals in node.js modules!

Globals, or global variables, are known to be risky.
However, using the 'var' keyword should ensure file-level definition.
So shouldn't it be safe to use module-level variables?

The answer is no, and they should be avoided at all costs.

Why are module-level variables bad?

Node require will wrap your module with a function as follows:


~ $ node

require('module').wrapper
[ '(function (exports, require, module, __filename, __dirname) { ',
'\n});' ]

Node assigns these arguments when it invokes the wrapper function.
This is what makes them look as if they were globals in the scope of your Node module.
So it seems we have globals in our module; however:
– exports is defined as a reference to module.exports prior to the call.
– require and module are defined by the function being executed.
– __filename and __dirname are the filename and folder of your current module.

Caching – a double-edged sword

Node will then cache this module, so the next time you require the file you won't actually get a fresh copy; you'll be getting the same object as before.
This means you'll be using the same module-level variables in multiple places, which means danger!

Here is a code example that illustrates the problem:
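
A minimal sketch of the kind of bug described here (the linked repository has its own example); `results` lives at module scope, so every caller of the module shares it:

// counter-module.js
var results = [];

function collect(label, cb) {
  results = [];                         // resets state that another in-flight call may still be using
  setTimeout(function () {
    results.push(label + '-1');
    setTimeout(function () {
      results.push(label + '-2');
      cb(results);                      // may now contain values from the other call
    }, 50);
  }, 50);
}

module.exports = { collect };

// caller.js
const { collect } = require('./counter-module');
collect('first', res => console.log('first:', res));
setTimeout(() => collect('second', res => console.log('second:', res)), 25);   // overlapping call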

I'm running two calls to the same function here, with a small delay between each call. After a few runs we notice that the calls overwrite each other's variables, which is exactly the module-global issue.

How to solve the globals problem?

There are multiple potential solutions to this global issue; I'll present two of them.

Solution 1 – Functional

If we define a local scope inside our module, we can create a new set of variables for each run.
We will use the 'let' keyword, along with a scoped function (not strictly needed, but nicer, with better scope control).
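
Continuing the sketch from above, the fix is simply to keep the state inside the function:

// counter-module.js – functional fix: each invocation creates its own state
function collect(label, cb) {
  let results = [];                     // scoped to this call, not shared across callers
  setTimeout(function () {
    results.push(label + '-1');
    setTimeout(function () {
      results.push(label + '-2');
      cb(results);
    }, 50);
  }, 50);
}

module.exports = { collect };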

Solution 2 – use Classes

We can simply define a class and create a new instance for each run.
This way each variable is a private member of that instance, ensuring proper encapsulation.
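
The same sketch as a class:

// collector.js – class-based fix: the state is an instance member
class Collector {
  constructor(label) {
    this.label = label;
    this.results = [];
  }

  collect(cb) {
    setTimeout(() => {
      this.results.push(this.label + '-1');
      cb(this.results);
    }, 50);
  }
}

// every run gets its own instance, so nothing is shared between runs
new Collector('first').collect(res => console.log(res));
new Collector('second').collect(res => console.log(res));

module.exports = Collector;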

For complete code have look at this repository:
https://github.com/CoreTeamIO/globals-in-node-modules


Waiting for things the JavaScript way…

JavaScript is filled with an abundance of libraries, frameworks, and acronyms that would make any conversation between two web developers sound like they are about to fly a spaceship to colonize Mars.
If you don’t believe me, check out this funny post:
How it feels to learn JavaScript in 2016
As such, writing async JS is no different, and no less confusing.

In this post I'll try to bring clarity to asynchronous code in JavaScript. I'll focus on back-end Node.js code, but a lot of it also applies to the front-end.
Let's first cover the async JS mechanisms we have in Node:

  • Callbacks
  • Promises
  • Generators
  • Async / Await

I have not included things like observers, async.js and events, as they are not exactly core JS mechanisms. For example, events rely on an underlying async mechanism (such as callbacks), many observer mechanisms are mainly used in front-end patterns today, and async.js is an external library which I stopped using. However, if you want to learn more, I suggest you look these up.

Callbacks

Callback functions are the most basic type of async code, and are common not only to JavaScript but to many other languages.
Callbacks are simple to understand: they are functions passed as arguments, which are called when the called function has finished.
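
For example, a classic Node-style callback:

const fs = require('fs');

// fs.readFile calls us back when the read has finished
fs.readFile('./config.json', 'utf8', function (err, data) {
  if (err) {
    return console.error('failed to read file', err);
  }
  console.log('file contents:', data);
});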

Very simple and straightforward. The main problem with callbacks is that when they are all chained together, as many async operations are, you end up with loads of nested callbacks, which are a nightmare to read, manage or follow. This is called callback hell.

Promises

Promises are a different way to handle asynchronous code. They allow for easier management of async code, yield an easier code flow, use exceptions for errors, and have uniform signatures and easy composition, meaning we can chain promises together!

Promises are a bit like real life promises. Imagine your boss promising you a promotion next quarter. You don’t know if you’ll get it or not, and you’ll know that only in the future. Promises have three states: resolved, rejected and pending.

A promise constructor takes an executor function with two parameters, resolve and reject, which are called to settle the promise, and it returns a chainable promise object.
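
For example, wrapping the callback API from above in a promise:

const fs = require('fs');

function readFileAsync(path) {
  return new Promise(function (resolve, reject) {
    fs.readFile(path, 'utf8', function (err, data) {
      if (err) {
        reject(err);      // settle the promise with a failure...
      } else {
        resolve(data);    // ...or with the successful result
      }
    });
  });
}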

This might look more complex, and for very simple situations you might be right. But let's look at the chainable .then and .catch (for the success and failure of a promise):
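
readFileAsync('./config.json')
  .then(data => JSON.parse(data))
  .then(config => fetchUser(config.userId))       // fetchUser is a hypothetical promise-returning call
  .then(user => console.log('loaded user', user.name))
  .catch(err => console.error('something in the chain failed', err));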

As you can see this allows for chaining of promises, which creates sequential code. Sweet!

Prior to ES6, promises were supported via external libraries such as Bluebird, Q, RSVP and many others. They are now also part of the language itself, as promises are that important.

Promises deserve a post of their own so here is some more reading if you want to dive in and understand them better:

Generators

Generators are not designed to be an asynchronous mechanism per se. Their intent was to bring iterator-like functionality to the language; however, they are often used to create cleaner-looking, synchronous-like code. This builds on the fact that generators can be paused and resumed. Once again, generators deserve a post of their own, so I will add additional reading links at the bottom of this section.

Generators landed in ES6, and can be created by adding a ‘*’ after the function keyword (or before, in class members):
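
function* countToThree() {
  yield 1;
  yield 2;
  yield 3;
}

const iterator = countToThree();
console.log(iterator.next().value); // 1
console.log(iterator.next().value); // 2
console.log(iterator.next().value); // 3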

The nice thing about generators is that inside a generator function you can pass control to another generator with yield*, or yield a promise or a value with yield:
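
A sketch using Bluebird's coroutine, which drives the generator and resumes it whenever a yielded promise resolves (readFileAsync is from the promises section above, fetchUser is hypothetical):

const Promise = require('bluebird');

const loadUser = Promise.coroutine(function* (userId) {
  const config = JSON.parse(yield readFileAsync('./config.json'));   // yield a promise, get its value back
  const user = yield fetchUser(userId);
  return { config, user };
});

loadUser(42).then(result => console.log(result));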

As you can see, the code becomes simpler. You can even wrap a generator into a promise easily with a coroutine (Bluebird has one, for example).
As you can see, promises and generators co-exist nicely!

Here is some further reading, if generators are still not clear:

Async / Await

Sadly, async/await is not part of ES6; it arrived later (ES2017). The use of generators and promises, while nice, is not very clean: it requires a lot of wrapping, and the intent of generators was to provide iterators, not an async mechanism. This is where async / await shines, as it is a cleaner way to handle promises and asynchronous code in a sequential manner:

All you have to do is define an async function (with the async keyword), then await your promises, much like the generator yield, but with less mess:
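
The same sketch as above, rewritten with async/await (fetchUser is still hypothetical):

async function loadUser(userId) {
  const config = JSON.parse(await readFileAsync('./config.json'));
  const user = await fetchUser(userId);
  return { config, user };
}

loadUser(42)
  .then(result => console.log(result))
  .catch(err => console.error(err));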

As you can see the code is clean and didn't require any wrapping or generators. Adding just two more keywords allows us to use promises everywhere (and promises tend to be faster than generators).

Further reading:

Fun Learning about JavaScript and ES6!

Last week the guys at Dev.BG invited me to talk about JavaScript and ES6 Features. It was a lot of fun and I’d like to share some of the pictures from the event and the presentation!

(Photos from the event.)

Presentation in PowerPoint Format

Presentation in PDF Format

The love story that brought me to Sofia

Most of the time I write tech posts or professional posts; this time I decided to write something more personal.
I've been asked why I moved to Sofia countless times, so I decided to write a blog post about it.

The short answer: FOR LOVE!

The long answer:

Israel

In 2007 I moved back to Israel after many years in Australia. I didn't know it yet, but I was about to spend the next 7 years there working with various startups. Some of my endeavors were more successful, some less; it was an interesting journey.

In 2012 I joined a startup as a co-founder, responsible for marketing, sales and biz-dev. The company had 25 products and I recommended we focus on one. As I started to market it, I saw the product was lacking and knew that with the right app we would get acquired. So instead of marketing I ended up being a developer and weekend marketer. I spent 90% of my time writing browser extensions in C++, building mobile apps, and websites. And not long after, as I predicted, we got some interest, and I was able to negotiate a very impressive deal to get acquired.

I was happy and proud; I felt I had made it. However, in a short time everything unraveled: the company had structural issues (I had discussed this daily with my co-founder long before the acquisition offer arrived, but I was a minor shareholder and had joined him, so it was impossible to put my foot down), and at the same time my engagement to my girlfriend quickly deteriorated. I was even able to raise some funding to try and save our company, but it turned out the investors weren't honest with us. It was a nightmare, and I ended up hitting rock bottom without a company, and with a broken heart…

SF, Europe and other places

I was in a constant self-debate about what to do next and where to go. I was thinking of moving to SF but wasn't sure I was ready for that. I started working with a cool company in SF, building various systems for them, consulting and travelling between Israel, the USA and various other places, as I was working remotely.

Belgrade

On one of my travels I had to go to Stockholm to work with a designer. I happened to miss the direct flight out of Israel, and the only other flight connected through Belgrade. I had a 9-hour overnight layover and remembered a crazy Serbian guy who used to live in Tel Aviv. We went out that night, and by the end I was captivated. Amazing nightlife and incredible Balkan women, so stunning and friendly: exactly the cure a broken heart needs!

On my way back from Stockholm I booked two weeks in Belgrade to investigate the matter further. I found Balkan people (Serbian only at that stage) very nice, helpful, friendly, and intelligent. I decided I had to check out the local tech scene, as I'm a geek. I booked meetings with various companies, startup accelerators and whatnot.

Coming from SF and Tel Aviv, I had certain expectations, a mental picture of what tech companies look like. While visiting the various companies I noticed some did seem to have a fun work environment and offices, but most had developers stuffed into rooms like cattle. Many of the offices were gray and sad; I literally felt like this was the place where souls go to die. 95% of the companies were doing outsourcing and the tech community was bootstrapping itself. I was excited to have found an interesting opportunity!

I decided to take a brave step and move there. I flew back to Israel packed my things, and in late 2014 I gave back my flat, sold my car, packed all my possessions and moved to Belgrade. I decided to officially move to Serbia, but still visit Israel, the US and several other places until I decide for good where I’d like to spend the next few years of my life.

In November 2014 I left Israel and landed in Belgrade. I was house hunting, connecting with people and getting to know the scene. During my first week, a friend of mine organized a fun night out. She also invited the girl who was about to become her roommate, as a setup for me.
That night my friend's future roommate and I really hit it off, and we have been together ever since. Dragana turned out to be a great listener, always there to support me as I come and go from Serbia and get obsessed with building products and technology discussions.

In 2015 I was building teams in Serbia, hiring people and flying all over, trying to decide whether I could stay and live in Serbia, move back to Israel, or move to the USA. I was living in Belgrade, but I travelled to something like 10-12 countries. It was super difficult, and it started to take a toll on our relationship.

I was considering staying in Serbia longer, as we already had an apartment there and I did work with people in Belgrade, but the more business I did there, the more I realized how hard it is. While I found great people, the legislative restrictions and a government that makes it very difficult to do business meant I just couldn't live there for the next 5 years, as much as I wanted to.

So what do you do? You've found an amazing girl and you want to build a life with her. You also really like the Balkans, but you cannot do business in the place you live. You look for alternatives.

Sofia

And that's when Sofia came into the picture. I've been to Sofia many times, had friends there and knew the tech community was super active. Dragana was looking into exchange programs, and Sofia was close enough to Belgrade that she could keep coming back and forth. Sofia is also going through a transformation: the city is re-developing, and I'm seeing many tech people starting to build products and not just services. It's very exciting to perhaps be part of such a community. I also knew Bulgaria is in the EU, which makes it a lot easier to do business, and on top of all that it has the best ski resorts in Eastern Europe (it's now almost June and I'm at a ski resort; I'm an addict). So in late 2015 we decided we were going for it and would move permanently to Sofia.

It took a lot of research on my end, and lots of paperwork, frustration and nerves, but in early April 2016 our home was finally only Sofia, and no other place. So far I'm really impressed with Sofia: it has around 1% unemployment, loads of very talented people, and you can feel the tech community growing. I'm very optimistic about the future in Sofia!

I would like to sign off this post with a big thank you to my girl. She is always there for me, listening to me talk about all sorts of ideas, work, my constant travels, and my self-obsessed workaholic nature (I tend to work 12-15 hours a day). At times I forget about her, yet she is there to take care of me and give me lots of love and support. Thank you, my love!