Testing JavaScript code with Chai, Mocha, Sinon on CircleCI

JS Testing Survival (mocha, chai, sinon)

This post is a simple guide to JS testing with Mocha, Chai and Sinon on CircleCI. It will show you how to set up for testing, share some tips for good coverage, and more.
I'll cover some of the practices I use for testing JS code. These aren't official best practices, but I've found they make it easier to write readable tests with full coverage and a very flexible setup.

This post will dissect a unit test file to illustrate the different points I've found helpful when composing unit test files:

Setup

Mocha is a JS testing framework that allows you to use any assertion library you'd like; it is most commonly paired with Chai, an assertion library that works with mocha. The chai docs explain how mocha and chai work, how to use them, and more.
One of chai's strong points is that you can easily extend it using support libraries and plugins. We will use a few of them, so let's first set up our dependencies in our project:
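Something like the following should pull everything in (exact packages and versions may differ from the original setup):

npm install --save-dev mocha chai chai-http chai-as-promised co-mocha sinon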

We are installing a few libraries:

  • mocha – a JS testing framework.
  • chai – the chai library; it has a good reference for how to use chai to assert or expect values, and a plugin directory – a valuable resource!
  • chai-http – a chai extension that allows us to hit HTTP endpoints during a test.
  • chai-as-promised – adds support for tests / setup that return a promise. This enables us to assert / expect what the result of the promise will be. We will see this in action shortly.
  • co-mocha – a mocha extension that allows us to use generator functions inside mocha setup / tests. If you skip this step and try to use a generator function, the test will finish without running the yielded test code correctly. This means you will get twilight-zone-like results: tests passing when they should fail!
  • sinon – test mocks, spies and stubs for any JS framework. Works really well and is very extensive.

After we install all the packages, let’s create a new file, and add all the required libraries to it as follows:
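A minimal version of that file might look like this (the server path is an assumption about your project layout):

const chai = require('chai');
const chaiHttp = require('chai-http');
const chaiAsPromised = require('chai-as-promised');
const sinon = require('sinon');
require('co-mocha'); // patches mocha to accept generator functions

chai.use(chaiHttp);
chai.use(chaiAsPromised);

const expect = chai.expect;
const server = require('../server'); // your exported express / http server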

In this example I'm testing an express server, but you can use any type of node HTTP server (assuming you are testing a server). Just make sure you export the server from your main or server file, so you can require it from your test files.

Grouping tests using ‘describe’

Mocha does a great job at grouping tests. To group tests together under a subject, use the following statement:
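For example (the subject name is just for illustration):

describe('Order API', function () {
  // tests for the order API go here
});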

'describe's are also easily nestable, which is great. So the following will also work:
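describe('Order API', function () {
  describe('POST /orders', function () {
    // tests for creating an order
  });
  describe('GET /orders/:id', function () {
    // tests for fetching a single order
  });
});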

This groups them together, and if you're using something like IntelliJ or WebStorm, the output is displayed very nicely in a collapsible window:
(Screenshot: a unit test run in the IDE's collapsible output window.)

Test hooks

When running tests we often need to do setup before each test, or before each test suite. The way to do that is to use the testing hooks: before, after, beforeEach and afterEach:
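Here's the general shape (the subject and setup contents are illustrative):

describe('Order API', function () {
  before(function () {
    // runs once, before all tests in this block
  });
  beforeEach(function () {
    // runs before every single test
  });
  afterEach(function () {
    // runs after every single test
  });
  after(function () {
    // runs once, after all tests in this block
  });
});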

These hooks can also return a promise; the test framework will not continue until the promise is resolved, and will fail if it is rejected:
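For example (db.connect and db.seedTestData are hypothetical setup calls):

before(function () {
  // mocha waits for the returned promise; a rejection fails the suite
  return db.connect().then(function () {
    return db.seedTestData();
  });
});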

And since we have required co-mocha, our hooks can also run a generator function:
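A sketch (db.loadFixtures is a hypothetical async setup call):

let stuffINeedInTest;

before(function* () {
  // co-mocha lets us yield promises directly inside hooks
  stuffINeedInTest = yield db.loadFixtures();
});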

I can then use stuffINeedInTest in my test cases. You can also do this setup using promises, as shown above.

Hook on root level

Test hooks are awesome, but sometimes we might want some hooks to run not just once per test file, but once for all our tests. Mocha does expose root-level hooks, so to achieve that we will create a new hooks file: root-level-hooks.js
and put our hooks in there with no describe block around it:
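For example (the db and auth helpers are placeholders for whatever your suite needs):

// root-level-hooks.js -- note: no describe() around these hooks
before(function* () {
  yield db.connect();          // load up a test db
  yield auth.loginTestUser();  // authenticate to the system
});

after(function* () {
  yield db.disconnect();
});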

Then at the top of each test file we will require this file in:
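Like so:

require('./root-level-hooks');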

This way our hooks run once for the whole test run. This is the perfect place to load up a test db, run some root-level setup, authenticate to the system, etc.

External System Mocking

Some systems / modules call other systems internally. For example, think of a function that processes a payment for an order. That function might need to call a payment gateway, or, after the order is processed, send the shipping information to another system (for example a logistics system, or upload a file to S3). Unit tests are intended to be standalone and not depend on external systems. Therefore we need a way to mock those external systems, so that when the tested code reaches out to them, the test case can respond on their behalf.

In our test we will use sinon.
Basically we will mock the calls using a test class of mocked calls that reads a response file and sends it back.
This makes the mock straightforward:
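Here's a sketch of such a mock. I'm using sinon spies plus the mock-require library to intercept require('axios'); the original may have used a different interception tool, and the URLs and response files are placeholders:

const sinon = require('sinon');
const mock = require('mock-require');

const axiosMock = {
  get: sinon.spy(function (url) {
    switch (url) {
      case 'https://payments.example.com/status':
        // respond on the external system's behalf, e.g. from a response file
        return Promise.resolve({ data: require('./responses/payment-status.json') });
      default:
        return Promise.reject(new Error('unexpected GET: ' + url));
    }
  }),
  post: sinon.spy(function (url, body) {
    switch (url) {
      case 'https://payments.example.com/charge':
        return Promise.resolve({ data: { ok: true } });
      default:
        return Promise.reject(new Error('unexpected POST: ' + url));
    }
  }),
};

before(function () {
  // from now on, require('axios') anywhere returns our mock
  mock('axios', axiosMock);
});

after(function () {
  // put the real axios back
  mock.stop('axios');
});

One caveat with this approach: the mock has to be registered before the module under test requires axios.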

What we are doing here is creating a mock object. In this case we are mocking axios, as my server code uses it, but we can use the same construct to mock any external system.
Our request mock provides get and post methods, just like the axios library does. I'm using sinon.spy to check what URL is requested by the module code, and a switch statement to handle the different URLs the module requests. Our mock can return URLs, JSON, promises, files, or whatever is needed to successfully mock the external system.

I'm then using the before hook to register the mock as the axios mock, so when the module calls require('axios') it receives my mock, and not the node_module that actually makes the HTTP request.

Then I'm using the after hook to disable the mock and return to normal.

Test Cases

Mocha lets us create tests very easily, using the 'it' keyword.
Either by returning a promise:
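(createOrder stands in for whatever function you're testing):

it('creates an order', function () {
  return createOrder({ item: 'book' }).then(function (order) {
    expect(order.id).to.exist;
  });
});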

Or using a generator:
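(same hypothetical createOrder as above):

it('creates an order', function* () {
  const order = yield createOrder({ item: 'book' });
  expect(order.id).to.exist;
});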

You can also use the done callback, but I prefer not to.
I like to keep code as small as possible, and without any distractions.
However, it's here if you need it:
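(again with the hypothetical createOrder):

it('creates an order', function (done) {
  createOrder({ item: 'book' }).then(function (order) {
    expect(order.id).to.exist;
    done();
  }).catch(done);
});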


Each test case is composed of two parts:
1) The test itself
2) Expected result

The tests themselves

Since we have added the mock for the external system, we can safely use our test code to hit a function, or, if we are testing a REST endpoint, call that endpoint:
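For example, with chai-http (the route is illustrative):

it('returns the list of orders', function* () {
  const res = yield chai.request(server).get('/api/orders');
  expect(res).to.have.status(200);
});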

In this example we are testing an endpoint, but calling a function would have been even easier.

Expected Result

The second part involves looking at the results of our test runs, and we will be using chai to inspect the responses. chai provides a long list of ways to examine responses using expect, should or assert, whichever you prefer.
I tend to use expect, as it doesn't change Object.prototype. Here is a discussion of the differences: expect vs should vs assert.
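A few typical expectations look like this (all names illustrative):

expect(res).to.have.status(200);
expect(res.body).to.be.an('object');
expect(res.body.name).to.equal('expected name');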

Failing these expectations will trigger the test to fail.
I normally use a test helper class with a few standard ways to test for a correct response and to compare the returned object to the expected object:
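Something along these lines (the helper names are hypothetical; see the TestUtils sketch further down):

TestUtils.expectSuccess(res);
TestUtils.expectBodyToMatch(res, expectedOrder);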

Test for failures

Using promises, I can also quickly test for failures, to ensure our code doesn't only work for valid input, but also handles invalid input correctly.

I can test that code will fail with bad input:
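For example, with chai-as-promised (createOrder and the error message are placeholders):

it('rejects an order with no items', function () {
  return expect(createOrder({})).to.be.rejectedWith(/items required/);
});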

TestUtils class

TestUtils is a utility class I created with some expected results; it makes it easy to test for missing fields, to iterate the body over all the fields I expect, or to check for a simple 200 and body.
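A sketch of what such a class can look like (the method names are my own):

// test-utils.js
const expect = require('chai').expect;

class TestUtils {
  // assert a simple 200 with a body
  static expectSuccess(res) {
    expect(res).to.have.status(200);
    expect(res.body).to.exist;
  }

  // compare the returned object to the expected object, field by field
  static expectBodyToMatch(res, expected) {
    Object.keys(expected).forEach(function (field) {
      expect(res.body[field]).to.deep.equal(expected[field]);
    });
  }
}

module.exports = TestUtils;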

I then require the TestUtils class in my test file, and I can use it to quickly expect or assert different conditions.

Mocha tests on circle

When using CircleCI, it's great to get the output of the tests into the $CIRCLE_TEST_REPORTS folder; CircleCI will then read the output and present you with the results of the tests, rather than you looking through the logs each time to figure out what went right and what went wrong. The CircleCI team has written a whole document about this: CircleCI Test Artifacts.

In our discussion we will focus on using mocha and getting the reports parsed. To do so, we need mocha to output the results in JUnit XML format. This can be achieved easily using mocha-junit-reporter. This lib allows mocha to run our tests and output the results in the correct format.

So the first step is to run:
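npm install --save-dev mocha-junit-reporter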

And to add scripts to package.json that output in JUnit format:
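For example (script names and file paths are illustrative; mocha-junit-reporter takes the output path via reporter-options):

"scripts": {
  "lint": "eslint . --format junit --output-file ./junit/eslint.xml",
  "test": "mocha test/ --reporter mocha-junit-reporter --reporter-options mochaFile=./junit/test-results.xml"
}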

This outputs the information into the junit folder, for both eslint (if you are using it) and mocha.

Now all that is needed is to create a link between your junit folder and $CIRCLE_TEST_REPORTS, which can be done by editing the circle.yml file and adding the following line in the pre step for test:
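Something along these lines (a CircleCI 1.0 circle.yml; the exact step the original used may differ):

test:
  pre:
    - mkdir -p $CIRCLE_TEST_REPORTS/junit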

If you aren't using docker, you can also add a symbolic link after the creation of the folder: ln -s $CIRCLE_TEST_REPORTS/junit ~/yourProjectRoot/junit

However, if you are using docker-compose or docker run to execute your tests inside a container, you will also need to add a volume that maps your test output to $CIRCLE_TEST_REPORTS.
For docker compose:
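For instance (the container path /usr/src/app/junit is an assumption about where your tests write their output):

volumes:
  - $CIRCLE_TEST_REPORTS/junit:/usr/src/app/junit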

For docker run you can do the same using the -v flag.
Once that is done, you'll get the report output in CircleCI after the build finishes.

Good luck!

REST API

REST Endpoints Design Pattern

In this post I'll present a suggested design pattern, and an implementation of it, using a Node + Express REST API with ES classes. Personally, I hate writing the same code again and again. It violates the DRY principle and wastes my time and my customers' time. Being a C++ developer by background, I love a nice class design.

In today's world of microservices and the web, REST endpoints have become the de-facto way to connect services and web applications. There are loads of examples of how to create REST endpoints and servers using Node.js and Express 4.0. SOAP, which was popular a while back, has given way to JSON. New technologies like GraphQL have not made it to the mainstream yet, so for now we are stuck with REST and JSON.

I haven’t found a tutorial that discusses how to do this using ES6 classes and a good class design. This is what we will cover today.

Rather than building REST endpoints over and over, my concept is to have a base router implement base behavior for the REST endpoint, then have derived classes override such behavior if needed.

We create an abstract base class with all the default route handlers as static methods. Those take a request, process it (most likely read / write / delete / update the DB) and return the results. Then setupRoutes is the glue that binds the static methods to the actual routes. In addition, our constructor takes a route name, which is the route path that will be processed.

Derived classes can then either disable certain routes or override them as needed, while keeping the base behaviour where that is what's needed (for example when wrapping a service, or doing simple DB operations).

(Diagram: the REST endpoint class design.)

Now let’s implement this in JavaScript using Node.js, Express and ES Classes. I’m going to implement this example using MongoDB and Mongoose, but you can use any other DB or service you wish. The Mongoose in this code sample is pretty meaningless, it’s just for the sake of the example.

Create a new project folder, and call npm init inside it.
Then install express and required libs: npm install express body-parser cors bluebird mongoose change-case require-dir --save

Then I'll create the server.js main file. We won't discuss this in detail, as it's mostly a standard node/express server; the one line that's important to note is require('./routes/index')(server,db), as this creates all the routes for our application:
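A minimal sketch of it (the connection string and port are placeholders):

// server.js
const express = require('express');
const bodyParser = require('body-parser');
const cors = require('cors');
const mongoose = require('mongoose');
mongoose.Promise = require('bluebird');

mongoose.connect('mongodb://localhost/rest-example'); // placeholder URI
const db = mongoose.connection;

const server = express();
server.use(cors());
server.use(bodyParser.json());

// the important line: build all the routes
require('./routes/index')(server, db);

server.listen(3000, () => console.log('listening on port 3000'));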

I’m including a single route file, which will build up all our routes. So let’s look into that index.js file, to see what’s going on in there:
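A sketch of what that glue can look like (the repo version may differ in details):

// routes/index.js
const requireDir = require('require-dir');

// 1. include every route handler in the folder
const handlers = requireDir('./route-handlers');

module.exports = function (server, db) {
  Object.keys(handlers).forEach(function (filename) {
    const RouteHandler = handlers[filename];
    const handler = new RouteHandler(db); // 2. the instance gets the db
    // 3. the filename becomes the subroute, e.g. companies.js -> /companies
    server.use('/' + filename, handler.setupRoutes());
  });
};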

I like to use automatic glue code, rather than re-typing or building a static array. This way the system detects new routes and adds them automatically, just by adding a file to a folder.

  1. I'm using require-dir, which will include all route handlers. I wanted each route to handle its own paths, and not the global paths (I like encapsulation), so as a design decision I made the filename the subroute.
  2. I then create an instance of the route handler class, passing it a reference to the db (so it can do its thing).
  3. setupRoutes() returns a router, which I then connect to our server. I'm building on server.use of the express router to bind routes to the base URL. If you adopt this implementation you can always use your own structure.

Next let's look at the base-router-handler, which is the base for all route handlers. It will contain most of the code for any endpoint:
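Here's a condensed sketch of that base class (the repo version also wraps generator-based handlers with the co-wrapper described below, which I omit here; the mongoose binding is simplified too):

// routes/base-router-handler.js
const express = require('express');
const mongoose = require('mongoose');

class BaseRouterHandler {
  constructor(routeName, db) {
    this.routeName = routeName;
    // bind the route to a collection / table / service -- here a mongoose
    // model with a permissive schema, purely for the sake of the example
    this.model = mongoose.model(routeName, new mongoose.Schema({}, { strict: false }));
    this.setupMiddleware();
  }

  setupMiddleware() {
    // override in derived classes to bind route-based middleware
  }

  setupRoutes() {
    const router = express.Router();
    const Handler = this.constructor; // picks up overridden statics
    router.use((req, res, next) => { req.model = this.model; next(); });
    router.get('/', Handler.getAll);
    router.get('/:id', Handler.getOne);
    router.post('/', Handler.create);
    router.put('/:id', Handler.update);
    router.delete('/:id', Handler.remove);
    return router;
  }

  // default handlers: take json in, perform a DB operation, return the result
  static getAll(req, res) {
    req.model.find({}).then((docs) => res.json(docs))
      .catch((err) => res.status(500).json({ error: err.message }));
  }
  static getOne(req, res) {
    req.model.findById(req.params.id).then((doc) => res.json(doc))
      .catch((err) => res.status(500).json({ error: err.message }));
  }
  static create(req, res) {
    req.model.create(req.body).then((doc) => res.status(201).json(doc))
      .catch((err) => res.status(500).json({ error: err.message }));
  }
  static update(req, res) {
    req.model.findByIdAndUpdate(req.params.id, req.body, { new: true })
      .then((doc) => res.json(doc))
      .catch((err) => res.status(500).json({ error: err.message }));
  }
  static remove(req, res) {
    req.model.findByIdAndRemove(req.params.id).then(() => res.status(204).end())
      .catch((err) => res.status(500).json({ error: err.message }));
  }
  static notImplemented(req, res) {
    res.status(501).json({ error: 'not implemented' });
  }
}

module.exports = BaseRouterHandler;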

I wanted to use generators, as I like their async/await-like structure. So I wrote a co-wrapper file that handles errors and the generator routes correctly, including wrapping with a promise. I won't go into depth explaining it, as it's not the point of this post, but you can see the file in the git repo.

Next we create the base constructor, which takes the route name and a reference to the db. It creates the binding to a collection / table / service / anything else you want. It also calls the middleware setup; if you wish to bind route-based middleware, you can override this function in derived classes.

Next I create static route handlers for each route. As you can see, the route handlers are pretty simple: take JSON in, perform some DB operation and return the result. In other examples you might have more complex behaviour. The nice thing is the base creates a default behaviour, but by overriding the static methods in derived classes we can do whatever we wish.

Once the base class is ready, we can create a real route that will do something!
Let's create a 'route-handlers' folder inside the 'routes' folder and add a file called companies.js:
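A sketch of such a derived handler (the disabled route and validation are just examples):

// routes/route-handlers/companies.js
const BaseRouterHandler = require('../base-router-handler');

class CompaniesRouteHandler extends BaseRouterHandler {
  constructor(db) {
    super('companies', db);
  }

  // disable deletes by pointing the route at the base notImplemented handler
  static remove(req, res) {
    return BaseRouterHandler.notImplemented(req, res);
  }

  // override create to add validation before falling back to the base behaviour
  static create(req, res) {
    if (!req.body.name) {
      return res.status(400).json({ error: 'name is required' });
    }
    return BaseRouterHandler.create(req, res);
  }
}

module.exports = CompaniesRouteHandler;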

First, look at how easy it was to create a new route. We didn't even need to write this much code: we could just create the constructor and be done with it, if we wanted the same behaviour as the base class.

I did want to show, though, how easy it is to override the code without much work. The base class provides us with a notImplemented handler, which makes it easy to disable routes.

Adding a route is just as easy: just add a handler implementation of your own. This makes it easy to test just your functionality, without re-writing the same code over and over.

That’s all for now!

Hope you enjoyed this, or found it useful.

concurrency issues

Watch out for globals in node.js modules!

Globals, or global variables, are known to be risky.
However, using the 'var' keyword should ensure file-level scoping.
So shouldn't it be safe to use module-level variables?

The answer is no, and it should be avoided at all costs.

Why are module-level variables bad?

Node's require will wrap your module in a function, as follows:


~ $ node

require('module').wrapper
[ '(function (exports, require, module, __filename, __dirname) { ',
'\n});' ]

Node will assign these arguments when it invokes the wrapper function.
This is what makes them look like globals in the scope of your node module.
So it seems we have globals in our module; however:
– exports is defined as a reference to module.exports before your code runs.
– require and module are defined by the executed wrapper function.
– __filename and __dirname are the filename and folder of your current module.

caching – a double-edged sword

Node will then cache this module, so the next time you require the file you won't actually get a fresh copy; you'll get the same object as before.
This means you'll be using the same module-level variables in multiple places, which means danger!

Here is a code example that illustrates the problem:
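A sketch of the kind of code that exhibits it (a simplified stand-in for the repo example linked at the end):

// worker.js -- module-level state, shared by every caller of this module
var current; // the module-level "global"

function processOrder(orderId, cb) {
  current = orderId; // every call overwrites the SAME variable
  setTimeout(function () {
    cb('processed order ' + current); // another call may have changed it by now
  }, 100);
}

module.exports = processOrder;

// app.js -- two overlapping calls clobber each other's state:
// const processOrder = require('./worker');
// processOrder(1, console.log);                       // logs "processed order 2"!
// setTimeout(() => processOrder(2, console.log), 50); // logs "processed order 2"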

I'm running two calls to the same function here, with a small delay between them. After a few runs we notice that the calls run over each other's variables: an example of the module-global issue.

How do we solve the globals problem?

There are multiple potential solutions to this globals issue; I'll present two of them.

Solution 1 – Functional

If we define a local scope inside our module, we can return a new set of variables for each run.
We will use the 'let' keyword along with a scoped function (not strictly needed, but nicer, and it gives better scope control):
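A sketch of this approach:

// worker.js -- functional solution: a scoped function returns fresh state per run
module.exports = function createProcessor() {
  let current; // scoped to this createProcessor() call only
  return function processOrder(orderId, cb) {
    current = orderId;
    setTimeout(function () {
      cb('processed order ' + current);
    }, 100);
  };
};

// app.js:
// const createProcessor = require('./worker');
// createProcessor()(1, console.log); // each run gets its own `current`
// setTimeout(() => createProcessor()(2, console.log), 50);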

Solution 2 – use Classes

We can simply define a class and create a new instance for each run.
This way each variable is a private member of that instance, ensuring proper encapsulation:
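A sketch:

// worker.js -- class solution: state lives on the instance
class OrderProcessor {
  processOrder(orderId, cb) {
    this.current = orderId; // a member of this instance, not of the module
    setTimeout(() => cb('processed order ' + this.current), 100);
  }
}

module.exports = OrderProcessor;

// app.js:
// const OrderProcessor = require('./worker');
// new OrderProcessor().processOrder(1, console.log); // one instance per run
// setTimeout(() => new OrderProcessor().processOrder(2, console.log), 50);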

For the complete code, have a look at this repository:
https://github.com/CoreTeamIO/globals-in-node-modules

async JS

Waiting for things the JavaScript way…

JavaScript is filled with an abundance of libraries, frameworks, and acronyms that would make any conversation between two web developers sound like they are about to fly a spaceship to colonize Mars.
If you don’t believe me, check out this funny post:
How it feels to learn JavaScript in 2016
Writing async JS is no different, and no less confusing.

In this post I’ll try to bring clarity to asynchronous code in Javascript. I’ll focus on back-end node.js code, but a lot of it also applies to the front-end.
Let’s first cover async JS mechanisms we have in Node:

  • Callbacks
  • Promises
  • Generators
  • Async / Await

I have not included things like observers, async.js and events, as they are not exactly core JS mechanisms. For example, events rely on an underlying async mechanism (such as callbacks). Many of the observer mechanisms are mainly used in front-end patterns today, and async.js is an external library which I've stopped using. If you want to learn more, I suggest you look these up.

Callbacks

Callback functions are the most basic type of async code, and are common not only to JavaScript but to many other languages.
Callbacks are simple to understand: they are functions passed as arguments, which are invoked when the called function finishes.
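The classic Node example (fs.readFile invokes the callback when the read completes):

const fs = require('fs');

fs.readFile('/etc/hosts', 'utf8', function (err, data) {
  if (err) return console.error(err); // error-first callback convention
  console.log(data);
});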

Very simple and straightforward. The main problem with callbacks is that when many async operations are chained together, you end up with loads of nested callbacks, which are a nightmare to read, manage or follow. This is called callback hell.

Promises

Promises are a different way to handle asynchronous code. They allow for easier management of async code, yield easier code flow, use exceptions for errors, have uniform signatures, and compose easily, meaning we can chain promises together!

Promises are a bit like real-life promises. Imagine your boss promising you a promotion next quarter. You don't know whether you'll get it or not; you'll only know in the future. Promises have three states: resolved, rejected and pending.

A promise constructor takes an executor function with two parameters, resolve and reject, which are called when the promise finishes, and it returns a chainable promise object.
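For example, wrapping fs.readFile in a promise (a common exercise; the helper name is mine):

const fs = require('fs');

function readFilePromised(path) {
  return new Promise(function (resolve, reject) {
    fs.readFile(path, 'utf8', function (err, data) {
      if (err) return reject(err); // the promise becomes rejected
      resolve(data);               // the promise becomes resolved
    });
  });
}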

This might look more complex, and for very simple situations that may be true. But look at the chainable .then and .catch (for success and failure of a promise):
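Using the readFilePromised helper from above:

readFilePromised('/etc/hosts')
  .then(function (hosts) {
    console.log(hosts);
    return readFilePromised('/etc/resolv.conf'); // chain a second async step
  })
  .then(function (resolv) {
    console.log(resolv);
  })
  .catch(function (err) {
    console.error(err); // one catch handles a failure anywhere in the chain
  });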

As you can see this allows for chaining of promises, which creates sequential code. Sweet!

Prior to ES6, promises were supported via external libraries such as Bluebird, Q, RSVP and many others. However, they are now also part of the language itself, as promises are that important.

Promises deserve a post of their own so here is some more reading if you want to dive in and understand them better:

Generators

Generators were not designed to be an asynchronous mechanism per se. Their intent was to bring iterator-like functionality to the language; however, they are often used to create cleaner-looking, synchronous-like code. This builds on the fact that generators can be paused and resumed. Once again, generators deserve a post of their own, so I will add additional reading links at the bottom of this section.

Generators landed in ES6, and can be created by adding a '*' after the function keyword (or before the name, in class members):
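A minimal example:

function* counter() {
  yield 1;
  yield 2;
  yield 3;
}

const iter = counter();
console.log(iter.next()); // { value: 1, done: false }
console.log(iter.next()); // { value: 2, done: false }
console.log(iter.next()); // { value: 3, done: false }
console.log(iter.next()); // { value: undefined, done: true }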

The nice thing about generators is that inside a generator function you can pass control to another generator with yield*, or to a promise / value with yield:
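For example, with the co library driving the generator (re-using readFilePromised from the promises section):

const co = require('co');

co(function* () {
  const hosts = yield readFilePromised('/etc/hosts');      // yield a promise
  const resolv = yield readFilePromised('/etc/resolv.conf');
  console.log(hosts, resolv);
}).catch(console.error);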

As you can see, the code becomes simpler. You can even wrap a generator into a promise easily with a coroutine (Bluebird has one, for example).
As you can see, promises and generators co-exist nicely!

Here is some further reading, if generators are still not clear:

Async / Await

Async/await is sadly not part of ES6; it arrived later (it was eventually standardized in ES2017). The use of generators and promises, while nice, is not very clean. It requires a lot of wrapping, and the intent of generators was to provide an iterator, not an async mechanism. This is where async / await shines, as it is a cleaner way to handle promises and asynchronous code in a sequential manner:

All you have to do is define an async function (with the async keyword), then use the await keyword on your promises, much like the generator yield but with less mess:
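Again re-using the readFilePromised helper from above:

async function printConfigs() {
  try {
    const hosts = await readFilePromised('/etc/hosts');    // await a promise
    const resolv = await readFilePromised('/etc/resolv.conf');
    console.log(hosts, resolv);
  } catch (err) {
    console.error(err); // rejected promises become ordinary exceptions
  }
}

printConfigs();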

As you can see, the code is clean and didn't require any wrapping or generators. Adding just two keywords lets us use promises everywhere (and promises tend to be faster than generators).

Further reading:

Fun Learning about JavaScript and ES6!

Last week the guys at Dev.BG invited me to talk about JavaScript and ES6 Features. It was a lot of fun and I’d like to share some of the pictures from the event and the presentation!

(Photos from the event.)

Presentation in PowerPoint Format

Presentation in PDF Format

The love story that brought me to Sofia

Most times I write tech posts, or professional posts, this time I decided to write something more personal.
I've been asked why I moved to Sofia so many times that I decided to write a blog post about it.

The short answer: FOR LOVE!

The long answer:

Israel

In 2007 I moved back to Israel after many years in Australia. I didn't know it yet, but I was about to spend the next 7 years there working with various startups. Some of my endeavors were more successful than others; it was an interesting journey.

In 2012 I joined a startup as a co-founder, responsible for marketing, sales and biz-dev. The company had 25 products and I recommended we focus on one. As I started to market it, I saw the product was lacking, and knew that with the right app we would get acquired. So instead of marketing I ended up being a developer and weekend marketer. I spent 90% of my time writing browser extensions in C++, building mobile apps, and building websites. Not long after, as I predicted, we got some interest, and I was able to negotiate a very impressive deal to get acquired.

I was happy and proud; I felt I'd made it. However, in a short time everything unraveled. The company had structural issues (I had daily discussions about this with my co-founder long before the acquisition offer arrived, but I was a minority shareholder who had joined him, so it was impossible to put my foot down), and at the same time my engagement to my girlfriend quickly deteriorated. I was even able to raise some funding to try and save our company, but it turned out the investors weren't honest with us. It was a nightmare, and I ended up hitting rock bottom without a company, and with a broken heart…

SF, Europe and other places

I was in a constant self-debate about what to do next and where to go. I was thinking of moving to SF but wasn't sure I was ready for that. I started working with a cool company in SF, building various systems for them, consulting and travelling between Israel, the USA and various other places, as I was working remotely.

Belgrade

On one of my travels I had to go to Stockholm to work with a designer. I happened to miss the direct flight out of Israel, and the only other flight connected through Belgrade. I had a 9-hour overnight connection, and remembered a crazy Serbian guy who had lived in Tel Aviv. We went out that night, and by the end of it I was captivated. Amazing nightlife and the incredible Balkan women, so stunning and friendly: exactly the cure a broken heart needs!

On my way back from Stockholm I booked two weeks in Belgrade to investigate the matter further. I found the Balkan people (Serbians only, at that stage) very nice, helpful, friendly, and intelligent. Being a geek, I decided I had to check out the local tech scene, so I booked meetings with various companies, startup accelerators and whatnot.

Coming from SF and Tel Aviv, I had certain expectations of what tech companies look like. While visiting the various companies I noticed some did seem to have a fun work environment and offices, but most had developers stuffed into rooms like cattle. Many of the offices were gray and sad; I literally felt like this was the place souls go to die. 95% of the companies were doing outsourcing, and the tech community was bootstrapping itself. I was excited to have found an interesting opportunity!

I decided to take a brave step and move there. I flew back to Israel, and in late 2014 I gave back my flat, sold my car, packed all my possessions and moved to Belgrade. I decided to officially move to Serbia, but still visit Israel, the US and several other places until I decided for good where I'd like to spend the next few years of my life.

In November 2014 I left Israel and landed in Belgrade. I was house hunting, connecting with people and getting to know the scene. During my first week, a friend of mine organized a fun night out. She also invited the girl who was about to become her roommate, as a setup for me.
That night my friend's future roommate and I really hit it off, and we have been together ever since. Dragana turned out to be a great listener, always there to support me as I come and go from Serbia and get obsessed with building products and having technology discussions.

In 2015 I was building teams in Serbia, hiring people and flying all over, trying to decide whether I could stay in Serbia, move back to Israel or move to the USA. I was living in Belgrade, but I visited something like 10-12 countries. It was super difficult, and our relationship started to feel the toll.

I was considering staying in Serbia longer, as we already had an apartment there and I was working with people in Belgrade, but the more business I did there, the more I realized how hard it is. While I found great people, the legislative restrictions and a government that makes it very difficult to do business meant I just couldn't live there for the next 5 years, as much as I wanted to.

So what do you do? You have an amazing girl you've found, and you want to build a life with her. You also really like the Balkans, but you cannot do business in the place you live? You look for alternatives.

Sofia

And that's when Sofia came into the picture. I'd been to Sofia many times, had friends there and knew the tech community was super active. Dragana was looking into exchange programs, and Sofia was close enough to Belgrade that she could keep coming back and forth. Sofia is also going through a transformation: the city is re-developing, and I'm seeing many tech guys starting to build products and not just services. It's very exciting to perhaps be part of such a community. I also knew Bulgaria was in the EU, which makes it a lot easier to do business, and on top of all that it has the best ski resorts in Eastern Europe (it's now almost June and I'm in a ski resort; I'm an addict). So in late 2015 we decided we were going for it, and would move permanently to Sofia.

It took a lot of research on my end, and lots of paperwork, frustration and nerves, but in early April 2016 our home was finally Sofia, and no other place. So far I'm really impressed with Sofia: it has 1% unemployment, loads of very talented people, and you can feel the tech community growing. I'm very optimistic about the future in Sofia!

I would like to sign off this post with a big thank you to my girl. She is always there for me, listening to me talk about all sorts of ideas, my work, my constant travels, and my self-obsessed workaholic nature (I tend to work 12-15 hours a day). At times I forget about her, yet she is there to take care of me and give me lots of love and support. Thank you, my love!

How to Remote Debug Node.js

Finding and fixing bugs is not always easy, especially if someone else wrote the code!

I know that engineers in general have NIH syndrome, but I'm one who doesn't share that view. Technology is an enabler: it's not an end goal, it is there to provide a service (or at least that is how it is most of the time).

As such, we must sometimes make fixes to our code, or to other people's code, and that requires debugging. I've seen many people use console.log/logger/printf; heck, sometimes they have even suggested that I do it that way. But as much as I enjoy waterboarding myself, I'd much rather use a debugger whenever I can. Debugging a node.js project isn't complex; it just requires a little bit of setup, after which you can debug a local app or even a remote production/staging/test environment.

The first step is to run node.js with the special debug flag and the optional port:


node --debug        <-- runs node with debugging on port 5858 (the default)

node --debug=4455   <-- runs node with debugging on port 4455 (you can change the port)

node --debug-brk    <-- runs node with debugging, breaking on the first statement

If you’re using gulp/nodemon etc, be sure to include those flags in a separate task and/or pass the relevant params to your node app.

// Nodemon task (assumes gulp-load-plugins exposes nodemon)
var gulp = require('gulp');
var plugins = require('gulp-load-plugins')();

gulp.task('remote_debug', function () {
  return plugins.nodemon({
    script: 'server.js',
    nodeArgs: ['--harmony', '--debug=5577'],
    ext: 'js,html',
  });
});

Then you can launch your app, or do it via the task, and your node.js app is running and allowing any debugger to connect to it.

You can use any node.js debugger you choose. I personally use PhpStorm/WebStorm. While it's not a perfect product and has some issues, I've had very successful debugging sessions with it, and I'll try to outline how to set it up.

First install WebStorm/PhpStorm. Both IDEs are great and very similar, except PhpStorm also allows you to edit and work on PHP files, whereas WebStorm concentrates mainly on JS and web files.

After the install, launch the app and go into the plugins screen:

Go to File->Settings and in that screen click on the plugins menu item.

(Screenshot: the plugins screen.)

Then click on the "Install JetBrains plugin…" button, and in the new window either scroll down or search the top search box for the NodeJS plugin.

Once the install is finished you should have NodeJS support installed, and you can go ahead and open your project's directory in the IDE (File->Open Directory; obvious, I know, but still… 😉 )

In the last step we need to configure the remote config for our node project.

Click on Run -> Edit Configurations… Menu.

And Click on the + button and select Node.js Remote Debug.

(Screenshot: adding a new Node.js Remote Debug configuration.)

Then, in the main window, set up the server address and port (this can be used to debug either a remote machine or a local machine).
And you're all set to start debugging your server!

(Screenshot: the remote debug host/port configuration.)

Then click OK, select the configuration from the top right-hand menu, and click on the little bug icon button:

(Screenshot: launching the debug configuration.)

At this stage you're up and running. If you look at the bottom debug tab you should see you're connected; then you can put a breakpoint anywhere in your code and solve any bug you come across like a hero (at least in theory! 🙂 ).

***** Important note *****

While PhpStorm/WebStorm is wonderful, I've had some issues with debugging performance. The issue relates to a setting in the software, so to make sure you don't get frustrated waiting for the first breakpoint to hit, I suggest you configure PhpStorm/WebStorm as follows:

1) Click on Help -> "Find Action" (ctrl + shift + a)
2) In the search box type: Registry.
3) Then start typing (or scroll down) to find js.debugger.v8.use.any.breakpoint, and turn it off.

Happy Hunting!

What is the secret to successful remote software engineering?

My recent experience is that many companies insist on having engineers on site. When they hear "remote" or "not in the office", many people have a very negative perception: they either believe it means cheap labor, or they believe people must come into the office each day in order to get good results. While I do understand the bad experiences many companies have had, this is not always the case. Many are highly successful with distributed remote engineers, or even a fully remote team.

While there are highly talented engineers all over the world, I see again and again companies that insist on hiring people only from the local ecosystem. It's true that certain skills exist only in Silicon Valley / Tel Aviv / NYC and other places where people have successfully built large companies. However, a large percentage of the work can still be done somewhere the talent is more loyal and costs less, without sacrificing the skill-set of the people. It's very difficult and expensive to hire engineers in SF or NYC or TLV, and since there are so many offers for talented engineers there, retention becomes just as hard as recruiting.

I've been highly successful at finding and retaining talent world-wide. I've also been working remotely with companies for around 6 years, either personally for my own start-up or providing development services to companies. I'd like to share my thoughts on the secrets to making such an environment flourish.

My experience with remote teams

Today my time is split between the US, Israel and Eastern Europe. I've been working for the past 6 years or so in and with remote environments and teams. I've used remote teams to build a complex password manager running on multiple web and mobile platforms; in 4 years it has reached over 70,000 paying customers. I've also been successful at building products for US companies with teams in Eastern Europe, getting results using the latest front-end and back-end technologies.

Working in a remote team as an individual

When I first started out, I had doubts: how does this remote thing even work, if at all? While I'd heard of companies doing it, up until then I was used to waking up in the morning and going into an office. At the time I'd just started working with my new co-founder, whose company had sold over 3 million dollars' worth of mobile software products and had worked with over 20 developers from all around the world. I was fascinated by this. Slowly but surely I saw the way he worked with them and why he was so successful at it. It actually took me a lot of effort to get him to start meeting regularly (we lived 2 blocks apart), and we ended up meeting once every 3-4 weeks in person. We worked night and day and communicated via skype, email and other methods. We built an amazing product together and got some great offers for partnerships and acquisitions.

Working with a mixture of remote and local teams

For the past 2 years I've been working with US-based companies, where most of my development work is done either by me or by teams of people in Eastern Europe and the US. Having built products this way, I know there is a clear difference between a remote single contributor and a remote team. Remote teams are very similar to regular teams, except your co-developers, product managers, or product owners may be in other countries, and you must manage this process. There are many similarities between being in a remote team and being a remote single contributor, so I am not going to go over the differences; I want to focus on the core elements of working with remote teams and single contributors, and what is common to making any remote environment work.

The secrets to making remote work

Finding good engineers is hard, no doubt. But using good engineers remotely requires the remote team, or the remote lead on that team, to have additional skills in order to make it work.

  1. Be Proactive & Driven – This is the single most important quality for any remote engineer or remote team manager. The reason is that when someone is sitting in the office, you can instantly see if they are not engaged, or stuck. You can just tap them on the shoulder and ask: what's up, buddy? Is there anything I can do to help? What are you working on? In remote teams that is not possible, so you need to ensure the person on the other side, possibly in another time-zone, is proactive. He will get on a call at strange local times; he will email you that something isn't working; he will flag that he finished his tasks and needs more work, or let you know he is finishing early even though you planned otherwise. He will be the type of person who taps himself on the shoulder and never needs anyone to chase him. EVER! This type of person will make or break your remote / outsourced / not-in-the-office work environment.
  2. Resourceful – Resourcefulness goes hand in hand with being proactive. When working in a remote team, you will often face integration issues, and integration issues take up a lot of time. The back-end REST API that is supposed to return X returns Y. Break. Your mobile app / front-end app cannot read or write the data and the work cannot continue. Or perhaps it can? While the proactive person would raise the issue, a resourceful one would also find a creative way to continue his work. For example, many times I will create mock data or a mock server when I can't get the back-end to work. This can mean the difference between a 24-48 hour delay in the work and zero downtime, or just 1-2 hours to fix a bug. A resourceful person will find an alternate path to continue his work, create a solution to the problem, or just move to another task. Resourcefulness is highly important for any engineer, but in remote teams it is vital, as it can be the difference between making the remote team work and concluding that remote teams do not work.
  3. Understand Product – Finding a good engineer who also understands product is very difficult. But when working remotely this is not just a nice-to-have; it is vital. Understanding product means thinking in terms of user experience, and of the easiest and most intuitive way to use the application. Many talented engineers can produce great code per requirements or spec, but do not think in terms of what the user needs. When this happens in-house, the product lead can very quickly make a course adjustment: "Hey, I thought that would work, but on second thought let's scratch that and move this button over here." With remote teams these iterations can take more time, so it's important to have someone you trust to adjust course himself; someone who understands what the real requirements are, or what the functional requirements are, and builds the right usability for the user. Even if it's not perfect, the product person will then have a much smaller adjustment to make. Understanding product is not simple, but once you find the right person who can do it, you're setting yourself up for success with remote teams and engineers.
  4. Result Oriented – Most people hate micromanagement, and while sometimes management does need to intervene, in a remote environment this becomes almost impossible. That is why in remote environments your engineer or lead must be result oriented: not focused on completing a feature or ticking off his "workload", but on making sure your business goals are achieved and that his part plays its role in the global scheme of things. A result-oriented person will ask about your business deadlines, when things need to be done by, and why. This kind of person is not just counting the hours worked, but making sure he is helping you get to where you need to be.

TimeZone Issues

I've worked with teams in many time-zones, and when I meet new customers they always raise that concern. I would like to use the end of this post to crush any time-zone concerns people have. Is having developers in different time-zones a challenge? Sure it is! Does it mean it won't work? Not necessarily. If you've found a good engineer or engineers with the skills I've listed, you won't suffer from time-zone issues. These types of people are leaders: they will work at many hours that overlap with yours, answer emails at 2am their time, and jump on calls at strange hours, because they are committed to your success. Furthermore, how often do you really need to talk to your engineer 8 hours a day? Most of the time you'd rather not, and if you are, you might be hurting your own performance at the same time…

I'm a big believer in remote teams; done right, they are a wonderful asset. The right team or person can build you amazing software that works very well. It's all a matter of understanding how to make it work, and what to look for. I hope this helps, and feel free to contact me if you have any questions about creating a successful remote software team.

Authenticating with Ember-Simple-Auth

Recently I set up an ember project and I needed authentication. There is an excellent library / CLI plug-in called ember-simple-auth. While the guides are great, the most recent versions of Ember (1.13 / 2.0) don't play nicely with the latest ember-simple-auth.

Ember-Simple-Auth provides a special branch called jjAbrams (PR 602). While it is an awesome library, getting it to work can be somewhat tricky, as not everything is as documented and it requires some tweaks here and there. I'm outlining what I did in the hope it will save time for many others, and help prevent other devs from banging their heads against their keyboards, posting issues on git repositories or IRC channels, or reading the simple-auth source code (as I did) to understand why things don't work the way they are supposed to. Especially if you're running against a custom server, like I did.

Here are the steps to get it working.


  • First create an ember app using ember-cli

  • Then follow the instructions on ember-simple-auth for how to get this special build. It's rather simple, but you still need to make sure you don't have any of the ember-simple-auth / simple-auth packages in your package.json / bower.json, and also delete them from your node_modules / bower_components directories. Here is the pull request (make sure to read it, as it explains how to get set up): https://github.com/simplabs/ember-simple-auth/pull/602
  • Next add a login page, a login controller, and a protected page. Notice I'm not using the LoginMixin as in many Ember-Auth examples, since it's stated as deprecated; I've also chosen not to use the Unauthenticated mixin (just because I'd rather leave that page always accessible).

  • edit the login controller file:

    and the protected page route

    then edit the login template (I have some bootstrap styling here)

  • Next create an authenticators directory and a custom.js (it authenticates login requests and restores the session on refresh).
    Notice I use the same name everywhere (access_token), as once you resolve when authenticating, it stores all the info in the session.

  • Then add an initializer for the authenticator

    and the file itself:

  • And now create an authorizers directory and a custom authorizer
    Notice that the info is accessible via that.get('session.authentication') (not session.secure as in other documentation!).

  • Then add an initializer for the custom authorizer:

  • Here is a Git Repo with a working example https://github.com/DoryZi/SimpleAuth

    This hopefully will get you up and running! enjoy 🙂

    Choosing an MVC framework: Why I picked Ember & why it was an awesome decision

    When I started working with Originate on a new project, I was given the freedom to choose my own stack. The CEO's requirements were simple: create an MVP to replace the current process of resource planning they use today. This meant replacing their Excel-based process with a web app. I told the CEO: don't worry, I'll get something out to you in two to three weeks.
    So there I was, staring at my computer, having absolute freedom to choose my tools. I should’ve been happy.

    I wasn’t. I was stressed. I really wanted to impress them, to use the latest technologies.

    I knew JavaScript very well, and I’d done a few node projects, so I picked that. I didn’t have too much relational data, so I picked Mongo.

    But which front-end framework should I use to make a snappy single-page app?

    The Decision process:

    I really wanted to pick the best one, but as getting hands-on with a framework can take anywhere from a few days to weeks, I just didn’t have that time to spare.

    So which one should I pick? Angular? Knockout? Backbone? Ember? Meteor? I was in a bit of a pickle.

    I've used many technologies over the years, but I hadn't really experimented with Angular, Knockout, or Ember. I had used Backbone, but it's just too messy for my taste and not really structured.

    So I started reading about all the technologies.

    The instant ‘No’s:

    • Meteor seemed amazing – really smart people building a great concept. The idea of front-end and back-end in the same system seemed incredible. I had actually written similar code in one of my past projects to move models from the back-end to the front-end, so I instantly connected with the concept. However, it was still very early days for Meteor and no big production environment was running it at the time, so it was out. Checking on them recently, I noticed they seem to have made great strides and are doing very well; that's one tech I'd keep my eye on.
    • Backbone – I never liked Backbone that much and I’m not a huge Marionette fan either. I just felt it’s too messy for my taste.

    Now there was the tossup between the ones that were left:

    • Angular– the most popular one at the time, with lots of support due to it being a Google product. However, I’d read complaints about it using its own idioms and engineering concepts that one must learn in order to use it. Still, it was the most popular and most commonly used MVC, so I was not going to dismiss it easily.
    • Ember – As I was reading about Ember and doing my background investigation, I learnt that out of all the frameworks Ember is the most opinionated, and that using it requires following a very particular structure, whereas Angular is more configuration-based. I also learnt that Ember is stable and used by quite a few serious companies.

    My decision wasn’t simple, but ultimately I picked Ember for two main reasons:

    • Ember and Angular seemed to me the most stable, and the ones that have some serious customers using them in production.
    • Ember's convention over configuration. Ember forces you to follow a particular structure, which can be a very good thing, as it makes many of the engineering decisions for you. Plus, no one has to be the "code police" making sure you put files in particular directories or give classes certain names. I liked that; and since I was not only the engineer but also the project manager and team lead, and needed to put more developers from various time-zones on the project, I thought that would be awesome. Boy, was I right.

    Getting Started With Ember:

    Now, once I’d chosen my tools it was time to start getting some hands on experience in building my MVP.

    Naturally I started with the official Ember tutorial. However, their tutorial was based on Ember-Data. Ember-Data is an ORM-like layer that is separate from Ember. It provides the front-end with models and all related operations, and uses dependency injection to inject an object called the "store" into all parts of an Ember app. While the concept is wonderful, at the time I was starting out Ember-Data was separate from Ember, and they had mentioned it would soon make its way into the main branch. As such, I decided it was too big a risk to rely on it, and developed my own data layer using my own store. At the time of writing this post it still has not made its way in, and has gone through many breaking changes. As such, this decision was spot on.

    The first week with Ember was painful. Convention over configuration means there is the Ember way, and only the Ember way. Trying to do things in a non-Ember manner just creates frustration and will get you nowhere, so the best approach is to understand what Ember expects and do it that way. For example, say you'd like to update certain elements, hide or show things, etc. While in traditional jQuery you'd just bind to an event, the Ember way would be to add an action to some element.

    Ember a couple of months in:

    Once you do understand how Ember works, and what to do, you become super productive and you can crank out code in a fraction of the time it would otherwise take you.

    Ember has a very particular way for all the objects in the system to interact. It gives you an excellent router, where each route has a very particular point at which it loads the model (either a stand-alone model object, a POJO, or an Ember-Data model); a controller object that wraps your model, can provide various computed fields, connect various models together, handle actions and more; and the view that wraps the templates, which use HTMLBars (ex-Handlebars) templates. Here is an anatomy diagram:

    (Diagram: Ember anatomy.)

    Ember really gives you a framework to do great stuff in a very easy-to-use manner (once you understand the manner as Ember perceives it); I found myself completing whole pages and complex UI components at lightning speed, with everything re-usable in other places later on. Once you're up to speed with Ember, you're going to love it (or at least I hope so).

    Ember Decision Epilogue:

    My Ember journey has been very exciting. It started with Ember 1.3, and now I'm migrating to Ember 1.10 with ember-cli. I have written a complete data layer for my Ember app, and the application today is 150,000 lines of code, used by a significant company in production. I have gone through performance optimization and made a few interesting observations about Ember:

    • Us geeks tend to love technologies for technological reasons. But the reason I love Ember is a business one: convention over configuration. It means you can move new engineers on and off a project, and as long as they understand Ember they can become productive very quickly.
    • During my work on this project, I assisted the company with implementing a new consulting agreement. Having the customer use Ember made it very easy for me to detect design flaws, and to do isolated re-factoring, rather than a complete throw away!
    • The company had another codebase that everyone suggested was a throw-away. The code was indeed poor in many ways, but since it was Ember it was easy to salvage, re-factor and shift snippets around to make it usable, saving the company months of work.
    • I have hired a few engineers to work on this project, from multiple places around the world. I was amazed at how fast they picked up Ember; even though they are junior, Ember has allowed them to be highly productive from day one.
    • The ember-cli build system is an awesome addition that takes care of so many things; no other MVC framework provides anything like it.

    I'm finding more and more companies using Ember, and I'm a big advocate of convention over configuration, and of Ember along with it. True, there is the downside that Ember has forced its users into constant re-factoring of its code-base, but I suspect this is still better than Angular 2.0's "throw everything away"! Also, when using the latest and greatest you should expect things to change, and overall Ember provides developers and product owners with a lot of bang for their buck. I think it's the more complete option and my #1 choice of front-end MVC for CRUD-like applications.

    Thanks for reading and I hope you've found my experience helpful.
    Cheers
    Dory 🙂