Load Testing LLM Traffic: Locust + Custom Router

Load Testing the OpenAI LLM Completions Endpoint with Locust

Hey there! Ready to put the OpenAI LLM Completions Endpoint through its paces? Let’s dive into how you can load test this bad boy using Locust, the OpenAI SDK, and a custom router. Don’t worry, I’ve got your back every step of the way. Let’s do this!

Step 1: Install the Necessary Goodies

First things first, we need to grab some tools. Open up your terminal and run:


pip install locust openai

Boom! Now you’ve got Locust and the OpenAI SDK ready to roll.

Step 2: Set Up OpenAI SDK

Now, let’s tell the OpenAI SDK who’s boss by giving it your API key:


import openai
openai.api_key = 'YOUR_API_KEY'

Don’t forget to replace 'YOUR_API_KEY' with, well, your actual API key.

Step 3: Create a Custom Router

Time to get fancy with a custom router. This little buddy will handle all the requests to the OpenAI completions endpoint, including retries and collecting detailed metrics.

Custom Router Class:


import openai
import time
import logging

class OpenAIRouter:
    def __init__(self, api_key):
        self.api_key = api_key
        openai.api_key = self.api_key

    def get_completion(self, prompt, model="text-davinci-002", max_tokens=50):
        try:
            response = openai.Completion.create(
                model=model,
                prompt=prompt,
                max_tokens=max_tokens,
            )
            return response
        except openai.error.RateLimitError as e:
            logging.warning(f"Rate limit exceeded: {e}. Retrying after a delay.")
            time.sleep(5)  # Back off for 5 seconds
            return self.get_completion(prompt, model, max_tokens)
        except openai.error.OpenAIError as e:
            logging.error(f"OpenAI error: {e}")
            return None
        except Exception as e:
            logging.error(f"Unexpected error: {e}")
            return None

This code ensures we’re handling errors gracefully, like a pro.
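The fixed 5-second sleep above works, but an exponential backoff with jitter is gentler on a rate-limited API and avoids retry stampedes. Here's a sketch of a generic retry helper you could wrap around `get_completion` (the helper is our own, not part of the SDK):

```python
import logging
import random
import time

def retry_with_backoff(func, max_retries=5, base_delay=1.0, max_delay=30.0,
                       retryable=(Exception,)):
    """Call func(), retrying on retryable exceptions with exponential backoff."""
    for attempt in range(max_retries):
        try:
            return func()
        except retryable as e:
            if attempt == max_retries - 1:
                raise  # out of retries, let the caller handle it
            # Double the delay each attempt, cap it, and add jitter so many
            # concurrent clients don't all retry at the same instant.
            delay = min(base_delay * (2 ** attempt), max_delay)
            delay *= random.uniform(0.5, 1.5)
            logging.warning(f"Attempt {attempt + 1} failed ({e}); retrying in {delay:.1f}s")
            time.sleep(delay)
```

For example, `retry_with_backoff(lambda: router.get_completion(prompt), retryable=(openai.error.RateLimitError,))` would retry only on rate-limit errors.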

Step 4: Integrate the Custom Router with Locust

Let’s plug this router into Locust. Time to unleash the power!

Locust Test Script Using Custom Router:


from locust import HttpUser, TaskSet, task, between
import logging
from openai_router import OpenAIRouter

class UserBehavior(TaskSet):
    def on_start(self):
        self.router = OpenAIRouter(api_key="YOUR_API_KEY")

    @task
    def test_completion(self):
        prompt = "Once upon a time"  # Mix it up in a real test!
        response = self.router.get_completion(prompt)
        if response:
            logging.info("Completion received: %s", response.choices[0].text.strip())

class WebsiteUser(HttpUser):
    tasks = [UserBehavior]
    wait_time = between(1, 5)
    host = "https://api.openai.com"  # required by HttpUser; actual calls go through the SDK
Step 5: Run the Test

Alright, let’s light this candle. Run the Locust test with:


locust -f locustfile.py

Fire up your browser and head to http://localhost:8089. Configure the number of users and the spawn rate, then sit back and watch the magic happen.

Step 6: Monitor and Analyze Results

While Locust does its thing, keep an eye on:

  • Response Time: How quickly are we getting answers?
  • Success Rate: How often are we hitting the mark versus crashing and burning?
  • Throughput: How many requests are we churning through per second?

Locust’s web interface will show you all this in real-time. It’s like watching a thrilling data-driven movie!

Step 7: Optimize and Iterate

Found some bottlenecks? Time to tinker:

  • Scale up your resources.
  • Tweak your prompt handling.
  • Improve network configs.

Run the tests again to see if you’ve made things better. Rinse and repeat until you’re happy with the results.

Bonus Tips for Smooth Sailing

  • API Rate Limits:
    • Respect the rate limits, or face the wrath of throttling. Implement client-side rate limiting and handle those “slow down” messages gracefully.
  • Resource Management:
    • Don’t hog all the resources! Run tests in an isolated environment or on dedicated hardware.
  • Scalability:
    • For massive loads, go distributed. Use a master node with multiple worker nodes to really push the limits.
  • Data Variability:
    • Mix up your prompts to simulate real-world usage. Don’t be that person who only tests with “Hello, world.”
  • Logging and Monitoring:
    • Log everything! Monitor everything! Use tools like Grafana and Prometheus to keep tabs on performance in real-time.


Using Locust with the OpenAI SDK and a custom router is like having a supercharged toolkit for load testing the OpenAI LLM Completions Endpoint. Follow these steps, keep tweaking, and you’ll ensure your endpoint can handle whatever you throw at it. Happy testing, and may the load be ever in your favor!


The broken state of software developer mentorship

Software developer mentorship today has issues. Hiring junior software developers and mentoring them to become mid and senior developers is an important goal for many if not all companies. In an ideal world, we would see a lot more mentorship happen in practice. However, we do not live in an ideal world.

In the real world, some software developers have limited mentorship and support in their workplaces. The sad reality is that most junior software developers must figure things out on their own. Today I’ll strive to identify the main constraints that hinder software developer mentorship and how to tackle them.

Why is Software Developer Mentorship Broken?

For one, junior, mid, and senior software developers must complete tasks each sprint to move up. This creates time constraints and limits mentorship and learning time. Today we’ll discuss this conflict. We’ll also explore the current state of software developer mentorship, why it’s broken, and how to fix it.

As you read this post, most junior software developers are left out there to figure things out on their own. Either that or they get very limited software developer mentorship and support in their workplaces.

Mentoring junior developers is a noble idea. I wish that a lot more of it were happening. Companies want to invest in developer mentorship, but sadly there are many reasons preventing that from happening in practice.
There are some very serious and common issues. While some have to do with the individual software developer’s personality, most can be fixed.

I’m very confident that it’s the companies themselves in a lot of cases that are the main obstacle preventing software developer mentorship from happening. Here are some common constraints or limitations that hinder successful mentorship from taking place:

  • Company culture – Short-term business goals
  • No manager – The senior software developer predicament
  • Developer personality/ego

Company Culture – Short-term Business Goals

The biggest hurdle in software developer mentoring is the company itself: its culture and its support for setting up a mentoring framework. When hiring, most places will ask senior software engineers how open they are to mentoring junior developers, or to tell them more about how they mentor, among similar questions. Yet very few companies actually create a supportive culture of learning that allows junior developers to grow. Most companies don’t wish to hinder mentorship; the main reason for its absence is business goals.

The Software Developer Mentorship Killers

  • Scrum & Velocity – Many companies today work using Scrum, which means each developer needs to complete certain tasks or work items in a given sprint. While this allows companies to align business objectives nicely in dedicated sprints and measure their team velocity, it also creates a big downside. It directs junior, mid, and senior software engineers to focus on their short-term sprint-related tickets and on getting things working. They are measured against that, and the focus shifts to completing the task rather than on longer-term goals. This means that they push teaching (helping other developers) and learning (re-writing your code) aside to a lower priority. Even when teaching and learning do happen, I’ve found many junior or mid developers to be less open to feedback or re-factoring of their code. They’re more set on wrapping up their work and moving on to the next priority.
  • Kanban with Sizing – While it’s more apparent in Scrum, even in Kanban when using measured tickets, which many companies do, the software developer is focused on the estimated time for that ticket, leaving little time to guide someone else. Yet again, focus shifts towards delivery rather than towards learning and re-factoring.

How to Fix It

The biggest and hardest fix for this situation is a shift in company culture and thinking. The company has to compromise and set up a framework for software developer mentorship success.

There are a few ways to achieve this. They all require change, compromise, and focus on long-term individual software engineer success, not just on features and instant gains.

I’m not saying that the company shouldn’t produce and be agile and fast to iterate, but the company has to make certain investments in their developers.

Starting is Easy!

The good thing is that you don’t need to introduce this company-wide from day one. You can try it with a small team or even a sub-team. Try to experiment with even two developers, a junior and a senior. It’s very simple to begin. Here are a few tangible ways to do it:

  1. Mentor Tickets – Create specialized mentorship ticket tasks. Just like there are “story”, “epic”, “bug”, “spike”, and “task” items, I suggest we also add “mentor” type tickets. These are work items that we expect a junior developer to take on and learn together with a senior developer. These should be real tickets of work that needs to be done. However, these tickets would be different in many ways:

    • These are tickets for two people. So, if you’re using Scrum, you can have the developers split it up to mentor and mentee tickets.
    • The time estimates of these tickets are flexible and might require double or triple the time of a regular ticket.
    • You can easily expect these tickets to include one or more re-factors during the ticket.
    • These tickets can also be broken down further if a senior gives “homework”, such as tasks to investigate and learn.
    • The delivery of these tickets is just as important as the learning process and the growth achieved through these tickets.
      At a later time, I’ll discuss how to approach a mentorship ticket, what to do in one, and how to conduct a successful mentorship.
  2. 10% Improvement Ticket/Time – For those who prefer a more casual approach, allocate 10% to improvement/growth time. In this approach, 10% of each sprint is allocated as a ticket, task, or block of time. This allocation allows developers to learn, read, and mentor each other. It might include things like reading about a specific design pattern and trying to implement it along with a senior member. Using this approach, you essentially allow your team to decide how to spend this time and with whom.
  3. Good Team Mixture – Another issue is your team mixture. If you have five senior software developers and one mid, that might not be the ideal setup for mentorship success. Such a team would be wonderful to work together on tight timeframes and urgent complex tasks, but to facilitate learning, you should create the biggest knowledge gaps. Have a ratio of at least two seniors to one junior software developer. Avoid too many mids (or ideally any mids) in this group.

    Ideally remove mids from the mixture to start with

    Mids tend to feel they are just like seniors and might resist mentorship. On the other hand, that might misguide juniors with a lack of experience or knowledge. The best thing to do is to have a large enough group of seniors that can still produce results and have juniors that are eager to learn and do some joint tickets together. This will create a harmonious team mixture while still allowing you to get business results from that team.

  4. Start Small – Grow Big – Any huge company culture shift is almost always doomed to fail. People tend to be resistant to change, especially when said change has yet to happen. The great thing about this is that you can start with a group as small as two people, a junior and a senior, and assign them 1-2 mentor tickets. See how they manage through those. You might find that the senior developer is still free to do his work while the junior is happy and excited to learn and grow.
  5. Either way, experiment – Just please remember that during this experiment you must remove all burden of deadlines from the equation. You can reintroduce them as the mentor and mentee become more comfortable with these types of tickets. Also, you want them to get “wins” on the board to feel comfortable and confident. If the experiment goes well, bring more team members on to do mentorship tickets.

No Manager – The Senior Software Developer Predicament

The second issue that I’ve come across is when the senior software engineer and junior software engineer are on the same team as peers. While the junior might respect and admire the senior, and while the senior has every desire to teach and mentor the junior, there still might be a gap.

As a peer rather than manager, the senior cannot ask the junior to re-factor, re-do, or follow his guidance. All of this is based on the deadlines, desires, and wishes of the junior developer.

Understand your junior engineer better

The junior may sometimes ask the senior for help, but only in cases when he is completely blocked or unable to perform. While that might seem like a good way to do the mentoring, it’s not.

You don’t want your junior software engineer going wild and only asking for help when he becomes blocked. I faced this issue myself many times. In these cases, I would see bad code and would try to help a young developer, only to find that it’s easier just to take that bad code and re-write it myself.

Understand your senior engineer better

It’s all because people and companies align their goals toward delivering features and completing tasks rather than learning, creating good code, and avoiding technical debt. As a senior software engineer, I can say that when you’re someone’s direct manager, it’s simple to mentor. It’s much harder when you’re just their peer.

How to Fix It

First off, it’s important to get company support. Just as I outlined before, if dedicated software developer mentorship tickets and a proper framework existed, perhaps some of the focus would shift.

It can be the goal of a junior software engineer to learn and produce good code, and that can align with the company’s goals. I cannot imagine how much companies pay later for bad code.

Company support is crucial

If the company sets up targets for junior developers to learn and produce better code, and allocates the time and resources to do so, the output of the team will be better. The company can follow the suggestion of trying out mentorship tickets, which focus on learning with outcomes. It can also set up any other framework it feels works for it.

I believe that focusing on aligning personal goals to company goals can help resolve this issue as well.

Developer Personality/Ego

Software developers come in all sorts of personalities.
It’s common to come across a junior developer that is very intelligent, bright, and promising.
It’s also very common to come across developers with 1-2 years of experience who have gained some traction and feel very confident. While these are just examples, I’ve seen many types of developers, from junior to mid, closed off to feedback, especially in feature/velocity-focused companies.

Many times, I’ve come across various developers who just want to produce something. Their ability to accept feedback is limited, as is their willingness to hear other options.

This creates a problem, as you have people producing hard-to-read, hard-to-maintain code. Since the organization is chasing features/velocity, no one stops to say that’s not how it’s supposed to be done.

If the person noticing this is also a peer of the junior or a mid developer, he has limited authoritative power, aside from “telling the boss”.

How to Fix It

Fixing the organization and its approach to code would also address this issue well. You should empower your developers to think about code quality. Dedicate time to improvement, choose mentorship tickets, or take any other route to improving quality.

Helping everyone on the team work together towards quality and becoming better software engineers and developers will communicate to everyone that you also measure how you produce, not just what you produce.

Review PR to understand openness

Review the PRs, see how open or not people are to feedback, and have those who are resistant to change work with people they appreciate on the team. You can craft your culture to help people move on from their mindsets.

It’s not an easy task; it requires out-of-the-box thinking. If, after all you do, a certain person still doesn’t react positively to feedback, I suggest re-thinking his place on your team. Negative and uncooperative people are toxic to your whole team. I’m not suggesting letting someone go; I’m just saying your team must constantly strive to be better.

People must be open and have the right framework to learn and develop. Just like a business must grow and expand to survive, so must truly great engineers.

Final Thoughts on Software Developer Mentorship

I tried to outline some issues we’ve seen in the workplace in multiple companies. Personally I enjoy mentoring.

Sense of satisfaction

There is a sense of satisfaction when you work with a junior or mid-level software engineer or developer and help him take complex code and make it simple.

A sense of mastery and accomplishment is highly important for people to feel good and be creative, as well as for keeping developer retention high. Good developers won’t leave a company that makes their personal development part of its corporate strategy.

In future posts

In one of my future posts, I’ll talk in more depth about how to handle mentorship tickets or improvement time. I’ll try to better help senior engineers and managers to think in terms of how to get work done and mentor at the same time. We will discuss techniques to implement and ideas for how to approach software developer mentorship on the micro level.

Traveling Workstation. Nomad Ergonomics. The Ideal Setup.

Nomad ergonomics, or the traveling workstation, addresses how to set up a proper workstation on the road.
If you’re a software engineer, designer, salesperson, entrepreneur, or any other road warrior or digital nomad, it means you’re on the road a lot and you need to work from multiple places.
This often means slouching over a laptop in a coffee shop or at a hotel desk.
That creates stress on your whole body, from your neck and back to your wrists.
As such, you need to think about nomad ergonomics.

The Proper Nomad Ergonomics / Traveling Workstation.

Understanding the recommended and correct seating and setup when working as a software developer is very important.
There are many good resources and videos that explain how to set up properly.
Here is an example of how to set up correctly:

ergonomics for software developer

If you wish to read more, here is a suggestion from Microsoft on how to set up your office.

However, here are the basics:

  • monitor at eye level.
  • armrest support at 90 degrees.
  • adjustable-height chair.
  • legs at 90-110 degrees.

All very easily done at home or in an office, but what about when travelling?

The search for the travel workstation setup

The first item I miss the most from the home setup is not the overall nomad ergonomics but the keyboard.
At home and in the office I use the Microsoft Sculpt keyboard.
It’s a wonderful keyboard, and once you get used to its natural layout it’s very comfortable and much softer on the wrists.
So I decided that when travelling I’d like the same setup.

Initially, I had a pretty decent-sized backpack, so I tried to travel with a whole keyboard in it; however, that didn’t fare well.
The keyboard soon stopped working properly, with keys getting pressed constantly, draining the batteries of both the laptop and the keyboard.
In the end the keyboard itself failed, forcing me back to the nomad ergonomics drawing board.

The Keyboard – GoldTouch Go!2

After some research I found the GoldTouch Go!2 Keyboard.

This keyboard can act as a regular keyboard:

simple travel keyboard

Or as an ergonomic keyboard with an almost unlimited number of configurations. Just pull the lever and adjust to what’s most comfortable for you.

flat split travel keyboard
ergonomic travel keyboard

I found it super handy. It folds nicely to half its size, which means it can be stored in a backpack of almost any size.

folding travel keyboard

The Stand – The Roost

Once I’d solved the ergonomic keyboard, I was still faced with an issue:
how can I position the small laptop monitor at eye level, or close to it?
I tried various stands, but those were either uncomfortable or didn’t fit in a bag.
I even experimented with some cardboard boxes.

That’s until I came across the Roost Stand.
The Roost stand folds into a stick-sized package, making it ideal for travel, and when set up it props your laptop up at eye level.
Their version 2.0 also has multiple height adjustments:

Please note that the Roost is very expensive, and I’ve had a few friends order Roost-like stands from AliExpress:
NextStand on AliExpress

However, I would still opt for the Roost; it was the original and I believe it’s better made. So I would prefer to put my laptop on the best stand possible.

Traveling Workstation – Last Tips

This, along with any wireless mouse, lets me set up shop anywhere with a pretty ergonomic setup.
Many cafes don’t have armrests, but I check with hotels prior to booking, and many do offer height-adjustable seats with armrests.
Other than that, try to sit as close as possible to the table, which also prevents slouching.

Overall, I’m very happy with this nomad ergonomics setup; it’s light and easy to use.
The only downside is that people might look at you funny when you set up a workstation area in a cafe!
Hope you have found this useful!

Managing Tech Debt

Dealing with Tech Debt

In the previous post I outlined exactly what technical debt is, how it’s created, and why it’s worse than financial debt.
Today I’ll talk about solutions: how to make progress with managing tech debt?
How can we reduce our technical debt, and how do we do that without impacting the business?
Let’s dive into it.

Managing Tech-Debt – Identify and Prioritize

The first step in managing technical debt is to identify where the biggest issues are. While this might sound obvious, I’m not talking about where the worst code lives.
You should find the issues that are most impactful to the business. Many times technical people look from a tech perspective and not from a business perspective.
This means you need to ask questions, ideally of a business stakeholder such as the product owner, or of yourself if you know a lot about the business:

  • What is slowing us down the most today?
  • Where is our biggest bottleneck today? Where are our biggest slowdowns?
  • How can we impact the business the most? Sales? Improving customer service?
  • What about your interactions with Tech & IT frustrates you the most?

As such, we should identify which system change would make the most significant impact.
Try to review all the current tech issues you have and rank them according to priority.
Make a list of all issues you have while thinking about the business impact first.

Tech Debt Issue Lists

When you’re writing out the list, try to be as detailed as possible. This means items like “change architecture” or “replace infrastructure” are not detailed enough.
You want to strive to create a detailed list that can be easily converted into work items / Jira items / etc.
For example, something like: “Re-factor / re-write the order processing module. services.py is 3000 lines long and should be broken down and re-written. Many functions, such as calculate_price, have too many flows and are over-complex. Remove duplicate code and clean up.”

is much better.

Cost Evaluation

As you’re writing out your debt list, either while writing or after, review it and find a way to attach a cost to each item.
You can use any scale you want, such as Jira points, days of work, dollar cost, etc.
This is important, as cost is the second part of the tech prioritization.

After you’ve compiled the list, you should have a business-impact-first, cost-second analysis.
This is your tech “re-payment” analysis, and at this stage we are ready to tackle the tech debt head on.
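To make the impact-first, cost-second ranking concrete, here is a minimal Python sketch; the items, scores, and scales are invented for illustration:

```python
# Each debt item gets a business-impact score (higher = more important)
# and a rough cost estimate (here in days of work).
debt_items = [
    {"name": "re-write order processing module", "impact": 9, "cost_days": 15},
    {"name": "remove duplicate pricing code",    "impact": 7, "cost_days": 3},
    {"name": "add tests to billing service",     "impact": 7, "cost_days": 8},
    {"name": "rename cryptic helper functions",  "impact": 2, "cost_days": 1},
]

# Highest business impact first; among equal impact, cheapest first.
repayment_plan = sorted(debt_items, key=lambda i: (-i["impact"], i["cost_days"]))

for item in repayment_plan:
    print(f'{item["impact"]:>2} impact / {item["cost_days"]:>2} days - {item["name"]}')
```

The printed list is your "re-payment" order: the business decides the impact scores, engineering supplies the cost estimates.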

Automated testing – respecting the contract

I’m not a big fan of throwing out code just because it’s badly written.
Don’t get me wrong, sometimes it does make sense to chuck a whole piece of code away.
However, in most cases code can be refactored into something that’s nice, cohesive, and maintainable.

It does not matter if you’re a Python developer using unittest, a Node.js developer using Mocha/Chai, or a user of whatever other framework / language.
What does matter is that before you start re-factoring code, you want to set up unit tests, and perhaps also integration and end-to-end tests.
Maybe your code base already has those; however, most “tech debt” code doesn’t.
We will discuss unit testing strategies in another post.
You can read about unit testing in Node here.
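As a sketch of “respecting the contract”, a characterization test pins down what legacy code does today, so a refactor that changes behavior fails loudly. The function and numbers here are made up for illustration:

```python
import unittest

# Hypothetical legacy function we want to pin down before refactoring.
def calculate_price(quantity, unit_price):
    total = quantity * unit_price
    if quantity > 10:
        total = total * 0.9  # volume discount buried in the flow
    return total

# Characterization tests: assert the current behavior, not the ideal one.
class TestCalculatePriceContract(unittest.TestCase):
    def test_small_order(self):
        self.assertEqual(calculate_price(2, 5.0), 10.0)

    def test_volume_discount_above_ten_units(self):
        self.assertEqual(calculate_price(20, 5.0), 90.0)
```

Run the suite with `python -m unittest` before and after every refactoring step; green tests mean the contract still holds.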

Re-factor, Re-factor

At this stage, you’re ready to begin the clean-up. You need to re-factor your code, ideally with someone very experienced doing it.
This is not for the faint of heart; you’ll be making lots of changes and potentially introducing regressions.
However, it’s a cost you must pay. Here are some tips to focus on:

  • DRY – Find code duplication and merge it into functions.
  • Review complex, long functions; break them into small ones; strive to make code read like English.
  • Split complex functions with switch-like params into several stand-alone functions.
  • When a complex function is needed, make it private and expose simple interface functions that call the complex one.
  • If you can’t understand what a piece of code does, try to break it down as part of a re-write.
  • Make sure function names make sense; if not, change them, preserving old names during the transition with proxy functions if needed.
  • Make sure the code is now easy to understand. Imagine you’re seeing this code for the first time: does it make sense?
  • Manage complexity by breaking code down into small bite-sized chunks, either new classes or functions.
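One of the tips above, splitting a function with switch-like params into stand-alone functions, can be sketched with a hypothetical pricing function:

```python
# Before: one function whose `customer_type` parameter switches between flows.
def calculate_price_old(amount, customer_type):
    if customer_type == "retail":
        return amount
    elif customer_type == "wholesale":
        return amount * 0.8
    elif customer_type == "partner":
        return amount * 0.7

# After: each flow is a small, named, stand-alone function that reads like English.
def retail_price(amount):
    return amount

def wholesale_price(amount):
    return amount * 0.8  # standard wholesale discount

def partner_price(amount):
    return amount * 0.7  # partner agreement discount
```

Callers now state their intent directly (`wholesale_price(total)`) instead of threading a mode flag through the code.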


We hope this helps in setting up your strategy for managing technical debt. We will be doing a follow-up post focused only on re-factoring.
In the meantime, if you need any assistance or help with managing your technical debt, feel free to reach out to us.
Thanks and until the next time!
The CoreTeam.io crew.

What is Technical Debt And Why It’s Worse than Debt

It does not matter if you’re a Node.js developer, a Python engineer, a React / front-end wizard, a product manager, or the CEO. Good chances are you’ve heard of technical debt. And if you have not, or you lack any understanding of it, I guarantee you’ve been on the receiving end of it without realizing.

In this post I’ll try to outline technical debt and help you understand what it is and why it’s actually worse than debt. In our next post we’ll deal with how to manage it.

What is Technical Debt?

A quick way to understand technical debt is to look it up.
According to Wikipedia, the definition is:

 a concept in software development that reflects the implied cost of additional rework caused by choosing an easy (limited) solution now instead of using a better approach that would take longer.

However, technical debt, in my view, is much broader. It’s any code that has issues such as, but not limited to:

  • Is not easy to maintain
  • Hard to read
  • Has code duplications
  • Lacks testing
  • Not well engineered
  • Has very long functions doing many different things
  • Is not modular and compartmentalized, whether functional or OO
  • Many others…

Types of Technical Debt

While it’s possible to categorize many types of technical debt, I’d like to bucket them into the two types you most likely have in your systems today.


Deliberate Technical Debt

The Wikipedia definition identifies that type of debt. It’s when we logically make a decision to under-engineer or hack a solution together, rather than build software that is well crafted, designed, and easy to maintain.
There are many legitimate reasons to do that (i.e. experimenting with a new feature, temporary solutions, proof of concept, deadlines, and more).


Accidental Technical Debt

Wikipedia does dive deeper and shows a quadrant of types, which is nice and a bit more helpful. But I believe the underlying assumption that technical debt is a choice is flawed.
Much technical debt, in the best-case scenario, is a result of knowing better afterwards, as identified in the inadvertent quadrant. We always know better in retrospect
(i.e. we have a much better understanding of how a piece of code should be used after we have written it and sent it out into the wild, a bit like the “lean startup”).

However, I think that even that makes a generous assumption. A lot of “accidental” debt is generated by bad software developers (experienced as they might be),
by a lack of good software design practices (for example rushed code reviews, not following the DRY principle, inventing rather than using existing design patterns),
by junior developers writing code that “works” but is never properly reviewed, by multiple contractors working on the code, by technical decisions made by managers who lack technical knowledge or don’t listen to their technical team,
and many more reasons.

Why is it worse than debt?

Ward Cunningham coined the term technical debt back in 1992. The term draws a similarity to financial debt.
In that regard, you’re “borrowing” time by implementing quick/bad code today, which will need to be repaid in future re-work.
While this all sounds lovely and novel, it’s far from reality.
In fact, when you have a better understanding of technical debt you’ll realize that it’s considerably different from, and worse than, financial debt.
Financial debt is very well defined; technical debt is not:

|                          | Technical Debt                                   | Financial Debt                             |
| ------------------------ | ------------------------------------------------ | ------------------------------------------ |
| How accumulated          | Intentional / accidental                         | Only intentional                           |
| Is avoidable             | No                                               | Yes, you can live without debt             |
| Amount “borrowed”        | Not clear                                        | Exact, documented in contract              |
| Repayment schedule       | Not clear, not defined, not easily quantifiable  | Exact, documented in contract              |
| Typical interest rates   | Not defined. Can be 10x or more!                 | Well defined in contract, normally 5%-10%  |
| Non-payment consequences | No one knows. Can be huge / loss of business!    | Well defined in contract                   |

This table is not complete; it’s not hard to think of many more ways tech debt differs from financial debt. But that’s not the point.

Is it avoidable?

What you have to remember is that technical debt is unavoidable and exists in every company in the world! Let me repeat that once more.
You cannot avoid technical debt. You just can’t, even if you have the best software developers in the world, by the mere fact that you will know how to do things better once you’ve done them.
As such, accept that you’ll accumulate technical debt, that’s a money-back guarantee!

eBay Case Study

Let’s look back at eBay in the early 2000s. They were world leaders, in a perfect position to be the online market to buy and sell everything.
And today? They are still in business and work well, but they have lost their edge. People use Amazon a lot more, and many other websites instead.
So what happened? Technical debt did. Their systems were complex, rigid, and filled with issues.
Here is an excerpt from a WSJ article:

eBay’s system, which involved 25 million lines of inflexible code, soon became a liability. The company, for example, couldn’t figure out which of its hundreds of thousands of ‘iPod’ listings were for a given model or for iPod accessories. EBay’s challenges with outdated technology are common for Web pioneers, whose systems were built with custom software that is now too old and rigid to adapt to a competitive and fast-moving market.

So in reality their systems prevented them from moving quickly; by the time they got around to re-writing many of those systems, they had lost their competitive edge.

So what should I do?

Chances are your systems are in a much worse state than eBay’s were. Chances are you have a lot of technical debt that needs to be paid.
Does that mean all is lost and you’re doomed? No, there are many paths forward. One thing is sure: you have to start thinking about technical debt, and how to address it, today.
I believe at this stage you have a much better understanding of technical debt, which is a great start.
In the next part, we will discuss how to tackle technical debt and how to prevent your company from hitting the same walls that eBay did.

Until next time.

DeployBot – Simplified DevOps – A Kubernetes SlackBot

Why are production auto-deployments a bad idea?

Most companies’ DevOps teams have set up multiple environments: dev, test, stage, production. And if not, you should be doing that today!
These enable you to reduce risk and ensure your software engineers ship quality code.
You might have even hired a DevOps engineer who set up a CI/CD pipeline, helped Dockerize your apps, and got them running in a Kubernetes cluster.
This is a very common setup these days; it ensures developers can easily make a code change, which can then be ready to test in the cloud in a matter of minutes.
This automation is a blessing for most environments: when a developer wants to test a new feature, when someone needs to QA or approve the feature, etc.

DevOps – DeployBot

I don’t like the idea of auto-deploying to production. Many times we want to control and decide how and when we deploy to production.
Maybe we want to group a bunch of features together; maybe we want to deploy new features only on Mondays; etc.
Enter DeployBot. This Slack chatbot deploys your application when you send a message in a secure Slack channel. As simple as that.

DeployBot Slack Configuration

The first step is to set up the Slack application and the channel the bot will use.
Also please note: this configuration is correct as of March 2020, so future changes to the API might require adjustments:

  • First go to your Slack apps page https://api.slack.com/apps?new_classic_app=1 and sign in.
  • Make sure you create a Classic App, as in the link I attached above. Give it your DeployBot name (something like DeployBot) and choose the workspace you would like it to run in.
  • On the App Configuration screen, from Features and Functionality, select Bots.
  • Add a Legacy Bot, give it a name, and select a user; something like DeployBot and deploy_bot, for example.
  • Click on OAuth & Permissions and scroll down to Scopes.
  • Use Add an OAuth Scope to add the following permissions: app_mentions:read, chat:write, im:write, incoming-webhook.
  • After that, click Install App to Workspace (under OAuth & Permissions) and install your Slack bot app.
  • Take note of the OAuth token and the bot token (we will need them later).
  • Then go to your Slack client and click on Apps (at the bottom of the left-hand contact list); you should see your bot there.
  • Click on it, hover over the bot’s name, and make a note of the URL; the last part is your bot’s user ID.
    https://hexanow.slack.com/team/U012GGEU0HJ would mean that U012GGEU0HJ is your bot’s user ID!

Now we are ready to configure the bot and install it into the workspace.

Bot Configuration and Installation

Now we need to set up the DeployBot installation in your organization.
This might require some simple DevOps skills, and perhaps some very basic Python skills.

  • Clone the project:

    git clone [email protected]:DoryZi/SlackKubeDeployBot.git

  • Edit your setup.yaml

    This file contains all your definitions. First we need to update the secrets YAML.
    In this section, update the AWS secret, your Slackbot OAuth token from the installation above, and the Kubernetes cluster Token and Token2.
    The KUBE tokens are not mandatory; the Kubernetes setup will first try in-cluster authorization. If that fails, or if your Slackbot is running in a different cluster,
    it will use the auth tokens. The first one is the default auth token for your cluster; the second one is used if you need to override it for a particular app,
    for example if you have 2 different clusters you wish to run DeployBot on. You can also easily extend this setup to support multiple clusters. If you need to hire a Python developer to help you set this up, feel free to reach out to us.
    You need to base64-encode your secrets (note: -w 0 is a GNU base64 flag; omit it on macOS):
    echo -n "<your-secret>" | base64 -w 0

    More info on Kubernetes secrets

  • Configure environment variables and App Information

    This step involves setting up your app configuration, which is contained in the ConfigMap part of setup.yaml.
    The env variables below are pretty simple and straightforward; I will try to explain how the app config works.
    For each app you wish DeployBot to recognize, you must add an APP CONFIG entry.
    Each entry consists of:

    • “app-name” – the name of your app; this is also the base name used when looking for new builds in ECR.
    • “deployment” – the name of the Kubernetes deployment running this application.
    • “container-name” – the container name within that Kubernetes deployment.
    • “cluster-token” – optional; a cluster token if this deployment runs in a different cluster from the default one. This is taken from KUBE_TOKEN2.
    • “cluster-endpoint” – optional; a second cluster endpoint, if this deployment runs in a different cluster from the default one. This is taken from KUBE_ENDPOINT2.
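Putting those fields together, a single APP CONFIG entry might look like this. This is purely illustrative: the names are placeholders, and the exact shape is defined by the setup.yaml in the repo:

```json
{
  "app-name": "my-api",
  "deployment": "my-api-deployment",
  "container-name": "my-api-container",
  "cluster-token": "<KUBE_TOKEN2, only for a non-default cluster>",
  "cluster-endpoint": "<KUBE_ENDPOINT2, only for a non-default cluster>"
}
```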

  • Apply your configuration

    kubectl apply -f setup.yaml

After all of this, your Slackbot is ready to be used.
Feel free to create a secure channel, or message your chat bot directly.
It can run a few commands:
you can check for a new image in the registry (compared to what your deployment is running), check your running image, or deploy the latest image to live.

Extending – a DevOps engineer or Python Developer task

This app uses the Kubernetes API and the AWS API. It can easily be extended to run on GCP; however, that would require Python and DevOps skills.
We hope this helps, and that you find DeployBot useful.

Software Engineer Screening

Finding good developers and software engineers is always hard. One of the pain points I find is the interview process. My main grief is that the correlation is very weak: candidates that seem amazing during the interview / screening process often turn out to be duds, while those that seemed average turn out to be stars. It’s just so hard to tell who is the right candidate to hire.

As such, I’d like to outline some points that can help in making the right decision and reduce the margin of error. It’s important to mention that no matter how good your interview / screening process is, when you hire a full-stack or any other type of software engineer, you will never know for 100%. The only way to know if someone is a good fit is to work with them.
So if you can take someone on for one week as a contractor to try them out, that is the best option!

In this post today I’ll talk about the key points and methods that can help you mitigate some of the risk when you hire a software developer.

Structured Interview and Process

One of the key mistakes people make when hiring a full-stack developer, or any software engineer for that matter, is not having a repeatable and structured process. I cannot emphasize this enough: have a repeatable, consistent, well-thought-out hiring process. That means you need to:

  • Think of your phone screen questions and prepare them.
  • Outline what technical questions you’ll ask in your coding or tech assessment, and why.
  • Prepare what general HR questions you’ll ask the candidate.
  • Figure out how many people need to talk to the person.
  • Communicate the process to the candidate early on.

These points are very important. Without them you’re really taking a guess based on feelings and emotions, and you have no way to measure all candidates fairly. Here is an excellent article from the New York Times on the topic.

Technical test should simulate work

The technical test / screen serves as a quick filter to find people that can potentially have the skills to work as a successful engineer in your company. It’s only “potentially”, as most of these tests are not a reflection of what real work would be like. As such, I would urge you to construct your tech test with that in mind. Try your best to simulate work and test relevant skills. Also, when you’re hiring a full-stack developer you’re looking for people with both back-end and front-end experience, so keep that in mind too. Here are a few ideas of what your tests / screens can include:

  • Code a simple algo problem – choose something simple, not too complex, and let the person run through it. See that the person can code; so many can’t, despite their resumes looking mighty impressive!

  • Try going through bug tracing and problem solving – set up either a theoretical or a practical exercise (meaning a bug they have to figure out) and see how the candidate solves, or at least tries to solve, the issue.

  • Test some front-end abilities – code, HTML, and CSS; something simple. You can even set up a React project and ask the candidate to go through it and try to make changes, add layouts, etc.

  • Ask about their work history and focus on the technical aspects. Then try to drill down and see how much they really understand. Hiring a Node.js developer, React.js developer, or any JavaScript developer? Ask them about prototypal inheritance, what it is and how it works. If you are about to hire a Python developer, ask them what algorithm .sort() runs and how it works, etc.

  • Ask some architecture questions and see how they think. Cover things like memory limitations, time complexity limitations, and handling large-scale throughput; see how the person thinks and approaches different situations.

This should give you a clearer indication of someone’s abilities; either way, use a consistent and repeatable process. It’s not always critical that they solve everything; it’s important that they understand, write clean code, and know what they are doing. After all, at work they will have a lot more time to work on problems than in a 60-minute test.

Focus on their ability to deliver business results

Good engineers are able to bang out code that works. However, if you manage to hire a great full-stack developer, they will also understand the business goal behind what they are developing. A great software engineer will not only require minimal supervision; they will also build software with business needs in mind, enhancing or adding to the software as they suspect will be needed. Imagine you’re building a shopping app with a quick search that lets you scan all the products in your shop. You’ve asked your developer to add a few more product categories to the drop-down. A good developer would just add those. A great one might notice that this list keeps growing and growing, and suggest pagination, or limiting the number of results returned, or adding debounce (sending the search only when you finish typing), etc. They would improve on the feature, or at least create additional todos and discuss them with you. Either way, they would raise those points. You can structure questions to see how a person might help in this way!

This should give you some important points to note when you’re looking to hire a full-stack developer, or any other type of senior software engineer. I hope this has helped.

Till the next time!

Software Engineer Interview Mistakes – Why “Homework” Tasks Are Silly

Recruitment is hard and Interviews are complex

Recruitment is a complex and difficult matter; interviews, likewise, are far from perfect and are very exhausting for companies and candidates alike.

Engineers looking for new opportunities greatly dislike the process. It includes lots of calls, interviews, tests, and more. Many feel they have just taken on a second job: looking for work.

Companies, on the other hand, don’t have an easier time with it. They have to sift through loads of resumes, read an insane number of emails, and answer tons of calls, all in order to decide who they will actually interview in person. So many candidates sound and look the part, while in reality many are barely qualified to make coffee.

Common mistakes in interviews

I’ve been involved in hundreds—if not thousands—of these processes and worn multiple hats while doing so, and I’d like to make a few observations and important notes to candidates and companies alike.

First you have to remember: Great interviews don’t mean great hires! Both sides have to remember this, as it’s a critical point! There are many things that you will not know:

  • Will that person be hard-working?
  • Will they not give up when confronted with hard tasks?
  • Will they be able to find creative solutions?
  • Will they be a good coder or not?

There are many other things you won’t know; you’ll only know if the person has potential and how well they do at interviews!

When we interview, as candidates and as companies, we get very excited about certain opportunities (great cultural fit, amazing performance on interview tasks, everyone seems so nice, the unexpected feeling of a strong work connection, etc.). No matter how logical, measured, or obscure your personal reasoning is about that candidate or company, you won’t really know what it means to work together until you actually work together.

None of the big guys do it

Google, Facebook, Twitter, and many of the big guys could easily send homework tasks to all candidates, but they don’t! They spend a day or more with a candidate, run through code together, and get a sense of what that person is like. So why are you trying to re-invent the wheel? You’re not going to write your own front-end framework; you’ll use React or Angular. So why not also recruit the way they do?

Why homework tasks are silly and what should we do?

Way too many companies send people take-home tasks or, better yet, some silly HackerRank test that has people solve a problem in a very time-limited manner.

I’m not sure who in The Valley started this and made everyone follow this detached-from-reality practice.

You’re well-funded? That’s no reason to assign a technical homework task. Feel free to offer it, as some candidates like it, but your best bet is to spend time with a person, solve a problem, code together, etc.

Since we agreed good interviews != good hires, then why not do your best to simulate the environment of solving a real task at work? Isn’t that what you’d want that person to do anyhow?
Run through some code problem together and get a sense for what it is like to work together.
You’ll see how a person thinks, how he/she tackles hard problems, and gain much more insight than you would from a random test or take-home task.

What is the logic behind sending some obscure test or asking someone to build software for you for free? Are you trying to miss out on good candidates? Should someone that is busy spend half a day, a day, or even more writing free code to prove that he/she is worthy of employment? Maybe that’s okay for recent graduates, but what about people with 5–10 years’ experience or more? What profession in the world does that?

I’m a big believer in fairness, and if you ask someone to invest time then be willing to invest the same time yourself as well. While it will be more time-consuming, you will both have the chance to work through a task together and you’ll get a good sense for working with each other.

When homework tasks make sense and how to give them?

Personally, I say give them only if you’re willing to pay the person for their time and show that you value it. Say you’re a starving startup: pick a small task, offer it as a stand-alone project, and, assuming the code is good, the candidate signs over the rights and you might even use it. Then pay them for their time, except, of course, if the code is bad and they do not pass.

In the next part I’ll talk about more interviewing tips and suggestions. Stay tuned!

Cost Effective Docker Jobs on Google Cloud

Recently, I wanted to run some jobs using Docker images. I’m a huge advocate of using Docker, so naturally I was going to build a Docker image, run my Python scripts, and then schedule said job to run on a configurable basis. Doing so on AWS is pretty easy using Lambda and Step Functions; however, since this wasn’t a paid gig and I wasn’t able to get someone to foot the bill, enter Google Cloud!

Google Cloud Platform (GCP) is, in a way, the new kid on the block. AWS has a long history with the cloud and excellent customer support, whereas Google’s customer service is a bit like Bigfoot: you’ve heard of it, some people say they’ve seen it, but it doesn’t really exist. However, Google is still an amazing tech company: they release early and they improve their products until they’re awesome (e.g. Android). And best of all, they offer $300 in free credits. So I decided to go for Google; how bad could it be?
In this post, I’ll talk about how I set up Google Cloud to work for me. It took blood, sweat, and tears, but I got it working. I schedule a job to run periodically: it spins up a cluster of instances, runs the job, and shuts the cluster down! Not only is that cool (ya, I’m a geek), it’s also quite cost-effective.

I will outline what I did, and even share my code with you.
Here goes:

Step 1 – Build docker image and push to Google Cloud private registry

The first step was the easiest and most trivial. It is pretty much the same as AWS.

Create a build docker image

Let’s start with creating a build image. GitLab CI allows you to use your own image as your build machine. If you’re using a different CI, I leave it to you to adjust this for your own system.

This is the Dockerfile for the build machine. It is based on a docker image, pulls pip, and installs gcloud.

Then I push this build image to Docker Hub. If you haven’t done this before, you need to:
1) Sign up to Docker Hub https://hub.docker.com and remember your username.

2) In the build machine folder, run docker build . -t <your-username>/build-machine
3) Run:

Create a GCP service account

You have to create a service account, give it access to the registry, then export the key file as JSON. This is a very simple step. If you’re unsure how to do it, just click through IAM & Admin: you need to create a user, give it a role, and export the key.

Customize the CI script to push to the private registry

Once this is all done and you have your build machine, we can work on your CI script. I will show how to do this on GitLab CI, but you can adapt it to your own environment. First create a build environment variable called CLOUDSDK_JSON and paste the contents of the JSON key you created in the previous step as its value. Then add the following .gitlab-ci.yaml file to your project.

Adjust job-image-name to your job’s docker image name, service_account_name to the service account name you created, and the build image to the image you pushed to Docker Hub. This YAML file is written for a Python job, but you can change it to any other language.
I have 3 stages: build, test, and deploy.
I build and test on all branches, but only deploy on master. GitLab CI has an issue where each step can happen on a different machine, so the image from my first build step isn’t kept for the deploy phase, which forced me to re-build in the deploy phase.

Once this is done, your CI system should be pushing your image to your Google private registry. Well done!

Step 2 – Running Jobs in a Temp Cluster

Here comes the tricky part. Since the jobs only need to run every so often, and only for a limited period, they would ideally run as a Google Cloud Function. However, those are limited to one hour and can only be written in JavaScript (AWS supports multiple languages with Lambda, plus state machines). Since I didn’t want to pay for a cluster running full-time, I had to develop my own way to run jobs.

Kubernetes Services

Controlling jobs in a cluster, and the cluster itself, can be achieved using Kubernetes. This is one part of GCP that really shines: it lets you define services, jobs, and pods (collections of containers), and then run them.

To do this, I wrote a Kubernetes Service class in Python that will:
– Spin up / create a cluster.
– Launch docker containers on the cluster.
– Once jobs finish, shut down the cluster.

This is the class and its constructor. The full code for this class has more configuration and env variables, as it is part of the App Engine cron project. I will include the repo if you want the full details of how to achieve this.

This function controls the cluster size. It can spin the cluster up before jobs need to run, then shut it down afterwards:

The kubernetes_job function creates containers (via an additional function that creates container objects with env variables). Containers are then part of a pod, the pod is part of a job template, and that template is part of a job spec. You can read more about it in the Kubernetes docs.
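That container → pod → job-template → job-spec nesting is the same structure you would write in a raw Kubernetes Job manifest, so it can help to keep this YAML shape in mind (the names, image, and env values here are hypothetical):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: nightly-job
spec:
  backoffLimit: 2          # retries before the job is marked failed
  template:                # the job template wraps a pod spec
    spec:
      restartPolicy: Never
      containers:          # the pod wraps the containers
      - name: nightly-job
        image: gcr.io/my-project/my-job:latest
        env:
        - name: JOB_MODE
          value: nightly
```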

If you don’t want the code to block waiting for the jobs, you can poll for completion; that is what shutdown_cluster_on_jobs_complete is for. It will shut down the cluster once there are no running jobs.

This class controls the entire job scheduling and ensures the jobs execute successfully.
It’s part of an App Engine app (however, it can be used independently).
Next we need this script to be scheduled or triggered,
and that is our cron scheduler’s task.

Cron scheduler App Engine service

Sadly, Google doesn’t give you an easy way to run code in the cloud; you actually have to write more code to run your code (silly, right?).

The concept is that App Engine provides you with a cron web scheduler that calls your own app’s endpoints at given intervals.

First, you add cron.yaml to your project and configure which endpoint to hit and at what time interval:
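A minimal cron.yaml has this shape; the endpoint path and interval below are placeholders for your own handler:

```yaml
cron:
- description: kick off the scheduled jobs
  url: /tasks/start-jobs
  schedule: every 24 hours
```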

Then we can add a handler to shut down the jobs, and to kick them off.

Last, we want to add a Settings class to load env-like variables from the datastore:

Note that most of the app depends on the datastore. Sadly, Google doesn’t let you set env variables easily, but you can keep them in the datastore.
For this I added a class called Settings.

Then we just bind the route handler:

This should allow our app to spin up a cluster, launch containers, and then shut down the cluster. In my code, I also added a handler for the shutdown.

Then make sure you have gcloud installed (here is how) and just deploy the app using the gcloud deploy command, and you should be good to go (here is how).
While my example runs the same docker image with different operations and env variables, you can easily adjust this code to suit whatever need you might have.
Here is the full git repo: gcp-optimized-jobs
Hope you find it useful!

Testing JavaScript Code with Chai, Mocha, Sinon on CircleCI

JS Testing Survival (mocha, chai, sinon)

This post is a simple guide to JS testing with Mocha, Chai, and Sinon on CircleCI. It will show you how to set up for testing, share some great tips for good coverage, and more.
I’ll cover some practices I use for testing JS code. They’re not official best practices, but I use these concepts because I’ve found they make it easy to get readable tests with full coverage and a very flexible setup.

This post will dissect a unit test file to show the different points I found helpful when composing unit test files:


Mocha is a testing framework for JS that lets you use any assertion library you’d like; it is very commonly paired with Chai, an assertion library that works with Mocha. You can read on the Chai site about how Mocha and Chai work together, how to use them, and more.
One of Chai’s strong points is that you can easily extend it with support libraries and plugins. We will use a few of them, so let’s first set up the dependencies in our project:

We are installing a few libraries:

  • mocha – JS testing framework.
  • chai – the Chai library; its docs have a good reference for how to use chai to assert or expect values, and a plugin directory. This is a valuable resource!
  • chai-http – a Chai extension that allows us to hit HTTP endpoints during a test.
  • chai-as-promised – adds promise support to Chai assertions for tests / setup that return a promise. This enables us to assert / expect what the result of a promise will be. We will see this in action shortly.
  • co-mocha – a Mocha extension that allows us to use generator functions inside Mocha setup / tests. If you skip this step and try to use a generator function, the test will finish without correctly running the yields in your test code. This means you will get twilight-zone-like results: tests passing when they should fail!
  • sinon – test mocks, spies, and stubs for any JS framework. Works really well and is very extensive.

After we install all the packages, let’s create a new file, and add all the required libraries to it as follows:

In this example I’m testing an Express server, but you can use any type of Node HTTP server (assuming you are testing a server). Just make sure you export the server from your main or server file; then you can require it from your test files.

Grouping tests using ‘describe’

Mocha does a great job of grouping tests. To group tests together under a subject, use the following statement:

‘describe’s are also easily nestable, which is great, so the following will also work:

This groups them together, and if you’re using something like IntelliJ or WebStorm, the output is displayed very nicely in a collapsible window:

Test hooks

When running tests, many times we need to do setup before each test or before each test suite. The way to do that is to use the testing hooks before, after, beforeEach, and afterEach:

These hooks can also return a promise; the test framework will not continue until the promise is resolved, and will fail if it is rejected:

And since we have required co-mocha, our hooks can also run a generator function:

I can then use stuffINeedInTest in my test files. You can also do this setup using promises, as shown above.

Hook on root level

Test hooks are awesome, but sometimes we might want some hooks to run not just once per test file, but once for all our tests. Mocha does expose root-level hooks, so to achieve that we will create a new hooks file, root-level-hooks.js,
and put our hooks in there with no describe block around them:

Then at the top of each test file we require this file:

This way our hooks run once for the whole test run. This is the perfect place to load up a test DB, run some root-level setup, authenticate to the system, etc.

External System Mocking

Some systems / modules call other systems internally. For example, think of a function that processes a payment for an order. That function might need to call a payment gateway or, after the order is processed, send the shipping information to another system (for example, a logistics system, or upload a file to S3). Unit tests are intended to be stand-alone and not depend on external systems. Therefore we need a way to mock those external systems, so that when the tested code reaches out to them, the test case can respond on their behalf.

In our tests we will use sinon.
Basically, we will mock the calls using a test class of mocked calls that reads a response file and sends it back.
This makes the mock straightforward:

What we are doing here is creating a mock object; in this case we are mocking axios, as my server code uses it, but we can use the same construct to mock any external system.
Our request mock provides get and post methods, just like the axios library does. I’m using sinon.spy to check which URL is requested by the module code, and a switch statement to handle the different URLs the module requests. Our mock can return URLs, JSON, promises, files, or whatever is needed to successfully mock the external system.

I’m then using the before hook to register the mock as axios, so when the module calls require(‘axios’) it receives my mock and not the node_module that actually makes the HTTP request.

Then I’m using the after hook to disable the mock and return to normal.

Test Cases

Mocha lets us create tests very easily. You use the ‘it’ keyword to create a test.

Or using generators

You can also use the done callback, but I prefer not to use it.
I like to keep code as small as possible, and without any distractions.
However, it’s there if you need it.


Each test case is composed of two parts:
1) The test itself
2) The expected result

The test itself

Since we have added the mock for the external system, we can safely use our test code to hit a function, or, if we are testing a REST endpoint, call that endpoint:

In this example we are testing an endpoint, but calling a function would be even easier.

Expected Result

The second part involves looking at the results of our test runs, and we will use Chai to inspect the responses. Chai provides a long list of ways to look at responses using expect, should, or assert, whichever you prefer.
I try to use expect, as it doesn’t change Object.prototype. Here is a discussion on the differences: expect vs should vs assert.

Failing these will cause the test to fail.
I normally use a test helper class with a few standard ways to check for a correct response and to compare the returned object to the expected one:

Test for failures

Using promises, I can also quickly test for failures, to ensure our code doesn’t just work for valid input but also behaves correctly on invalid input.

I can test that the code will fail with bad input:

TestUtils class

TestUtils is a utility class I created with some expected results; it allows me to easily test for missing fields, iterate the body for all the fields I expect, or check for a simple 200 and body.

I then require the TestUtils class in my test file, and then I can use it to quickly expect or assert different conditions.

Mocha tests on CircleCI

When using CircleCI, it’s great to get the test output into the $CIRCLE_TEST_REPORTS folder; then CircleCI reads the output and presents you with the test results, rather than you digging through the logs each time to figure out what went right and what went wrong. The CircleCI team has written a whole document about this: CircleCI Test Artifacts.

In our discussion we will focus on using Mocha and getting the reports parsed. To do so, we need Mocha to output the results in JUnit XML format. This can be achieved easily using mocha-junit-reporter. This lib allows Mocha to run our tests and output the results in the correct format.

So the first step is to install the reporter (npm install --save-dev mocha-junit-reporter).

Then add a script to package.json that outputs in JUnit format:
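The script can look like this; mocha-junit-reporter takes the output path via --reporter-options, and the mochaFile path here is just an example:

```json
{
  "scripts": {
    "test": "mocha --reporter mocha-junit-reporter --reporter-options mochaFile=./junit/test-results.xml"
  }
}
```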

This outputs the information into the junit folder for both ESLint (if you are using it) and Mocha.

Now all that is needed is to create a link between your junit folder and $CIRCLE_TEST_REPORTS, which can be done by editing the circle.yml file and adding the following line to the pre step for test.

If you aren’t using docker, you can also add a symbolic link after the creation of the folder - ln -s $CIRCLE_TEST_REPORTS/junit ~/yourProjectRoot/junit

However, if you are using docker-compose or docker run to execute your tests inside a container, you will also need to add a volume that maps your test output to $CIRCLE_TEST_REPORTS.
For docker-compose:

For docker run you can do the same using the -v flag.
Once that is done, you’ll get the report output in CircleCI after the build finishes.

Good luck!