What is origin/master, or how to update to latest changes in git

When using git, there are a few concepts that are hard to grasp at first but are (in my opinion) required to get fluent with it. It does not matter if you are using GitHub, Bitbucket or GitLab – figuring out how it works (at more than the basic commit/push/pull level) will be very useful when you try to set up your workflow, or when you find yourself backed into a corner.

One of those concepts, I think, is this: what is origin/master? How does it differ from master?

A colleague of mine had a problem where he could not reproduce an issue with the code. We thought it might have been related to the latest changes on the master branch that were not included in his branch. But he insisted that he had, in fact, merged the master branch changes. Spoiler: he had not.

What my colleague was doing was: git merge origin/master. Now that’s perfectly fine, I said, but you are not getting the latest changes made by others on our team. “But I am calling origin/master, so I am getting the latest, freshest code, right?” Nope, you are not. origin/master is not the latest code. It is the latest code from the last time you contacted the origin server.

Here’s how it goes. You contact the server from time to time, usually when you are doing operations like pull, push or fetch. Then your local copy of the repository gets the latest code changes, your branch pointers get updated, and so on. But not all pointers – only those that are a reflection of the server state, like origin/master, origin/nasty_bug_solution, origin/new_fancy_feature etc. Your nasty_bug_solution branch will stay unchanged – unless you merge the changes yourself (or you are doing a pull, which merges changes automatically).

So now you have the latest version of the code; you are at the same place as the origin server. But that’s not forever. In a few minutes your colleague may add something to the master branch and whoops! you’re out of date. But that’s OK, that’s cool. That’s what makes git – git. You can work offline with your branches, without asking the server over and over again what’s going on there. Just keep in mind that when you do git merge origin/master – you are merging your local copy of that branch, the last state you saw on the server, not what is there now.

In the graphic above you can see how it flows – unless you do git fetch (or git pull, which fetches underneath), you won’t get a server status update. You can merge all you want – the code will still be in the state it was.

So if you want to update your branch with the latest, freshest copy of master – do a git fetch (or git pull) first, and git merge origin/master afterward. That’s the way to success!
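
In commands – assuming the remote is named origin and you are sitting on your feature branch – that is:

git fetch origin          # update origin/* pointers to the current server state
git merge origin/master   # merge that freshly fetched master into your branch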

Or, if you are feeling frisky, you can try git pull origin master – pull changes from the server origin, branch master on that server, and then merge those changes into your current branch. But that’s only for the bravest! Will you dare?

Custom TypeScript typings not recognized

For a while we had a problem in our project where TypeScript typings (*.d.ts files) were not recognized. This resulted in many failed builds. The workaround was rather straightforward – reference those files directly, for example with a directive like this:

/// <reference path="shared/my_custom_typings.d.ts" />

While this works, it is far from perfect – those typings should be recognized automatically.

It turns out that a TypeScript version bump at some stage introduced a change. While in the older version a directory structure like ours was OK, in the newer one the requirements changed – and so our directory structure had to follow.

We had to move our .d.ts files into dedicated directories, one per typing, following a structure something like:

+ shared
    + my_custom_typings
        - index.d.ts
    + my_other_library_typings
        - index.d.ts

And with that we could get rid of all the reference directives, as the typings got loaded automatically (of course there still has to be some indication of where to look for those common, shared typings – set it up in tsconfig.json!).
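
For reference, a minimal sketch of what that tsconfig.json hint could look like – the typeRoots entries below are an assumption matching the layout above, not our exact configuration:

{
    "compilerOptions": {
        "typeRoots": [
            "./node_modules/@types",
            "./shared"
        ]
    }
}

With typeRoots set, every subdirectory of ./shared containing an index.d.ts should be picked up as a type package – which is exactly why the directory-per-typing layout is required.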

Inclusive job interview or fighting for your job

For the past year I have been working (partially) as a technical recruiter. This has led to some nice and some weird experiences, but mostly it has been very interesting – sitting on the other side of the table. While the recruitment process is not perfect, we are trying to make it more open and inclusive, and to make sure people walk out with something: either an offer, or at least information about what was missing – not that they are lacking as people, but that they have some holes in the skillset required for the position. Some skills that people do not have at the moment can be learned quickly. Others are more complex and people just don’t make it – that’s how life goes sometimes.

Recently we had a discussion with fellow technical recruiters. We touched on a few topics, and all the opinions were very interesting (mostly because they were quite different from mine). But one opinion struck me. A colleague said that he is very dissatisfied with job interviews that are nice and easy. What he said is that getting an offer after a pleasant interview – where everyone was not only respectful but went out of their way to make you feel included, and where there were no trick questions – just does not feel satisfying. That it is too easy. He expects a job to be obtained through hard talk, complex questions, maybe a trap question here and there. He has to sweat to get through it.

This feels totally weird to me. I understand feeling stressed out at a job interview; it is hard to avoid. Someone is judging you and your skills, and you are sitting in an unfamiliar space with unknown people. It is hard not to sweat a little (use antiperspirant!). I, however, love it when people are open, happy to talk to me, and go out of their way to make the questions clear and well defined – when they do not try to trick me and prove how unworthy I am of a software engineering position at the company they are recruiting me for. Bleeding through an interview is simply wrong to me. Why would I want to work with people who are trying to squeeze me out of my money? (That is often the whole point of interviews full of trick questions – they still want you even if you fail to answer some of them, but it will be held against you when the conversation gets to the salary part.)

If you are an interviewer – I urge you to be nicer and more inclusive. Lack of skills, holes in knowledge – we have all been there; none of us were born with our current skill set. It is OK not to hire someone, but it is not OK to make them feel bad about it.

We are not hiring coding monkeys. We are not hiring spec-ops soldiers who have to work under extreme stress. We are not hiring robots.

We are hiring humans.

Idempotent changes

A few days ago I finished chasing a bug that took me a while to understand. What happened is this – the system failed because it expected one value matching the search criteria, but it found multiple. Two, three, up to four in some cases. That should not be possible; it violates business logic. It should have violated SQL Server constraints as well, but for one reason or another those were not in place.

The source of the problem was that, when importing data from an external source, the system tried to add a default value to some list. In case of failure, it retried a few times – and each retry added the default value again. In our case the system was set up with three retries, so we ended up with four default values in the list.

Why did that happen? Multiple reasons. The retry logic called the code that sets up defaults on every attempt. The code that inserted those defaults was not written with that in mind, so it assumed the default always has to be added.

It is, first of all, a lack of communication and clear design from the team as a whole. Had the defaults setup been clearly marked as something that runs on every retry, the team would have avoided this mistake. But that is hard.

One big thing that everyone on a team can do is – whenever possible – write code to be idempotent. Do not introduce side effects when they are not necessary. If changing a value triggers an event in the system – check whether the value is not already what you are trying to change it to. If you are adding something to a list – check whether it is not already there. If you need to make a call to an external service – check whether the value has not already been downloaded.
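
A minimal sketch of the list case in C#, with hypothetical names (Defaults, AddDefaultValue and DEFAULT_VALUE are made up for illustration, not the actual project code):

using System.Collections.Generic;

public static class Defaults
{
    private const string DEFAULT_VALUE = "default";

    // Idempotent: calling this once or four times leaves the list in the same state,
    // so a retry cannot multiply the default entry.
    public static void AddDefaultValue(IList<string> values)
    {
        if (!values.Contains(DEFAULT_VALUE))
        {
            values.Add(DEFAULT_VALUE);
        }
    }
}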

Not only may it save processing time in some cases, it may also help avoid bugs when someone unaware tries to use the method the wrong way.

ASP.NET Core --launch-profile and launchSettings.json

How do you start an ASP.NET Core application from the command line?

dotnet run

And how do you start it in development mode?

set ASPNETCORE_ENVIRONMENT=Development
dotnet run

How do you make it a single command? Use a launch profile, available in the dotnet run command. But for it to work, there needs to be a launchSettings.json file with the profile definition, i.e. what needs to be done for the application to run.

The definition I am using, inspired by examples from the web, is this:

{
    "iisSettings": {
        "windowsAuthentication": false,
        "anonymousAuthentication": true,
        "iisExpress": {
        "applicationUrl": "http://localhost:5000/",
        "sslPort": 0
        }
    },
    "profiles": {
        "Dev": {
            "commandName": "Project",
            "launchBrowser": true,
            "launchUrl": "http://localhost:5000",
            "environmentVariables": {
                "ASPNETCORE_ENVIRONMENT": "Development"
            }
        }
    }
}

And to launch it in Dev mode all I have to do is:

dotnet run --launch-profile 'Dev'

And… it fails, saying it cannot find this profile. That is because I placed the file next to the .csproj file. Seemed obvious. Well, it is not where it belongs. The file needs to be in the Properties directory (\Properties\launchSettings.json). With this layout – it works perfectly fine.

Game graphics scaling

In the game I would like every player to have more or less the same experience. Most importantly, I want players to have the same chance of winning, without limiting their abilities based on their devices. One of the things that needs to be handled is the visible game arena size. Ideally, players with ultra HD screens and players with smaller laptop screens (1600×900, for example) should all see the same part of the game arena, so that players on bigger screens do not have an easier time spotting enemies. This means there needs to be a standard game size defined. But that alone would mean that some players would have only part of their screen used for the game, while others would have to scroll to see the whole content – which makes for a terrible user experience. Scaling should solve the problem!

OK, first things first. I have to define a base game arena size to serve as a reference. On a screen with this resolution the scaling factor should be equal to one. This could be any pair of numbers (width and height), not corresponding to any particular screen size, but I decided to base it on my full HD screen, or to be more precise, on my canvas size on the Raim page – 1600×861 (with some space taken up by the address bar, developer tools, icons etc.).

var originalSize = { x: 1600, y: 861 };
var scale = 1;

Then it is time to rescale the canvas when the browser size changes, so that resizing the window causes the scale to change.

var resizeCanvas = function () {
    var arenaElement = document.getElementById(arenaHandler);
    var widthDiff = originalSize.x - arenaElement.offsetWidth;
    var heightDiff = originalSize.y - arenaElement.offsetHeight;

    var aspectRatio = originalSize.x / originalSize.y;
    var w, h;

    if (Math.abs(widthDiff) > Math.abs(heightDiff)) {
        w = arenaElement.offsetWidth;
        h = w / aspectRatio;
    } else {
        h = arenaElement.offsetHeight;
        w = h * aspectRatio;
    }

    canvas.width = w;
    canvas.height = h;

    scale = canvas.width / originalSize.x;
};

(function init() {
    ...

    var arenaElement = document.getElementById(arenaHandler);
    viewport.x = 0;
    viewport.y = arenaElement.offsetHeight;

    canvas = document.createElement("canvas");
    document.getElementById(arenaHandler).appendChild(canvas);
    resizeCanvas();

    window.addEventListener('resize', resizeCanvas);

    gfx = new raimGraphics({
        canvas: function () { return canvas; },
        viewport: function () { return viewport; },
        arena: function () { return arena; },
        scale: function () { return scale; }
    });

    ...
})();

Calculating the scale is not too hard. There is an aspect ratio I want to hold (calculated as width divided by height). Given that, if I have a new screen width, I can calculate the screen height by dividing the width by this aspect ratio. Holding the aspect ratio ensures that graphics do not get distorted along either axis (e.g. circles do not turn into ellipses). The formula is simply derived from the proportion:

newWidth / newHeight = originalWidth / originalHeight
newHeight = newWidth / (originalWidth / originalHeight)

With the new scale calculated, drawing graphics is as simple as multiplying every coordinate by this scale, for example:

...
x = player.Position.X + args.viewport().x;
y = player.Position.Y + args.viewport().y;
drawingContext.arc(x * scale, -y * scale, player.Size * scale, 0, 2 * Math.PI);

...

var x = points[0].X + args.viewport().x;
var y = -(points[0].Y + args.viewport().y);
drawingContext.moveTo(x * scale, y * scale);
for (var i = 1; i < points.length; i++) {
    x = points[i].X + args.viewport().x;
    y = -(points[i].Y + args.viewport().y);
    drawingContext.lineTo(x * scale, y * scale);
}

x = points[0].X + args.viewport().x;
y = -(points[0].Y + args.viewport().y);
drawingContext.lineTo(x * scale, y * scale);

Easy! Is that it? Well, no. There is also user input to take into account – mouse movement and mouse clicks are used in the application, and the game needs them in game world coordinates, not screen coordinates. So the mouse coordinates have to be scaled accordingly. Does this mean multiplying by the game scale?
No. If I have stretched the game twice (for a player with a big screen) and the user clicks coordinate [10, 10] on the screen, that must be the [5, 5] coordinate in the game world (remember – the game world got stretched two times). It makes sense – if I put stuff onto the screen, I multiply it by the scale. If I take it back from the screen, I have to divide the value, reversing the operations.

var inputChange = function (input) {
    var player = getCurrentPlayer();
    if (player == undefined) return;

    input.mouse.x /= scale;
    input.mouse.y /= scale;
    input.mouse.x = input.mouse.x - viewport.x;
    input.mouse.y = -input.mouse.y - viewport.y;
    ...
}

And just one more fix – in my viewport calculations I was taking the canvas size into account. Since the canvas size is no longer the real game world size, I cannot use it any more. But thankfully the game world size is already there – originalSize – which makes the viewport calculation really easy:

viewport.x = originalSize.x / 2 - currentPlayer.Position.X;
viewport.y = -originalSize.y / 2 - currentPlayer.Position.Y;

And now the game scales to every window size!

Case of player uncertainty, or about double comparison

If anyone tried my collision detection code, he or she might have noticed that it sometimes behaves unexpectedly. For example, an object seems to go inside the colliding object, just to be ejected in the next frame or two. Quickly, but slowly enough to be a noticeable glitch, and in the case of corners to cause the object to skip to the other side. Not good!

It turns out there are two problems. First – when messing around with the border of my game arena, I defined four walls. But I did that incorrectly. My algorithm depends on polygons having their points defined in clockwise order: so, for example, first the lower left corner, then upper left, upper right, and finally lower right. When done differently, it messes things up for some sides of the polygon, as the normal axis does not point outside of the polygon – which is needed for collisions to be resolved correctly, i.e. moving the player outside of the polygon.
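
As a sketch of what “correct” looks like – using System.Numerics.Vector2 and made-up wall coordinates, not the actual arena definition:

using System.Numerics;

// Points listed clockwise (in a Y-up world): lower left, upper left, upper right, lower right.
// With this ordering every edge normal points outside the polygon, so collisions push the player out.
var wall = new[]
{
    new Vector2(-10, -10), // lower left
    new Vector2(-10,  10), // upper left
    new Vector2( 10,  10), // upper right
    new Vector2( 10, -10)  // lower right
};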

The second error was more hidden. It took me almost two hours to figure out what was going on. Since the player was going just a little bit inside the object and then was resolved correctly (to the correct side of the obstacle), I thought the collision code must be OK – maybe somehow I was skipping collision detection, or maybe two frames were rendered at the same time, applying double movement to the object and temporarily putting it inside the obstacle?

It turns out the issue was way simpler. I had messed up something that every developer on earth should know by the time he or she writes their first application in high school – and definitely before taking their first money for code. Double comparison. When comparing doubles you can never skip the epsilon. If you do, you are going to find yourself figuring out why, sometimes, for some sides of a polygon, collision detection does not work. Or why you suddenly get infinity printed on the screen during a client demo. Been there, done that. This year. Yeah, I suck.

Check this code:

var intersectionRange = obstacleProjection.Intersect(objectProjection);
intersectionRange = intersectionRange.Add(axisVector.Item2);

if (intersectionRange.Length < 0.0001)
    return null;

if (intersectionRange.Length < smallestDisplacement.Item2.Length || // new collision is smaller
    (Math.Abs(intersectionRange.Length - smallestDisplacement.Item2.Length) < 0.0001 && // or collision sizes are the same
     Math.Abs(intersectionRange.Start) < Math.Abs(smallestDisplacement.Item2.Start)))    // but collision is closer to polygon side
{
    smallestDisplacement = Tuple.Create(axisVector.Item1, intersectionRange);
}

It does exactly what is needed, right? If the new collision is smaller, it should be picked as the smallest displacement. If not, we check some additional conditions. It worked perfectly for the single test polygon I used for my testing. I even picked one that is not aligned with the X and Y axes, to make sure everything works exactly as it should!

So what is the problem? Sometimes, when conditions are just right, doubles get a little bit imprecise. And then all hell breaks loose. Imagine two parallel axes, like the top and bottom sides of a rectangle: the collision displacement, counted from the zero axis, would be exactly the same, just for a different axis (one pointing down, for the bottom side, the other pointing up, for the top side). So when a collision happens from the bottom and the bottom side is checked first, it gets stored as the smallest displacement. Then I go on to check the other sides, including the top one. The displacement is the same there, just for a different axis. If it were exactly the same – all would be fine. But if the double calculation loses a bit of precision and returns a slightly lower value (say, 2.5118299991 versus 2.5118300001), the algorithm decides it has found a new smallest collision side!
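
As a quick illustration of that kind of imprecision – the classic textbook example in C#, not the actual collision values:

using System;

double sum = 0.1 + 0.2;

Console.WriteLine(sum == 0.3);                     // False – the binary representation is off by a tiny amount
Console.WriteLine(Math.Abs(sum - 0.3) < 0.0001);   // True – an epsilon comparison tolerates that error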

The fix is simple – add an epsilon value:

var intersectionRange = obstacleProjection.Intersect(objectProjection);
intersectionRange = intersectionRange.Add(axisVector.Item2);

if (intersectionRange.Length < 0.0001)
    return null;

if (intersectionRange.Length < smallestDisplacement.Item2.Length - 0.0001 || // new collision is smaller
    (Math.Abs(intersectionRange.Length - smallestDisplacement.Item2.Length) < 0.0001 && // or collision sizes are the same
     Math.Abs(intersectionRange.Start) < Math.Abs(smallestDisplacement.Item2.Start)))    // but collision is closer to polygon side
{
    smallestDisplacement = Tuple.Create(axisVector.Item1, intersectionRange);
}

Even if there is a double precision error in the calculations, the small epsilon makes sure it will not get interpreted as a smaller collision size. And what if it actually is a better collision? The second part takes care of that, checking whether the collision is roughly the same size (again, using the epsilon) and whether it is closer to the side.

Phew, problem solved! Never making this mistake again. Or, well, hopefully not this month.