Kamranicus

Personal and development blog of Kamran Ayub

about me

Hi, my name is Kamran. I am a web developer and designer, residing in Minnesota. I’ve been programming since 2001 and I am familiar with many different languages, both web-based and server-based. I love to tinker around and think of new ways to solve problems.

Planet Wars AI Competition With C# and Excalibur.js

Planet Wars

This past weekend Erik and I built out a Planet Wars server (written in C#) and an Excalibur.js-powered visualization (written in TypeScript). Planet Wars is an AI competition where you build an AI that competes against another player to control a solar system. A map consists of several planets that have different growth rates and an initial number of ships. You have to send out a “fleet” of ships to colonize other planets and the player who controls the most planets and has destroyed their opponent’s ships wins the game.

At work we are hosting our 6th Code Camp, and recently we started running an internal AI competition. You can find past competition agents for Ants and Elevators, for example.

The visualization for Planet Wars is fairly simple, made even simpler using the power of Excalibur.js, the engine we work on during our spare time. We basically just use an Excalibur timer to query the game state and update all the actors in the game. For moving the fleets, we just use the Actor Action API.

For the game server, we are using a HighFrequencyTimer to run a 30fps server, and clients send commands via HTTP, so any kind of agent will work: Python, Perl, PowerShell, whatever! Anything that speaks HTTP can be a client. The server runs in the context of a website so we can easily query the state using a singleton GameManager. This wouldn't work in a load-balanced environment, but it doesn't matter since people develop agents locally and we run the simulations on one server at high speed to produce the results. If you backed the server with a data store, you could replay games, but right now there's only an in-memory implementation.
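
To make that concrete, here's a minimal sketch of the singleton idea; all of the names below are hypothetical, since this isn't the actual server code:

// Hedged sketch of the server's shape: a singleton game manager that agents
// talk to over HTTP, ticked at a fixed rate. Names are illustrative, not the
// actual Planet Wars server code.
public sealed class GameManager
{
    public static GameManager Instance { get; } = new GameManager();

    private GameManager() { }

    // A Web API controller forwards agent commands here; anything that
    // speaks HTTP can be a client.
    public void QueueCommand(string playerId, FleetCommand command)
    {
        // validate and buffer the command for the next tick...
    }

    // Called roughly 30 times per second by the high-frequency timer.
    public void Update(double elapsedMs)
    {
        // apply queued commands, move fleets, grow planet populations...
    }

    // Clients (including the Excalibur.js visualization) poll the state.
    public GameState GetState()
    {
        return new GameState();
    }
}

public class FleetCommand { public int FromPlanet; public int ToPlanet; public int Ships; }
public class GameState { /* planets, fleets, scores... */ }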

To keep the server and client models in sync, we use Typewriter for Visual Studio, which is amazing and super useful not just for syncing client/server models but also for generating web clients, interfaces, etc. from C# code. I plan to write a separate post on some Typewriter tips for Knockout.js and Web API.

written in AI, C#, Excalibur.js, Games, Javascript, Typescript, Typewriter

2015: A Year in Review

2015 was a very eventful (and fulfilling) year for me and my wife. Let’s break it down, shall we?

Living abroad for 6 months

Bordeaux, France

By far the most impressive thing I did last year was to take a 6-month sabbatical and live abroad in France with my wife. Though I've written about it previously, I left out the part where we chronicled our adventure in a series of publications on Medium. We kept it anonymous during the trip to avoid any potential issues, but now that it's over, I will list the different publications so you can read back through what we did for 6 months (spoiler: we did a lot).

Just to be clear when you’re reading, I am Vincent and my wife is Celeste.

It was an experience I'll never forget and one that probably won't be repeated anytime soon. My wife and I both felt it was the right time and that we'd get little to no chance at doing something so crazy once we had kids and "settled down." We still hope to continue traveling once every year or two, especially after an experience like that. One of my plans for 2016 is to compile all these posts into a book that we print and keep for us and our future children.

New House

Bay window

We weren't in a position to buy a house so soon after a 6-month sabbatical, but we still thought it was best to move from apartment living to a real house, especially after living in a 400 sq ft space in France. We found a great place to rent in Minneapolis that's pretty close to both of our workplaces, as well as friends and family. We've done a few things to make it more like home, we've been really enjoying it so far, and our landlord is superb. The photo above is our enhancement to the bay window. My brother-in-law built the spanning bench between the bookcases and I built the cushion. I removed the tall blinds that covered the window so we could open up the room and add extra seating. It turned out so well!

New Dog

Dogger

My wife has always wanted a dog ever since we moved into an apartment together—except our apartment complex never let us have dogs. We cat-sat (is that a word?) for 2 years for some friends, and then they took her back down to Texas, where they bought a house and she happily frolics outside. In August (Dogust?) we went to the humane society and, pretty much on a whim, took in a cute dog we named Rennes (after one of our favorite French cities we visited). She's a black lab and border collie mix. She's awesome even though she jumps the fence to chase squirrels (we're working on that). We love her a ton.

Keep Track of My Games

KTOMG

2015 marked the 4th year that KTOMG has been around since its humble beginnings, and being abroad gave me time to focus and finish a major rewrite of the codebase in May. Since then I've released public lists and capped the year off with Steam syncing, just to name a few features.

Speaking

Even though I was abroad for 6 months, I still managed to give a talk this year at Twin Cities Code Camp 19: an update to my popular Demystifying TypeScript presentation. You can also find 2014’s version on YouTube.

Making Games

Minotaur

In August, I participated in the Ludum Dare 33 game jam where some friends and I created a minotaur hack-n-slash game, Crypt of the Minotaur (source). I love participating in game jams and by extension, helping to contribute to the Excalibur.js game engine.

Playing Games

Somehow after all that I still managed to log hundreds of hours into my gaming habit. Since I added public lists to KTOMG, why don't you go take a look at my Top 10 Played Games of 2015? Yes, some of those came out in 2014, but I didn't play or finish them until this year. While abroad, I had my laptop, 3DS, and PS4, so I played a lot of Destiny and finished the remastered Grim Fandango along with other PS4/3DS games. My laptop wasn't that great, but I was still able to enjoy Pillars of Eternity, a throwback Baldur's Gate-style RPG. In November, I started playing Fallout 4 and have since logged over 75 hours in it. It's definitely tough to juggle both hobbies: playing games and developing a site that helps manage those games. During time off, I usually try to split my time between them to satisfy both needs, and sticking to a monthly release cadence helps a lot with prioritizing work.

Work & Friends

My work has been going swimmingly; after my sabbatical I returned to work on a team with one of my best friends. Speaking of friends, I made more this year, fulfilling a goal I set at the start of 2015—not only abroad but also at home. Board game nights, Dungeon World sessions, and a Star Wars marathon are just some of the highlights of the fun stuff we've done with our [awesome] circle of friends.

Looking towards a new year

Cheers to 2016—let's hope it's even bigger and better than 2015 and brings more happiness and joy to my life.

written in Accomplishments, Life, Year in Review

Influencing Your Kudu Deployment Through Git Commit Messages

If you're on Windows Azure and using continuous deployment through Git, you may not know that you are using an open source platform called Kudu behind the scenes that performs your deployment. If this is the first time you've heard of Kudu and you've been using Azure for a while, it's time to get acquainted. Kudu is amazing. It has a whole REST API that lets you manage deployments, trigger builds, trigger webjobs, view processes, open a command prompt, and a ton more.

You can get to your Kudu console by visiting

https://<yoursite>.scm.azurewebsites.net

The .scm. part is the key, as that is where the Kudu site is hosted.
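
As a quick taste of that REST API, here's a hedged C# sketch that lists your deployment history via the /api/deployments endpoint (mentioned again later in this post); the site name and deployment (publishing) credentials are placeholders you'd substitute:

// Hedged sketch: query Kudu's REST API for deployment history.
// Substitute your site name and deployment (publishing) credentials.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

class KuduApiExample
{
    static void Main()
    {
        var client = new HttpClient();
        var credentials = Convert.ToBase64String(
            Encoding.ASCII.GetBytes("<user>:<password>"));
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Basic", credentials);

        // Returns JSON describing each deployment, including status and timestamps
        var json = client
            .GetStringAsync("https://<yoursite>.scm.azurewebsites.net/api/deployments")
            .Result;
        Console.WriteLine(json);
    }
}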

Customizing Kudu deployments

One of the other things it offers is a customized deployment script. I've customized mine because I have a test project where I run automated tests during the build. This is useful since it fails the build if I make any changes that break my tests, forcing me to keep things up-to-date and resulting in a higher-quality codebase.

If you want to generate your own script, it’s pretty easy. Just follow the steps outlined here. For example, after customizing my script here’s what my section looks like to run my tests:

:: 3. Build unit tests
call :ExecuteCmd "%MSBUILD_PATH%" "%DEPLOYMENT_SOURCE%\src\Tests\Tests.csproj" /nologo /verbosity:m /t:Build /p:AutoParameterizationWebConfigConnectionStrings=false;Configuration=Release /p:SolutionDir="%DEPLOYMENT_SOURCE%.\" %SCM_BUILD_ARGS%

IF !ERRORLEVEL! NEQ 0 goto error

All I really did was copy step 2 in the script that builds my web project and just change the path to my tests project.

Finally, I run the tests using the packaged NUnit test runner (checked into source control):

call :ExecuteCmd "%DEPLOYMENT_SOURCE%\tools\nunit\nunit-console.exe" "%DEPLOYMENT_SOURCE%\src\Tests\bin\Release\Tests.dll" /framework:v4.5.1
IF !ERRORLEVEL! NEQ 0 goto error

Simple!

Now the fun part

One thing you'll notice if you start running tests on your builds is that they slow down your continuous deployment workflow. Ninety percent of the time this is acceptable; after all, you can wait a few minutes to see your changes show up on the site. But sometimes, especially for production hotfixes or trial-and-error config changes, that 3-5 minutes becomes unbearable.

In cases like this, I’ve set up a little addition to my script that will read the git commit message and take action depending on what phrases it sees.

For example, let’s say I commit a change that is just a config change and I know I don’t need to run any tests or I really want the quick build. This is what my commit message looks like:

[notest] just changing App.config

That phrase [notest] is something my script looks for at build time and if it’s present it will skip running tests! You can use this same logic to do pretty much anything you want. Here’s what it looks like after step 3 in my script:

:: Above at top of file

IF NOT DEFINED RUN_TESTS (
   SET RUN_TESTS=1
)

:: 4. Run unit tests
echo Latest commit ID is "%SCM_COMMIT_ID%"

call git show -s "%SCM_COMMIT_ID%" --pretty=%%%%s > commitmessage.txt
SET /p COMMIT_MESSAGE=<commitmessage.txt

echo Latest commit message is "%COMMIT_MESSAGE%"

IF NOT "x%COMMIT_MESSAGE:[notest]=%"=="x%COMMIT_MESSAGE%" (
   SET RUN_TESTS=0
)

IF /I "%RUN_TESTS%" NEQ "0" (
  echo Running unit tests
  call :ExecuteCmd "%DEPLOYMENT_SOURCE%\tools\nunit\nunit-console.exe" "%DEPLOYMENT_SOURCE%\src\Tests\bin\Release\Tests.dll" /framework:v4.5.1
  IF !ERRORLEVEL! NEQ 0 goto error
) ELSE (
  echo Not running unit tests because [notest] was present in commit message
)

Alright, there’s definitely some batch file black magic incantations going on here! So let’s break it down.

echo Latest commit ID is "%SCM_COMMIT_ID%"

Kudu defines several useful environment variables that you have access to, including the current commit ID. I’m just echoing it out so I can debug when viewing the log output.

call git show -s "%SCM_COMMIT_ID%" --pretty=%%%%s > commitmessage.txt
SET /p COMMIT_MESSAGE=<commitmessage.txt

Alright. This took me some real trial and error. Git lets you show any commit message and can format it using a printf format string (--pretty=%s). However, due to the weird escaping rules of batch files and variables, this requires not one but four % signs. Go figure.

Next I pipe it to a file; this is only so I can read the file back and store the message in a batch variable (COMMIT_MESSAGE) on the next line. Kudu team: it would be sweet to add a SCM_COMMIT_MESSAGE environment variable!

IF NOT "x%COMMIT_MESSAGE:[notest]=%"=="x%COMMIT_MESSAGE%" (
   SET RUN_TESTS=0
)

Okay, what's going on here? I'll let StackOverflow explain. The :[notest]= portion REPLACES the term "[notest]" in the preceding variable (COMMIT_MESSAGE) with an empty string. The x prefix character guards against batch file weirdness. So if [notest] is NOT present, the replacement changes nothing and the two strings match (the comparison is true). If it IS present, the strings differ and the comparison is false; the IF NOT inverts that, so the body runs exactly when [notest] appears in the message.

If [notest] is present in the message, we set another variable, RUN_TESTS, to 0.

IF /I "%RUN_TESTS%" NEQ "0" (
    echo Running unit tests
    call :ExecuteCmd "%DEPLOYMENT_SOURCE%\tools\nunit\nunit-console.exe" "%DEPLOYMENT_SOURCE%\src\Tests\bin\Release\Tests.dll" /framework:v4.5.1
    IF !ERRORLEVEL! NEQ 0 goto error
) ELSE (
    echo Not running unit tests because [notest] was present in commit message
)

If RUN_TESTS does not evaluate to 0, then we run the tests! Otherwise we echo out an informative message as to why it was skipped.

Phew. So how much time do we save on [notest] builds now?

No test build

Compared to a build with tests:

Build with tests

So that flag cuts the build time in half! Nice! There are probably some other ways to improve the time. By the way, if you're wondering what's taking so long in your build, you can use the Kudu REST API to view your deployment logs (the /api/deployments endpoint), which contain full timestamp information!

Happy continuous deployment!

written in Continuous Deployment, Continuous Integration, Git, Kudu, Testing, Windows Azure

Impersonating a User During Automated Testing Scenarios

I’m starting to introduce privacy controls to Keep Track of My Games and I ran into the following scenario when writing my tests:

Scenario: Anonymous user should be able to view a public custom list
  Given a user has a list
  And a user's list is public
  When I request access to the list
  Then I have read-only access

In this context, I am the anonymous user. This is the exact SpecFlow scenario I wrote. Do you know why I may have run into issues?

Let’s look at the first two steps:

[Given(@"a user has a list")]
public void GivenAUserHasAList() {
    listResult = context.ListService.CreateList(newList);
}

[Given(@"a user's list is public")]
public void GivenAUsersListIsPublic() {
    privacySettings.Level = PrivacyLevel.Public;
    context.ListService.UpdateListPrivacy(listResult.Id, privacySettings);
}

Why would this cause a problem with my given scenario?

  1. In the first step, I’m creating a new list.
  2. In the second step, I’m taking the new list I just made from the first step and updating the privacy settings on it.

The problem is that my service assumes the context is an authenticated user and will apply changes to the current user’s list. Well, since I did not call my login helpers before these two steps, I am in an anonymous context so the service calls fail. That’s good! But how can I tell my steps to call a service method on behalf of another user without having every step use the current user context?

You might say I should just create a new method that accepts a username and refactor my methods. I could do that, but not only is my entire service designed around the current user context, my service layer is essentially the interface of my public API. I would never allow one user to create a list for another user (unless that was a feature), so the same way I wouldn't expose an API method to do something on someone else's behalf, I won't add a public method in my service layer to do the same. I could make the method private or internal and grant the test assembly access—true, but that seems like a workaround where I expose special functionality just for testing.

The approach I ended up taking was simpler and more elegant, and it leveraged an existing pattern I was already relying on: injecting an IUserContext into my service layer, like this:

public ListService(IUserContext userContext) {
    _userContext = userContext;
}

This is using standard dependency injection (Ninject, in my case) to inject a context for the current user. That context gets created and maintained outside this class, so the service doesn't care who provided it or where it came from; it just uses it to determine business logic.

So since I’m already injecting the current user context and mocking it in my tests, why not simply swap out the context when I need to?

Creating an impersonation context

That’s what I ended up doing. Here’s my implementation of a TestingImpersonationContext (https://gist.github.com/kamranayub/9654d6581fbcf63cf481):
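
The full implementation is in the gist linked above; as a rough sketch of the idea (assuming Ninject for DI; FakeUserContext and the IUserContext shape below are illustrative), it's an IDisposable that rebinds the user context and restores the original on dispose:

// Rough sketch of the impersonation idea (see the gist for the real code).
// Assumes Ninject; FakeUserContext and these IUserContext members are illustrative.
using System;
using Ninject;

public interface IUserContext { string Username { get; } }

public class FakeUserContext : IUserContext
{
    public FakeUserContext(string username) { Username = username; }
    public string Username { get; private set; }
}

public class TestingImpersonationContext : IDisposable
{
    private readonly IKernel _kernel;
    private readonly IUserContext _original;

    public TestingImpersonationContext(IKernel kernel, string username)
    {
        _kernel = kernel;
        _original = kernel.Get<IUserContext>();

        // Swap in a context that impersonates the requested user
        _kernel.Rebind<IUserContext>().ToConstant(new FakeUserContext(username));
    }

    public void Dispose()
    {
        // Restore the original context so later steps run as before
        _kernel.Rebind<IUserContext>().ToConstant(_original);
    }
}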

It should be clear what’s happening but let me explain further. Specifically in SpecFlow you can inject a context into your testing steps like so:

public class StepBase : TechTalk.SpecFlow.Steps {
    protected TestingContext context;
    public StepBase(TestingContext context)
    {
        this.context = context;
    }
}

As long as your step classes inherit from StepBase, you have access to a context. All I did was add a method to that context that swaps out the dependency registered for IUserContext with a temporary context impersonating the requested user. Once it is disposed, it restores the original context. Easy as pie!

If you are not using SpecFlow (which is probably the case), don't fret—all you really need is a class or helper method that you can access in your test classes. However you want to achieve that is up to you: create a base class, don't even bother with dependency injection, etc. This is entirely doable without DI, but since my app relies on it, I also leverage it during testing.

Now given we have an impersonation context helper, here’s how our two testing steps have changed:

[Given(@"a user has a list")]
public void GivenAUserHasAList() {
    using (context.Impersonate("user")) {
        listResult = context.ListService.CreateList(newList);
    }
}

[Given(@"a user's list is public")]
public void GivenAUsersListIsPublic() {
    using (context.Impersonate("user")) {
        privacySettings.Level = PrivacyLevel.Public;
        context.ListService.UpdateListPrivacy(listResult.Id, privacySettings);
    }
}

I could even update my scenario to be specific about whose list I'm accessing (so it's not ambiguous between the logged-in user and another user), but since I only have two users in my testing context, it doesn't really matter.

Now for the test results:

Given a user has a list
-> done: ListSteps.GivenAUserHasAList() (0.2s)
And a user's list is public
-> done: ListSteps.GivenAUsersListIsPublic() (0.0s)
When I request access to the list
-> done: ListSteps.WhenIRequestAccessToTheList() (0.1s)
Then I have read-only access
-> done: ListSteps.ThenIHaveReadAccess() (0.0s)

The tests are green and now I'm a happy coder. By the way, if you aren't using SpecFlow for .NET, you should consider it—I love it.

written in .NET, C#, Keep Track of My Games, SpecFlow, Testing

Using Azure CDN Origin Pull With Cassette

For the October update for Keep Track of My Games I wanted to offload my web assets to a CDN. Since I’m already using Microsoft Azure to host the site, I decided to use Azure CDN.

I set it up for "Origin Pull," which means that instead of you uploading your assets to the CDN (Azure Blob storage), the CDN requests a file from your website the first time it's asked for and then caches it on Azure's servers.

So as an example:

User requests http://az888888.vo.msecnd.net/stylesheets/foo.png
|
|
CDN: have I cached "stylesheets/foo.png"?
  Yes: Serve content from edge cache (closest to user)
  No: Request http://yourwebsites.com/stylesheets/foo.png and serve

You can read more about how to set up origin pull in Azure CDN. In my case, I used a "Custom Origin" of "http://keeptrackofmygames.com".

Using CDN with Cassette

I use the .NET library Cassette for bundling & minification for KTOMG—when I started KTOMG there was no Microsoft-provided option, and Cassette has been really stable.

It works pretty much as you’d expect:

  • Define "bundles," which are sets of scripts/stylesheets (see the sketch after this list)
  • Render bundles onto page(s)
  • In debug mode, render assets individually; otherwise, minify and concatenate them
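
For reference, defining bundles looks roughly like this (a minimal sketch of Cassette's configuration hook; the bundle paths are illustrative, not KTOMG's actual bundles):

// Hedged sketch of a Cassette bundle configuration; paths are illustrative.
using Cassette;
using Cassette.Scripts;
using Cassette.Stylesheets;

public class CassetteBundleConfiguration : IConfiguration<BundleCollection>
{
    public void Configure(BundleCollection bundles)
    {
        // Minified and concatenated in production, served individually in debug
        bundles.Add<StylesheetBundle>("Content/core");
        bundles.Add<ScriptBundle>("Scripts/app");
    }
}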

By default, Cassette will render URLs like this in your source code:

In debug mode:

Bundle: ~/Content/core

- /cassette.axd/asset/Content/bootstrap.css?hash
- /cassette.axd/asset/Content/site.css?hash
- /cassette.axd/asset/Content/app.css?hash

And in production:

/cassette.axd/stylesheet/{hash}/Content/core

But if we want to serve assets over the CDN, we need to plug in our special CDN URL prefix—not only for script/stylesheet references but also references to images in those files.

Luckily, Cassette provides a facility to modify generated URLs by letting you register an IUrlGenerator. Here's my full implementation of this for my CDN:
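
The gist covers both pieces; below is a minimal sketch of just the URL-modifier half, assuming Cassette's IUrlModifier exposes a single Modify method and using the CdnUrl app setting described below:

// Hedged sketch of a CDN-aware Cassette URL modifier. Assumes IUrlModifier
// exposes a single Modify(string) method; CdnUrl is the app setting described
// below (empty locally, set to the CDN endpoint in production).
using System.Configuration;
using Cassette;

public class CdnUrlModifier : IUrlModifier
{
    private readonly string _cdnUrl = ConfigurationManager.AppSettings["CdnUrl"];

    public string Modify(string url)
    {
        // Locally (no CdnUrl configured), behave like the default
        // VirtualDirectoryPrepender and just prepend "/"
        if (string.IsNullOrEmpty(_cdnUrl))
            return "/" + url;

        // In production, prefix the Azure CDN endpoint
        return _cdnUrl.TrimEnd('/') + "/" + url;
    }
}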

As you can see, I register a custom IUrlGenerator and a custom IUrlModifier. The default IUrlModifier is Cassette's VirtualDirectoryPrepender, which just prepends "/" to the beginning of every URL, but in our case we want to conditionally prepend the Azure CDN endpoint in production.

In production, this will produce output along these lines (reusing the example endpoint from the diagram above): http://az888888.vo.msecnd.net/cassette.axd/stylesheet/{hash}/Content/core

To allow local debugging and CDN usage in production, I just use an app setting in the web.config. In Azure, I also add an application setting (CdnUrl) through the portal in my production slot with the correct CDN URL, and voilà—all my assets are now served over the CDN.

Notes

  • Azure CDN does not yet support HTTPS for custom origin domains. So if you want to serve content over http://static.yoursite.com, you can't serve it over HTTPS, because Azure doesn't allow you to upload or set an SSL certificate and instead uses its own certificate, which is not valid for your domain. Vote up the UserVoice issue on this.

  • Azure CDN origin pull does not seem to respect the Cache-Control: private HTTP header. For example, by default MVC serves pages with private cache control, which means browsers won't cache that page and neither should Azure CDN—but it does anyway. In my case, I really don't want a true mirror of my site; I just want assets served over the CDN, and Cassette sets Cache-Control: public on them automatically. You can upvote my feature request on UserVoice.

  • I am choosing not to point my entire domain to the CDN. Some folks choose to serve their entire site over the CDN, which is definitely something you can do; in my case, I didn't want to. If you instead choose to point your domain to the CDN endpoint, you don't need to do any of this—everything will be served over the CDN.

written in .NET, Azure, C#, Keep Track of My Games

PowerShell Script to Generate an HTML5 Offline Manifest

In my new role at work I've been learning PowerShell to administer our systems (I'm a half developer, half sysadmin monster). I've been a developer for a long time and have been living in .NET for about as long, yet I still had not really embraced PowerShell as something I could use in my daily development routine. I've changed my tune. PowerShell is awesome. It's also not too hard to pick up once you learn how it works. I recommend taking a serious look at it—follow the PowerShell 3 Jumpstart course and learn by trial and error.

Anyway, we'd like some of the games we write with Excalibur.js for game jams to be playable offline. To do this, you need to create an HTML5 Application Cache manifest file. However, this file is super finicky, as outlined in the linked article. To assist, I wrote a small PowerShell script that generates an appcache manifest file with each file's MD5 checksum, so the manifest only changes when dependent assets change. I do some more work to disable it locally and only enable it for release, but you can run this script as part of your build.
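
For context, the generated manifest looks something like this (a hypothetical example; the checksum comments are what make the manifest change only when an asset changes, which in turn tells browsers to re-download):

CACHE MANIFEST
# index.html  md5: 9e107d9d372bb6826bd81d3542a419d6
# game.js     md5: e4d909c290d0fb1ca068ffaddf22cbd0

CACHE:
index.html
game.js
images/hero.png

NETWORK:
*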

Modify the script to be specific to your project and it should output an appropriate manifest file. Feel free to change as you see fit.

written in HTML5, PowerShell, Tips & Tricks

Use Special Gmail Addresses to Redirect and Filter Incoming Mail or Bypass Unique Email Checks

If you have a Gmail account, there’s a sweet feature you might not know about.

Let’s say your email is:

johndoe@gmail.com

First tip: adding dots (.) does not change the email. The following emails are identical to Google and will route email to johndoe@gmail.com:

  • john.doe@gmail.com
  • john....doe@gmail.com
  • etc.

Next tip: you can add a plus sign after your username (before the @) and then type in whatever you want. It'll still get sent to you. You can use this to your advantage by filtering mail sent to that specific tagged address, or you can bypass "unique email" checks on websites but still get the email sent to you:

  • johndoe+medium.com@gmail.com
  • johndoe+microsoft.com@gmail.com
  • johndoe+spam@gmail.com

So you could filter incoming mail based on those addresses above. This comes in really handy for sites you don’t care about but still need at least one email from them—the rest can be filtered. It can also be useful to identify spam email sources—who leaked your email? If you saw spam addressed to johndoe+spamsite@gmail.com, you know spamsite was responsible for leaking/sharing your email.

Happy power-Gmailing!

written in Gmail, Tips & Tricks

[Updated] Install Windows 10 Immediately Before Rollout

Update (8:49pm): I adjusted my Windows 8 date/time to tomorrow and the progress of the update jumped; it's now complete. I now see a "Restart PC to finish installing updates" prompt.

Update (9:00pm): Well, it looks like it's a bust with Windows 8.1. My friend tested on Windows 7 and it worked, but mine refuses to install—it just says I have it reserved and it's ready. I tried rebooting multiple times and running the /updatenow command again, but no go.

image

Update (10:00pm CST): No luck on my other PC, same situation. Guess I’ll just have to wait in line like everybody else!


This is only applicable for the next few hours, until your machine gets Windows 10 rolled out. If you're impatient like me: a friend tipped me off that he was able to install Windows 10 prematurely by simply forcing Windows Update to download Windows 10 and then setting his system time forward a day (in the BIOS, I'm thinking).

It's kind of unbelievable, but it's working so far. I'm at 95% complete on the download (you can view progress in the Windows Update window).

Progress

  1. Hit Windows+R to bring up Run command
  2. Type in wuauclt.exe /updatenow (Works)
  3. Wait for the download to finish (Control Panel -> Windows Update) (Works)
  4. When Windows Update says, “Preparing for installation…”, set system time forward a day in Windows (Works)
    image
  5. When progress is done, reboot (Untested)
  6. Windows 10 should install (Untested)

I will update this post with any new information.

written in Windows 10