

Fixing Linux Audio, Spring '21 edition

When upgrading from Ubuntu 20 to 21 this spring, my father and I both had the rear audio line out on our computers stop working. As we both had the same problem, I figured I would write it up in case anyone else was affected.


Symptoms:

  1. The original audio device no longer shows up in the list of audio devices.
  2. The device is not turned off or muted, and cannot be turned on or unmuted, because it doesn't exist any more.
  3. Other devices (USB headphones, Bluetooth, etc.) still work.
  4. When no other audio device is attached, a dummy audio device is created.
  5. The troublesome audio device still shows up when running some commands; I believe pacmd list-sinks is one of them.


Diagnosis:

  1. Running sudo lsof /dev/snd/* in a console shows Timidity holding open some files.
  2. Ending the timidity process makes the audio device show up again.


The fix:

  1. Uninstall Timidity using your software manager, or by running sudo apt remove timidity.

Somewhat surprisingly, this does not seem to affect my ability to play back MIDI files. I assume my music program is using FluidSynth, which is presumably a separate system. Nonetheless, it's a strange bug, caused by a misconfiguration somewhere.

tags: linux, sound, audio, fix, tech support, workaround

Shared Array Buffers With WASM

Yesterday, I solved a long-standing question I'd had - how do you get data out of a WebAssembly program without having to copy it back? Ideally, in such a way that a web worker wouldn't have to copy it back to the main thread either. I've been able to find some information on this around the web, but much of it seems to be rather outdated or does not address the issue. I decided to have a crack at it myself and figure out the state of the art by writing a small proof-of-concept.

Version 1

My first approach was to try to create the web worker with a SharedArrayBuffer backing its code. As a bonus, we should then be able to redefine the bytecode on the fly, which will be fun.

Copying from Depth-First's excellent guide (read it before this post), we arrive at something like this:

(async () => {
	const memory = new WebAssembly.Memory({ initial: 1 });
	const log = (offset, length) => {
		const bytes = new Uint8Array(memory.buffer, offset, length)
		const string = new TextDecoder('utf8').decode(bytes);
		console.log(string)
	};

	//Blob generated by compiling with `wat2wasm hello.1.wat --enable-threads --out /dev/stdout | base64 --wrap 0`
	const unsharedData = new TextEncoder().encode(atob(`
		AGFzbQEAAAABCQJgAn9/AGAAAAIZAgNlbnYGbWVtb3J5AgABA2VudgNsb2cAAAMCAQEHCQE
		FaGVsbG8AAQoKAQgAQQBBDRAACwsTAQBBAAsNSGVsbG8sIFdvcmxkIQ==
	`))
	const sharedData = new Uint8Array(new SharedArrayBuffer(unsharedData.length))
	sharedData.set(unsharedData)

	sharedData[sharedData.length - 1] = '?'.charCodeAt()
	const { instance } = await WebAssembly.instantiate(sharedData, { env: { log, memory } });
	instance.exports.hello()

	sharedData[sharedData.length - 1] = '.'.charCodeAt()
	instance.exports.hello()
})()

Here, we start by defining some WebAssembly memory to pass args around with. (The initial value is in number of 64-KiB pages to allocate.) We then define a function, log, which will take this memory and print the contents using console.log(…). We'll call this from our WASM code, which we've serialised in this case as a base64 string. (The source of which is hello.1.wat, compiled using wat2wasm from WABT.)

To get our shared memory, we create a new array backed by a SharedArrayBuffer. In JS, all the typed arrays have a backing buffer. Usually, by default, it's an ArrayBuffer. Amusingly enough, an ArrayBuffer can be shared between multiple typed arrays, even of different types. The SharedArrayBuffer is called so because it can be passed between web workers without copying as well, which a regular ArrayBuffer can't do. This is the behaviour we're after.

So, let's test it! First, we'll set the final byte of our WASM program to ?, from its original value of !, to prove we're loading the right memory and can manipulate it. Then, we start the WebAssembly program and call the hello() function of the instance we created. This in turn calls our log(), which prints "Hello, world?".

(Note: WebAssembly.instantiate(…) will also let you pass in an ArrayBuffer or other typed array, in addition to the Uint8Array we have here… in Firefox, but not in Chrome.)

Now we modify our memory again, this time changing the final byte to .. However, calling into hello again, we find we get the same output, "Hello, world?". We can't just poke the memory of a running WASM program, it would seem - probably for the best. So, what do we do now?

Version 2

We have one other memory-buffer-ish object we can tweak. Let's see if we can't get that initial const memory = … declaration to be a shared buffer, instead of an unshared buffer. Some brief searching later, and we find that WebAssembly.Memory can indeed take a shared flag. It's not very well supported, but let's rework our code to try to test it anyway. (I believe the shared flag is part of the WebAssembly Threads system, which seems to just refer to using shared memory to communicate between workers, vs. message passing.)

(async () => {
	const memory = new WebAssembly.Memory({ initial: 1, maximum: 1, shared: true });
	let globalBytes = null
	const log = (offset, length) => {
		const bytes = new Uint8Array(memory.buffer, offset, length)
		globalBytes = bytes
		//Can't use TextDecoder because it doesn't handle shared array buffers as of 2021-04-20.
		//const string = new TextDecoder('utf8').decode(bytes);
		const string = bytes.reduce(
			(accum, byte) => accum + String.fromCharCode(byte), '')
		console.log(string)
	};

	//Blob generated by compiling with `wat2wasm hello.2.wat --enable-threads --out /dev/stdout | base64 --wrap 0`
	const wasm = new TextEncoder().encode(atob(`
		AGFzbQEAAAABCQJgAn9/AGAAAAIaAgNlbnYGbWVtb3J5AgMBAQNlbnYDbG9nAAADAgEBBwk
		BBWhlbGxvAAEKCgEIAEEAQQ0QAAsLEwEAQQALDUhlbGxvLCBXb3JsZCE=
	`))
	const { instance } = await WebAssembly.instantiate(wasm, { env: { log, memory } });
	instance.exports.hello()

	globalBytes[0] = '\''.charCodeAt()
	instance.exports.hello()
})()

With our new memory declaration returning a shared buffer… on most non-Apple desktop browsers… 😬 we can now test this method of memory manipulation. We immediately find three things:

  1. Our WASM program needs to have its memory declaration updated too, yielding hello.2.wat and a new base64 blob.
  2. TextDecoder doesn't accept a SharedArrayBuffer, so we have to write our own little routine here. I guess this is because, as the bytes could change at any time, we could potentially output invalid utf-8 as our data shifted under us. We don't care for this single-threaded demo, but it would be an issue normally.
  3. We must capture the newly-shared text buffer in the callback (as globalBytes), so we won't bother manipulating it before we instantiate our WebAssembly program.

To test this, we call hello() again, which sets globalBytes to our "Hello, world!" message. We now set the first character to an apostrophe, and call in to our function again to test if we were able to set data visible to WASM. It prints "'ello, world!", thus demonstrating we are! Since we're working with a SharedArrayBuffer here, we can share this reference across threads to get fast, efficient data transfer.

tags: wasm, js, web dev

Same Script, Multiple Tags

Today, I happened on a fun quirk of web-dev. What happens if you include the same script twice, say like this:

<!DOCTYPE html> <html lang="en"> <head> <script src="test.js" type="module"></script> <script src="test.js" type="module"></script>

test.js only runs once, despite being included twice. It seems modules have a specific trait where they're only ever evaluated once; I believe this is specified in steps 4 and 5 of the module-evaluation algorithm in ECMAScript 6.0.

Nothing good ever comes of running scripts multiple times. I'm glad it's out.

P.S.: Fun fact: You can still document.write(…) in an ECMAScript module, which just feels… wrong. (At least the with statement is history there, as modules are always in strict mode.)

tags: js, quirk, microblog, web dev

Graphics Card Exposure Test

Sometimes, your graphics card's details are exposed to the web. Yours are not right now, though, which is good. Try opening this page in different browsers – I found Chrome had much more information than Firefox, for example.

The Code

//Function to grab GPU data.
const getGPU = () => {
	const ctx = document.createElement('canvas').getContext('webgl')
	const ext = ctx.getExtension('WEBGL_debug_renderer_info')
	return {
		card: ctx.getParameter(ext.UNMASKED_RENDERER_WEBGL),
		vendor: ctx.getParameter(ext.UNMASKED_VENDOR_WEBGL),
	}
}

//Put the GPU data into the web page.
const gpu = getGPU()
document.getElementById('graphics-card-test-output')
	.textContent = `Yours is ${gpu.vendor}'s ${gpu.card}.`

tags: webgl, privacy, demo, web dev

Moving Domains

On Feb 23, 2012, I added a test page to my personal website at, using Github Pages. Three years later, I purchased and started redirecting the domain name to, which Github had migrated Pages to after some security issues arising from having them on a subdomain of the main site. A few years after that, I started hosting my website myself on a VPS at Cloud at Cost. I kept the Github Pages in case anything went wrong. Ultimately, it did - the VPS service slowly died of oversubscription a few months ago. (It was an excellent service for many years—and for less than a dollar a month!)

As my web server slowly fell out from under me, taking upwards of half a minute to run ls, I retreated to my living room where this site is hosted now. I had deployed a highly-accessible DnD-style dice roller, 🎲, but it required a Node-based service running on the back-end to perform the rolls.

While I'd known it for some time, this really drove home that is no longer hostable entirely on Github Pages. I need to maintain a server running it, so there's no reason to keep around. The code for will remain happily on Github itself, but starting today any requests to a web page on Github Pages will redirect to the equivalent page on

As I don't have a whole lot of control over how Github Pages serves my content, I've replaced the content of each HTML file with a meta redirect and deleted most of the other files. This means that, as far as I can manage, the URLs that have been linked around the web should still work. Cool URLs don't change.

Moving forward, I should get the releases for my old projects (River Run, Cube Trains, and the Open Pixel Platformer demo) out of this repo. Github has the concept of releases now, and I would like to use that instead of the repo for my website. The files will still be hosted on this website as insurance against something going missing, of course, since you only own something if it's on your computer. And, of course, I'd like to update this blog a little more. I need a reason to implement pagination!

tags: meta, web dev, github

Web Security: Should CSP be set for my HTML files only, or my HTML files and all my assets?

Yes. "Some web framework automatically generate html on error pages and we found xss issues in those in the past, so setting CSP on everything is best." --ulfr from Moznet

tags: csp, web dev, xss, microblog

Balancing Braces

I chanced to read an article this morning on Ben Aston's blog, which dealt with balancing braces. You should go read it now, since it's good and since I'm about to critique the heck out of his approach.

The task Ben sets himself is simple: write a Javascript function, isBalanced, that returns true if a set of braces is balanced. Running his solution through jshint, we find it has 13 statements with a cyclomatic complexity score of 6. I think it is an inelegant solution for an elegant problem, as my solution came in at 4 statements with a complexity score of 2. (And mine doesn't repeat the characters to be matched, either.)


This can be looked at not as a parsing problem, but as a pattern-matching problem instead. You have to think about it recursively.

The key insight we will use is that this problem is as easy as repeatedly removing pairs of braces, such as () or []. There is no case where the braces could be balanced without a pair of braces occurring in the string. By repeatedly removing all the paired braces, we will end up with either an empty string or the non-matching braces.

Let's work this through. '[{}]' has one pair of braces we can spot - {}. Remove it and we are left with '[]' and another pair, [], which we can also remove. Are there any characters left? No? Then the input was balanced. However, '[{]}' has no pairs of braces, and does have characters left, so it's unbalanced. '()[' reduces to '[', which is likewise unbalanced.


function isBalanced(braces) {
	do {
		var matches = braces.split(/\(\)|{}|\[]/);
		braces = matches.join('');
	} while (matches.length > 1);
	return !braces;
}

Live Example: All tests passing.

Given this solution, Ben's statement that "to solve this problem we need to have a function that will return the opener corresponding to a closer" is proven false. We have no such function, and we have solved the problem.


In our code, there's a lot of things we don't have to worry about. We don't have to worry about inverting characters. We don't have to worry about queues. We don't have to worry about what the schema is, or even what a schema is. We just remove characters from a string.

This post is a lot shorter than Ben's, because my solution is a lot simpler. Often you do not have to think of the right data structure, but instead spend some time thinking of the right algorithm. My solution is simpler, but perhaps Ben's is constrained by an outside influence, not stated in the problem - say, the user of the function will want an error message pointing to the mismatched brace. While we should always strive for the clearest code, it's worth remembering that sometimes it was done that way for a reason.

tags: code golf, response

Fixing Wacom Tablets for Hearthstone in Windows

As of 2016-10-29, there is some sort of bug with Hearthstone where it will ignore clicks coming from a tablet. A quick search turns up complaints, but no solution:

  1. Wacom tablet pen support has been removed with patch 3.1.010357
  2. can't use mouse pen after update
  3. Wacom Tablet not working on Hearthstone

AutoHotKey for Windows has no such issues. And it can send mouse inputs that Hearthstone can read… So, we'll make a new AutoHotKey script that clicks the left mouse button when the left mouse button is clicked. After installing AHK, make a new file with Notepad called wacom_echoer.ahk with the following contents:

;Map the left mouse button to the left mouse button. This makes Hearthstone, among other
;games, "see" it.
#NoEnv  ; Recommended for performance and compatibility with future AutoHotkey releases.
SendMode Input  ; Recommended for new scripts due to its superior speed and reliability.
SetWorkingDir %A_ScriptDir%  ; Ensures a consistent starting directory.
return

LButton::LButton

Edit: Fixed initial comments.
Edit: It also fixes Cities: Skylines!

Double-click to run. Hearthstone should now work as expected.

I also followed these instructions to disable the click rings in Windows.

tags: wacom, tech support, hearthstone, hack, workaround, ahk

A Case Against Text Templating

I recently had to deal with a mature PHP-based application. While quite well maintained, the code was suffering heavily from the effects of greedy serialization. Most templating languages, PHP included, work by going along and imperatively creating a long string of text. Echo this, buffer that, and concatenate the whole shebang into an outgoing HTML file. This is a powerful approach, beautiful in the simplicity of its method, and infinitely embeddable.

It is also a trap.

Say you are creating a function to make a "breadcrumb trail" style nav bar. It's used on every page of your app, and it's pretty standard. You have a few ordered fields such as company, contact, and job; and a few custom ones which will be passed in as an argument. The function signature looks like getBreadcrumb(context[, extraListItems]).

Some time later, you find you need to give a custom ID to one of the standard breadcrumb links. So, you go back to the getBreadcrumb() function and add in another parameter, a list of IDs corresponding to the produced list of elements. Now the function looks like getBreadcrumb(context[, extraListItems[, listItemIDs]]).

After the ID mechanism is firmly entrenched in your codebase, you are asked to add a class to the second breadcrumb item. Because you're a good programmer, and can see where this is headed, you make the mechanism generic. You can now pass in a list of maps such as {id: 'foo', class: 'bar baz'}. Now, the signature looks like getBreadcrumb(context[, extraListItems[, listItemIDs[, listItemAttributes]]]).

A few months later, a bug report comes in. On one page in, say, the Financials module, the breadcrumb is showing up as being in the Reports module and people can't navigate back to their financials. When you look at it, you find it's a simple problem - it's a financial report. The fix adds in another parameter to the getBreadcrumb() function called 'hideAutomaticBreadcrumbs'.

Now we have a beast of a function with five arguments, one of which is redundant. (Many calls look like getBreadcrumb(this->context, null, null, [[], [], ['id'=>'currentJobName']]).) It's not even so much a big function as it is an awkward one. It's hard to test as well. What's more, the pattern has repeated itself across most of the code. Some of the functions even have more than a dozen arguments! How do we avoid this? At every step, adding one more arg seemed like the right thing to do.

The problem with the input of the function was actually caused by the output of the function. getBreadcrumb() returns strings of HTML, which are written to the document we're generating. However, this means that getBreadcrumb() is the final place we can manipulate and change the breadcrumbs. To fix this, let's try re-running our scenario. However, this time, instead of returning HTML to be echo'd to the document, we will generate a tree structure to represent our HTML. Unlike the text representation, the tree will have all the relevant attributes open to manipulation later in the program. So, we might access the breadcrumb trail via html.body.topNav.breadcrumb. The second breadcrumb item might be accessed via something like breadcrumb[1], where breadcrumb is an ordered list.

Now that the structure is available to us, the custom breadcrumb id argument is thrown away – never existed – because we can simply go breadcrumb[1].id = "customID". We don't have to generate the list with customID, because we can change it after we've generated it. Same thing with the attributes list we added when we needed classes.

The bug report about one of the list items being incorrect is similarly solved by modifying the offending item in place, as a one-off change.

Now the getBreadcrumb(context[, extraListItems]) call is manageable. Special cases are handled locally, and the arguments list is not polluted with redundant behaviour. By discarding the structural information of our HTML document only after we're done creating all of it, we are free to manipulate it as we need to. Because we don't have to write a function to generate the perfect breadcrumb trail every time, we are able to write a simple one that just generates a good default breadcrumb trail.

tags: rant, html, templating, php, web dev

Calculating a Bounce

Problem: Given a ball and a curved wall, how do we calculate the angle of the bounce of the ball? Assuming we have the normal of the wall at the bounce location, our problem becomes:

Problem: Given two vectors, x₁ and n, how do we mirror vector x₁ around vector n to get x′? (x₁ is the ball velocity and n is the normal of the wall.)

Solution: Implement the standard vector-reflection formula. (The first picture is accurate to the situation.)

As written: x₁′ = -x₁ + 2x₀ + 2n̂[(x₁-x₀)·n̂]

Given that x₀ is always [0,0], it can be ignored.

x₁′ = -x₁ + 2n̂[x₁·n̂]

Given that n is pre-normalized, we can un-hat the ns.

x₁′ = -x₁ + 2n[x₁·n]

To calculate the dot product:

x₁′ = -x₁ + 2n[x₁[0]*n[0]+x₁[1]*n[1]]

Normalize the notation, since we're now using [0] to get the vector components.

x₁′ = -x₁ + 2*n*(x₁[0]*n[0]+x₁[1]*n[1])

Now, to calculate both parts of the vector separately:

x₁[0]′ = -x₁[0] + 2*n[0]*(x₁[0]*n[0]+x₁[1]*n[1])

x₁[1]′ = -x₁[1] + 2*n[1]*(x₁[0]*n[0]+x₁[1]*n[1])

Now you can replace the x₁ and n with the variables of your program, and be on your way. For example, in Javascript:

//Returns vector v mirrored around the normalized vector mir.
function vReflectIn(v, mir) {
	return [
		-v[0] + 2*mir[0]*(v[0]*mir[0]+v[1]*mir[1]),
		-v[1] + 2*mir[1]*(v[0]*mir[0]+v[1]*mir[1]),
	];
}
tags: math, bounce, physics

Batch Converting Gif Animations to Spritesheets with ImageMagick

During some recent work on the Open Pixel Platformer, I had many .gif animations which I needed to make into spritesheets. To convert them all, I wrote a Bash script to automate the task.

gifs=`find . -iname '*.gif'`
echo "Queuing $(echo "$gifs" | wc -l) gif animations to be converted to png spritesheets. Queued images may take a while to process in the background."
echo "$gifs" | while read gif; do
	png=${gif/.gif/.png} #convert *.gif filename to *.png filename.
	#echo queued "$gif"

	# Explanation of montage command:
	# "$gif" \
	# -tile x1 -geometry +0+0 \ #Set up the tiles.
	# -border 1 -bordercolor \#F9303D -compose src -define 'compose:outside-overlay=false' \ #Draw a 1-px red border around the image, so it's easier to find frames. -compose is needed to make the border not fill in the transparent pixels in the image, and -define is needed to make the -compose not erase the previous gif frames we're compositing as we draw each subsequent one.
	# -background "rgba(0, 0, 0, 0.0)" \ #Set the background to stay transparent, as opposed to white. (-alpha On seems to have no effect)
	# -quality 100 \ #The default quality is 92, but since we're dealing with pixel art we want the fidelity.
	# "$png" & #Run all the conversions in parallel, let the OS figure out scheduling. Replace with something smarter if things start lagging too much.
	montage \
		"$gif" \
		-tile x1 -geometry +0+0 \
		-border 1 -bordercolor \#F9303D -compose src -define 'compose:outside-overlay=false' \
		-background "rgba(0, 0, 0, 0.0)" \
		-quality 100 \
		"$png" &
done
View on Github

The script loops over any gifs found, and runs ImageMagick's montage on them to convert them to a png spritesheet. The output takes into account the transparency of the original image, and draws a border around each frame so you can easily find the right dimensions. To use the script, run it in the root folder containing everything you want to convert. The script should work in Bash on Mac or Linux if ImageMagick is installed, but it will not work on Windows.

tags: command line, example, imagemagick, gif, png, conversion

Practical CreateJS Examples

Note: Some examples were previously discussed in the October 2013 issue of the Software Developer's Journal. Some updates were made as of August 2014.

I have found it difficult to find nice, simple examples of how to lay out an EaselJS program. The documentation and examples generally do a good job of covering the nuts and bolts of the library, but I find that some additional assembly instructions would be helpful. So, in rough order of my progression through the library, here are 5 examples:

Example 1: A Basic Scene

We'll start with a small platformer mockup, very basic. In it, we want to have some clouds float behind the player. We'll create a new HTML file, import our library, and write a simple scene. (You can grab the library I used from Other versions are available on the CDN page.)

As a first attempt, we add the ground, a cloud, and an actor. Since we're planning to add more clouds later, we've made that code a function. And – what do you know, it works! Our little cloud wafts gently across the sky, behind our actor.

(Aside: To view the code for the example, I recommend moving the tab the 'code' button opens to a new window… the popup code can be a bit flaky. When you mouse over the highlights here, they'll highlight there as well. A more reliable way to view the code might be to play the example, and then select 'view frame source' from the right-click menu. The code is also available on Github.)

Here, we've added a function to add clouds to our scene every two seconds. While the first cloud works just fine, the other clouds float over our player because they're added after the player is. An index-based solution, like "add this after the first 5 objects", will be a rolling disaster. The best way I've found so far to deal with this problem, generally, is to have named z-orders. Even giving z-orders by an arbitrary number grows difficult to manage after a while.

Z-orders are also known as layers, and fill the same role as a 3D game's depth buffer.

Luckily, CreateJS implements the best of the three systems. They just don't tell you about it. Instead, the documentation merely suggests that containers are a good way of grouping together objects that you would like to transform in the same manner. They are also a great way of grouping together objects you'd like to keep beneath other objects. At the start of the program, you can define all the z-orders that will exist as containers, and give them descriptive names. Now, as the program progresses, we can add objects into the right layer.

Here, we've added some containers to the stage, and passed those containers into the various functions that add our objects. Now, although our functions are unchanged from the previous example, we have our clouds correctly added to the middle layer.

Example 2: Caching & Performance

In many games, we use a board or background that we don't want to have move every frame. For example, we might have a scorecard that we put some text objects on, or a game where you have a few animated actors on a still background. We'll look at a dummy minesweeper game with two blue bars that track the mouse.

To build an example that stresses our computer a little, we'll make a simple game of minesweeper with each tile as an image object. Our play-field is 40² tiles, so we'll have 1600 objects to play around with. We'll lay some blue bars on top to provide interactivity for the demo.

If we play the example and open up our task manager, or shift-esc on Chrome, we can see that the game is taking up more CPU than it should be. (It uses about 90% here, and lags like crazy.) CreateJS is geared towards creating fully dynamic scenes, so unless we explicitly tell it that it can cache an object, it will render that object to canvas each frame. As a result, our simple game has to draw 1600 objects 50 times a second.

To fix this, we'll cache the background layer that is holding the tiles. Now, when we go to draw a frame to screen, we'll just draw three objects – the background, and the two blue bars on top of it. However, we will need to refresh our cache when it goes stale. The cache starts empty, so we'll add a refresh call when we've loaded the tile image we need to update it, on line 31. The only time the player will invalidate the cache is when they click on a tile, so we'll add in a similar call to refresh the cache when the tile's "clicked" image is ready to be drawn. (If we had used PreloadJS, we wouldn't have to wait to update the cache here.)

Now, making sure we stop the first example before measuring the CPU usage of the second, we find that our game only uses a little bit of our available CPU. My edition of Chrome reports about 15% usage, and I don't experience the lag that the first version had.

Many older games drew the mouse cursor on screen in the manner we draw the blue bars here. This was because the common operating systems at the time could only render a simple black-and-white image as a custom cursor. This introduced around one to three frames of delay between moving the mouse and seeing the results, depending on how many steps the rendering pipeline was. This was barely noticeable when the game ran fast, but when you dipped down into 20 or 30 fps moving the mouse became a bit of a speculative operation. Today, most everything (including Javascript, via CSS) supports full-colour custom cursors.

Update: Sebastian DeRossi, from the CreateJS team, has sent me a reworked version of the previous minesweeper example!

To be honest, about half a year passed between my writing this, his replying, and my getting around to figuring out what was happening. I don't really know what to make of this, code-wise… It's a remarkably different way to do things, and probably quite a bit more robust!

Example 3: Resizing

A proper web-based game ought to be able to resize gracefully with the browser, especially on mobile where a game might be flipped around as the device rotates.

Here, we'll draw a rotating circle. We'll be able to see the redraw area, and any extra logic frames that are caused by our changes. If we drag the corner, we can see a bit of lag and flickering as the resizing logic clears the canvas before a repaint occurs. There is a bit of messiness surrounding the resizing, because there is no event fired when a DOM element changes size.

To get rid of the resizing flicker, we just have to re-render the stage when the element is resized. This is easily accomplished by a single call to the stage's update method on line 64.

Example 4: Working with the DOM

When making an HTML5 game, it is a good idea to actually use traditional HTML5 to lay out the interface for your game. Layout is, after all, what HTML5 was born for.

In this example, we'll use the titular character from Frogatto & Friends, and display a small interactive message.

When our character is clicked on, we register an event handler for the mouse move event so we can drag Frogatto around. When that triggers, we draw him and then recalculate the position of the speech bubble. (If we don't do both these at once, it will look strange as the speech bubble will move across the screen at a higher framerate than Frogatto will.) To extract the positioning information for the HTML speech bubble, we'll use the player's localToGlobal function and specify an offset for the speech bubble to be at.

Example 5: Reading from Canvas

While CreateJS is nice, sometimes you want to "drop down to the wire"… so to speak… and work with the raw canvas itself. CreateJS imposes some restrictions on how we can do this – we can't just interject arbitrary commands into our drawing chain. We can, however, read from or write to a cached canvas. In the following example, we'll write a minimalistic game to use as the seed data, and then write a droplet pixel effect in Javascript.

To put together our scene, we'll define a forest background via CSS because it is very simple to do it that way. We also position the game canvas on top of everything else, because we want to be able to put some lighting effects over our droplet output.

After we set up our stage, the next segment of our program defines the CreateJS spritesheet that we'll use as our player. We then create a new player object itself from the spritesheet.

On line 74, we have a function that describes, in the form f(x), the lowest point y we can be at. When we move Frogatto, we use this to make sure he's never lower than the ground. The key down and key up event listeners set velocity and orientation of the player.

Now that we have our minimalistic platformer up and running, we'll add a pixel effect to the black bit underneath our feet. This is covered by the second canvas, and won't use CreateJS.

Here, we've added a section of Javascript to the end of our file. Every five frames it copies the last line of our CreateJS-controlled canvas to the first line of the background canvas. We can read from any cached object in CreateJS this way. The stage itself is always cached to canvas, since that is what is displayed on screen.

This appears to work as it should. The next bit… not so much. While it still runs and demonstrates the point of the tutorial, it does not do so correctly, and I can't figure out why. When I run the program, I see some blooms of colour coming in from the bottom of the image – but there should be nothing of interest seeded there. (It's also rather slow, so no street cred for me in that department either.)

Due to the highly repetitive, cpu-bound nature of the shader effect, I've pulled out many variables for caching and ease of manipulation. Of particular note is the dataSize variable, which ensures we don't have to look up an expensively-derived length property in an important loop.

There are a few options when it comes to looping over an array in Javascript. In order of speed, from slowest to fastest: array.forEach → reversed for-loop → for-loop. However, I advise using map and forEach when possible, because there are fewer moving parts to mess up compared to a for-loop.

The way our pixel effect works is fairly simple, in theory. In the typed array that contains our image data, we have each channel of each pixel of each line of our image, all one after the other. Our code here looks forward/back one row, then one pixel, and then both. It takes the maximum value it finds, then subtracts 2 from that value so the effect fades out over time. The result is written to a new array so it doesn't interfere with calculating the next pixel. The subtraction doesn't wrap around to 255 below 0, because we are writing to a Uint8ClampedArray. And since Javascript lets us read from outside the bounds of a typed array (yielding undefined), the maximum for the first and last lines comes out as NaN; the Uint8ClampedArray converts NaN to 0, so the first and last rows of our image will be black.
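A minimal sketch of that inner loop follows. It samples the pixel itself and its four straight neighbours on a single-channel buffer, where the original works on RGBA data (so the offsets there are multiples of 4 and width * 4) and also samples the diagonals:

```javascript
// One step of the droplet effect on a single-channel buffer: each output
// pixel is the brightest of the pixel and its four straight neighbours,
// minus a decay of 2, written to a fresh buffer so the pass doesn't feed
// back into itself.
function dropletStep(src, width) {
  const dst = new Uint8ClampedArray(src.length);
  for (let i = 0; i < src.length; i++) {
    // Out-of-bounds reads on a typed array yield undefined, which makes
    // Math.max return NaN; Uint8ClampedArray stores NaN as 0, so the
    // first and last rows come out black.
    const brightest = Math.max(src[i], src[i - width], src[i + width],
                               src[i - 1], src[i + 1]);
    dst[i] = brightest - 2;  // decay so the effect fades over time
  }
  return dst;
}
```

Running this repeatedly on a buffer with one bright pixel shows the brightness blooming outward while fading by 2 per step.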


I hope this provides some examples of good architecture in CreateJS. In some cases, I've chosen simplicity over correctness. I feel this is an acceptable trade-off, because it is much easier to correct something in-place than it is to rearchitect it. Good luck with your project!

tags: demo, interactive, createjs, article, example, web dev, html5

Diagrams and Charts of Game Mechanics

Computer games are, in broad terms, systems of exchange. The player can exchange gold coins for a fancy sword; time and attention for progress; or the player can sit back and watch the game exchange stuff with itself. By graphing the interactions between the player and the parts of the game, we can see where the focus of play will be. Let's start off by looking at the simplest example there is, Progress Quest.

One relationship, here: time positively influences progress through the game.
Progress Quest: Time is exchanged for progress, with no effort required on the player's part. If a repetitive task can be optimized, it should be, right?

The player is not noted on this graph, as the player is only involved during the setup of the game. Like big O notation, our graphs will disregard one-time costs.


A rather more interesting example is Open Transport Tycoon Deluxe. As with many games, your actions are limited by your budget. You can eventually earn enough money that it becomes a non-issue, however, at which point you are limited simply by the amount of time you can put into building new routes. (Side note: this is a curious inversion of Zynga games such as Farmville, which slow down as you progress; OpenTTD accelerates as the game goes on.) The chart below describes one facet of the game: the routes your vehicles haul goods over to make money.

A complex graph of relationships in OpenTTD.
OpenTTD: An unsubtle graph, containing the loudest relationships.

A route can (basically) be graded on a handful of characteristics: straightness, decongestion, infrastructure, and potential. To examine straightness: first, it takes more effort to make a straight route, since other things have to be moved out of the way. So more effort positively affects straightness. It costs money to (re)move things, so straightness negatively affects money. However, it positively affects decongestion, which (following the arrows) leads to vehicles having higher speed, which leads to progressing along the route faster, which leads to more money. So, straightness is an investment which pays off in the long term. Since the interaction graph is so circular, we can conclude that OpenTTD is a game of investments, with a strong focus on money.

Or, say we have a length of railway that is not congested yet. Our effort could be better spent somewhere else, even though the decongestion of the route would decrease: the route would have less future potential to generate revenue, since trains would progress more slowly as they got in each other's way. We could spend that effort making the route longer, or making it straighter.

Alternatively, we might have to wait for some time to pass, and for existing investments to pay off.
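For readers who want to tinker, an influence graph like this can be written down as plain data – a list of signed edges – and walked to check whether a chain of influences comes out positive or negative. The edges below are a hypothetical subset of the OpenTTD chart, purely for illustration:

```javascript
// A signed influence graph: +1 means "positively affects", -1 negative.
// (A hypothetical subset of the OpenTTD chart above.)
const edges = [
  { from: "effort",       to: "straightness", sign: +1 },
  { from: "straightness", to: "money",        sign: -1 },
  { from: "straightness", to: "decongestion", sign: +1 },
  { from: "decongestion", to: "speed",        sign: +1 },
  { from: "speed",        to: "money",        sign: +1 },
];

// Walk a path through the graph, multiplying signs, to see whether the
// overall influence along that path is positive (+1), negative (-1),
// or absent (0).
function pathSign(path) {
  let sign = 1;
  for (let i = 0; i < path.length - 1; i++) {
    const edge = edges.find(e => e.from === path[i] && e.to === path[i + 1]);
    if (!edge) return 0; // no such edge
    sign *= edge.sign;
  }
  return sign;
}

// Straightness pays off in the long run: the indirect route is positive...
const longTerm = pathSign(["straightness", "decongestion", "speed", "money"]);
// ...even though the direct cost is negative.
const upFront = pathSign(["straightness", "money"]);
```

The two results capture the "investment" shape of the graph: a negative up-front edge feeding a positive long-term cycle.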

Sim City

In Sim City 4, we would have a lot of arrows pointing to and from the transit system of our city. Take the power system: it basically goes currency → power plant → city growth → currency. The water system works exactly the same way, with no other significant effects. Transit systems, however, are in demand based on terrain, where things have been zoned, and what neighbours you have. Their ability to supply the demand drops when demand is high. They directly generate currency, in addition to consuming it. They're composed of roads, rails, airports, subways, and so on. The partial failure of transit systems can lower or raise demand on all systems, sometimes spectacularly so – in addition to positively or negatively affecting city growth! In terms of city infrastructure, you will put down more transit tiles than all other infrastructure combined.

Since no other system interacts as deeply as transit does, we can conclude that Sim City 4 revolves around transit. This is a bit of a thematic clash with the presentation of the game, which is about building a city. By charting relationships before building a game, we can foresee such problems and know what steps to take to avoid them.

Every version of Sim City, and most other early Maxis games, adheres to an effort-time-currency pyramid: you can put effort in, provided you aren't waiting for enough currency to accumulate to fund your efforts. This is a very popular model for simulation-genre games, and is seen in titles such as OpenTTD, the Tropico series, and Yoot Tower. (Yoot Tower is the sequel to the more popular SimTower.)

A Dark Room

I found an interesting game a few days ago. It is very well designed, and can be played at Doublespeak Games' site. The game takes a fairly extreme approach to the graph of interactions: it starts with one interaction, and moves up from there. (This is the polar opposite of Dwarf Fortress.) A Dark Room is ridiculously easy to learn, and it stays fresh throughout as the number of interactions expands.

A simple graph of relationships in A Dark Room.
Stage 1
A less simple graph.
Stage 2
An even less simple graph.
Stage 3
And so on.
Stage 4

And that – that is powerful stuff. Portal (the first one, at least) played the same game to great effect, and enjoyed enormous success. Progress is a powerful thing!

tags: game design, open ttd, sim city, a dark room, game theory, theory

The Start

This is the start of the blog. That's all there is, there is no more.