Rigel Group

They shoot Yaks, don't they?

Test Objects for Blankness With This One Little Trick!

Rails adds a lot of syntactic sugar to plain old Ruby. A lot of this is wrapped up in the ActiveSupport module. If you work with Rails and have not read ActiveSupport's core_ext from cover to cover, stop now, and go do it. Go on, I’ll wait.

OK, I sense a few lightbulbs coming on out there! “So that’s why …”

It seems most Rails folks know about (and use) Object#blank? and Object#present?. These are handy ways to test whether something is, well, blank or not. Ruby treats nil as false, which is great, but when dealing with web apps and user-entered data, you might get an empty string or a bunch of space characters, which you also want to treat as false, perhaps so you can set a default value or something. Continually having to check for different kinds of blankness is annoying, so instead you can use Object#blank?:

user.country = params[:country].blank? ? 'USA' : params[:country]
# or, alternatively
user.country = params[:country].present? ? params[:country] : 'USA'

Now, that still looks a bit ugly to a Rubyist’s eyes, so we can use Object#presence (github) to clean it up even more:

user.country = params[:country].presence || 'USA'

#presence returns the value if it is not blank, or nil otherwise. It works with arrays and hashes as well.
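
To make that concrete, here are a few illustrative cases (a quick sketch; the values reflect ActiveSupport’s documented behavior, but check against your version):

require 'active_support/core_ext/object/blank'

nil.blank?         # => true
"".blank?          # => true
"   ".blank?       # => true   (whitespace-only strings count as blank)
[].blank?          # => true
{}.blank?          # => true
0.blank?           # => false  (zero is a value, not blank)

"   ".presence     # => nil
[].presence        # => nil
[1, 2].presence    # => [1, 2]
{ a: 1 }.presence  # => { a: 1 }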

It’s a small thing, but it neatly encapsulates a common pattern, and isn’t that pretty much what we get paid to do all day?

Frequently Used Commands

If you spend a lot of time at the *nix command line, you probably use your shell history quite a bit to avoid typing commands over and over. Or maybe you define shell aliases for your most precious incantations. Shell history is great, and I have mine set to save a bajillion entries, ‘cause we have more than 640k these days. But even with a nearly infinite shell history, I find myself wanting to know which few important commands I usually need in the specific directory I happen to be in.

Frequently Used Commands (fuc)

So, this little shell script will let you save off the important commands to a .fuc file in the local directory, and then easily recall those commands in a little menu.

Let’s say you are working away, and you want to remember a particular command:

$ fuc gulp --require coffee-script watch

This will save the command to the .fuc file, and also run it. Later, you can just type fuc from the same directory and get this:

$ fuc
fuc - Frequently Used Commands
1) gulp --require coffee-script watch
2) echo "(╯°□°)╯ ┻━┻"
3) cat log/server.log | grep ERROR
Select command to run (q to Quit):

What I really wanted to do was put the selected command on the command line, ready to be edited, then hit enter to run it. But I couldn’t be bothered to figure out how to do that. So selecting a command just runs it immediately.

(I know there are a lot of much more complicated implementations that tweak your actual command history to be directory-specific, but I wanted to be able to specify which commands were the important ones, and just remember those.)

Here is the script. Put it somewhere in your path and name it fuc, or if that’s too NSFW then I’m sure you can come up with a better name. Maybe dangit or something.

#!/bin/bash

# Split the saved command list on newlines rather than spaces
IFS=$'\r\n'

PS3="Select command to run (q to Quit):"

if [ $# -eq 0 ]
then
  echo "fuc - Frequently Used Commands"
  cmdList=$(cat .fuc | sort -u)
  select cmdName in $cmdList; do
    # Run the chosen command; an invalid choice (like q) just exits
    if [ -n "$cmdName" ]; then
      echo "$cmdName"
      eval "${cmdName}"
    fi
    break
  done
  exit 1
fi

echo Saving and executing command: $*
echo $* >> .fuc
eval $*

React JSX Transformer in Rails Middleware

Recently I’ve been having a blast playing around with React, and I found this neat hack from @ssorallen called Reactize.

What he is doing is grabbing the HTML response from the server, and then in the browser running the JSXTransformer on the HTML, and mounting the whole document body as a React component. Very clever!

So to riff on that theme a little bit, here is a Rails middleware that will take the HTML page the server was going to send to the client and replace it with the JSXTransformed version, which is basically a JavaScript snippet. So the “heavy lifting” of the JSXTransformer is done server-side.

Another thing we can do is hash the result and throw it in the Rails cache, so we aren’t doing more work than we need to.

class JsxMiddleware
  def initialize(app)
    @jsxcode = File.read("#{Rails.root}/app/assets/javascripts/JSXTransformer.js")
    @app = app
  end

  def call(env)
    status, headers, response = @app.call(env)
    if env['HTTP_X_JSX'].present?
      response.body = convert_to_jsx(response.body[/<body>(.*)<\/body>/m,1])
      headers['X-JSX'] = 'true'
    end
    [status, headers, response]
  end

  def jsx_context
    # Use a Thread-local variable to store the JS context, with the JSXTransformer code loaded.
    # That way each thread will have its own and we are thread-safe.
    Thread.current[:jsx_context] ||= begin
      ExecJS.compile("global={};" + @jsxcode)
    end
  end

  def convert_to_jsx(html="")
    snippet = "/** @jsx React.DOM */\n" + html
    hash = Digest::MD5.hexdigest(snippet)
    Rails.cache.fetch "jsx:#{hash}" do
      jsx_context.call("global.JSXTransformer.transform", snippet)['code']
    end
  end
end
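
One thing not shown above is wiring the middleware into the Rails stack. A minimal sketch, assuming the class lives in app/middleware/jsx_middleware.rb and your app module is called MyApp (both placeholders):

# config/application.rb
require_relative '../app/middleware/jsx_middleware'

module MyApp
  class Application < Rails::Application
    # Append to the middleware stack so it can rewrite the rendered HTML response
    config.middleware.use JsxMiddleware
  end
end

With that in place, any request carrying the X-JSX header should get the transformed body back.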

So on the client, given a link that looks like <a href="/thepage" data-behavior="getViaJSX">Click Me</a>, you could do something like this to request a JSXTransformed page…

$ ->
  $("[data-behavior='getViaJSX']").on "click", (e) ->
    e.preventDefault()
    $.ajax
      url: e.target.href
      # The middleware only kicks in if this header exists
      headers: {'X-JSX': true}
      success: (data) ->
        component = eval(data)
        React.renderComponent(component, document.body)

Another option would be to bake it into TurboLinks itself by patching it to make the request with the X-JSX header.

That’s it! Not sure exactly what it is good for, but a fun exercise anyway.

React JS Roundup

I recently gave a talk on React at FullStack, which is the coolest meetup North of the Wall. You can find the slides here.

React is a game-changer, and will fundamentally change the way we build apps on the web. To learn more about it, here are some great resources:

Rethinking Best Practices

Pete Hunt gives a talk introducing React at JSConf EU 2013. slides

Functional DOM Programming

A blog post by Pete Hunt which explains the basics of React components.

Real-time frosted glass effect on mobile

Great walkthrough on how to create fast animations on modern mobile devices.

React + TurboLinks

Fun proof-of-concept where Rails TurboLinks is married to React, with some mind-bending results.

Om

Take the red pill, and check out Om, which is the combination of ClojureScript with its immutable data structures, and React. This is where things get really exciting! (Follow David Nolen for more interesting tidbits.)

Makona, the Block-Style Editor

Because the world desperately needs another Rich Text editor for the web, I give you Makona. Makona is the Hawaiian word for a mason, someone who works with blocks. Makona lets you edit text by working with blocks — Markdown blocks, text blocks, image blocks, code blocks, etc.

The blocks can be saved back to the server as a blob of HTML, or in a JSON structure that contains all the data for the blocks, which opens up some neat possibilities for reusing that content in different contexts.

This project was started mainly as an excuse to learn React.js, but with some elbow grease I think it can become a useful tool in the open-source universe. Feel free to pitch in!

Using React.js With CoffeeScript

If you haven’t heard of the latest front-end hotness, head on over to React.js and prepare to have your mind blown. Brought to us by the fine folks over at Facebook, it presents a new take on building browser apps. Once you get your head around it, it really makes a lot of sense, especially if you need something lightweight to add JavaScript components to your existing site. It doesn’t have things like routers or data models. It just concerns itself with building interactive view components in a highly composable (and performant!) way.

But, the examples are all in POS (Plain Ole JavaScript), which is a problem for me. I much prefer CoffeeScript. And due to the weirdness that is JSX, it is not easy to get React to work with CoffeeScript out-of-the-box.

The first thing to note is that you need to get your workflow pipeline set up correctly, because you will need to compile your CoffeeScript to JSX-style JavaScript, and then compile your JSX-style JavaScript to regular old JavaScript. (I know, it sounds crazy and I wouldn’t blame you if you bounced right now. But if you stick with me, enlightenment will come.)

I set up a Grunt workflow that does this, and the relevant parts of the Gruntfile.coffee look like this:

module.exports = (grunt) ->
  grunt.initConfig
    pkg: grunt.file.readJSON("package.json")

    srcDir: "./src"
    testDir: "./test"
    outputDir: "./dist"

    # Compile to JS first, then we will compile the JSX in another task and move to /dist
    coffee:
      options:
        # This is IMPORTANT, because the first line has to be a JSX comment
        bare: true
      all:
        files: [
          expand: true
          cwd: 'src/'
          src: ['**/*.coffee']
          dest: 'src/'
          ext: '.js'
        ]

    react:
      all:
        files:
          "<%= outputDir %>": "<%= srcDir %>"

    regarde:
      coffee:
        files: "<%= srcDir %>/**/*.coffee"
        tasks: ["coffee", "spawn_react"]

    # Set up a static file server
    connect:
      server:
        options:
          hostname: "0.0.0.0"
          port: 9292
          base: "."
          keepalive: true

    # Clean up artifacts
    clean:
      output: "<%= outputDir %>"

    # Execute server script
    exec:
      server:
        cmd: "./server.js"

  grunt.loadNpmTasks "grunt-contrib-coffee"
  grunt.loadNpmTasks "grunt-regarde"
  grunt.loadNpmTasks "grunt-contrib-connect"
  grunt.loadNpmTasks "grunt-contrib-clean"
  grunt.loadNpmTasks "grunt-exec"
  grunt.loadNpmTasks 'grunt-react'

  # Make sure we get an error on compilation instead of a hang
  grunt.registerTask 'spawn_react', 'Run React in a subprocess', () ->
    done = this.async()
    grunt.util.spawn grunt: true, args: ['react'], opts: {stdio: 'inherit'}, (err) ->
      if err
        grunt.log.writeln(">> Error compiling React JSX file!")
      done()

  grunt.registerTask "server", ["exec:server"]
  grunt.registerTask "build", ["coffee", "spawn_react"]

You will also need the server.js file, which is here:

#!/usr/bin/env node

var spawn = require("child_process").spawn,
    watcher = spawn("grunt", ["regarde", "--force"]),
    server = spawn("grunt", ["build", "connect:server"]);

watcher.stdout.on("data", function(data) {
  var importantOutput = data.toString().split(/\r?\n/).filter(function(str) {
    return />>|Done|Warning|Running/.test(str);
  });

  process.stdout.write(importantOutput.join("\n"));
  // process.stdout.write(data);
});

server.stdout.on("data", function(data) {
  process.stdout.write(data);
});

watcher.on("exit", function(code, signal) {
  server.kill();
  process.exit();
});

server.on("exit", function(code, signal) {
  watcher.kill();
  process.exit();
});

process.on("exit", function() {
  watcher.kill();
  server.kill();
});

Now you can do a grunt server and start writing React code.

Here are some (contrived) code snippets that might help you out if you are struggling with how to reconcile React with CoffeeScript syntax. The secret is to drop down to raw JavaScript with the backtick operator when necessary, so the code stays intact when JSX transpiles it.

`/** @jsx React.DOM */`
# The above line HAS to be the first line in the file for JSX to know to process it.
MySimpleComponent = React.createClass
  render: ->  `<pre>{this.props.mytext}</pre>`

MyComponent = React.createClass
  render: ->
    `(
      <ul>
        {this.props.items.map(
          function(item){
            return (
              <li><a href="#" onClick={this.props.handleClick}>{item}</a></li>
            )
          }, this)
        }
      </ul>
    )`

A big thanks to Facebook and everyone who worked to bring this project to life. I look forward to using it in my projects.

[UPDATE] Vjeux has a blog post about how to actually use CS instead of shelling out to JS.

Exporting SharePoint User Profiles to CSV Using Powershell

You may have the (mis-)fortune of working with SharePoint, and you may also need to gain access to the User Profile data contained therein. You may also want to try out Microsoft’s PowerShell scripting language. If so, you came to the right place, my friend!

This seems like a common enough task, but the code I found in my Googling just wasn’t doing it for me, so I am adding this version to the interwebs in the hopes someone else will find it useful.

#
# Export Sharepoint User Profiles to CSV file
# John Lynch 2013
# MIT License

$siteUrl = "http://YOUR_HOSTNAME_HERE"
$outputFile = "c:\temp\sharepoint_user_profiles.csv"


Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
$serviceContext = Get-SPServiceContext -Site $siteUrl
$profileManager = New-Object Microsoft.Office.Server.UserProfiles.UserProfileManager($serviceContext);
$profiles = $profileManager.GetEnumerator()

$fields = @(
            "SID",
            "ADGuid",
            "AccountName",
            "FirstName",
            "LastName",
            "PreferredName",
            "WorkPhone",
            "Office",
            "Department",
            "Title",
            "Manager",
            "AboutMe",
            "UserName",
            "SPS-Skills",
            "SPS-School",
            "SPS-Dotted-line",
            "SPS-Peers",
            "SPS-Responsibility",
            "SPS-PastProjects",
            "SPS-Interests",
            "SPS-SipAddress",
            "SPS-HireDate",
            "SPS-Location",
            "SPS-TimeZone",
            "SPS-StatusNotes",
            "Assistant",
            "WorkEmail",
            "SPS-ClaimID",
            "SPS-ClaimProviderID",
            "SPS-ClaimProviderType",
            "CellPhone",
            "Fax",
            "HomePhone",
            "PictureURL"
           )

$collection = @()

foreach ($profile in $profiles) {
   $user = "" | select $fields
   foreach ($field in $fields) {
     if($profile[$field].Property.IsMultivalued) {
       $user.$field = $profile[$field] -join "|"
     } else {
       $user.$field = $profile[$field].Value
     }
   }
   $collection += $user
}

$collection | Export-Csv $outputFile -NoTypeInformation
$collection |  Out-GridView

Validating SAML Tickets in JRuby Redux

A while back I had the pleasure o_O of implementing SAML in JRuby. At that time I was working with Java 1.7.0u17, and all was right with the world.

Recently I wanted to upgrade to Java 1.7.0u40, and the Validate class stopped working, throwing this error:

Exception:javax.xml.crypto.URIReferenceException: com.sun.org.apache.xml.internal.security.utils.resolver.ResourceResolverException: Cannot resolve element with ID _673ef297-23ab-428c-8e11-7fed395a7daf

Hmm. Something has obviously changed. A Google session later, this bug report points me in the right direction. It turns out that Java used to assume any XML node with an attribute named “ID” was in fact an ID node and could be found with getElementById. But newer versions conform more closely to the XML spec and require the node to be “tagged” as an ID node via a schema.

OK, so we change the code (original here) to apply the correct schema:

Validator.java

// Snip...

SchemaFactory schemaFactory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
Schema schema = schemaFactory.newSchema(new URL("http://docs.oasis-open.org/security/saml/v2.0/saml-schema-protocol-2.0.xsd"));
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
dbf.setNamespaceAware(true);
dbf.setSchema(schema);
Document doc = dbf.newDocumentBuilder().parse(new InputSource(new StringReader(samlResponse)));

// Snip..

This actually works and validates the SAML XML response, but it takes 30+ seconds to do it. Maybe that’s because it’s trying to grab the schema from the web? So I try using a local copy, and it still takes 30+ seconds to run. Drat.

Since there is more than one way to shave a yak, instead of using a schema you can also programmatically tag nodes as ID nodes. So let’s see what that looks like:

Validator.java

// Snip...

DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
dbf.setNamespaceAware(true);
Document doc = dbf.newDocumentBuilder().parse(new InputSource(new StringReader(samlResponse)));

// Loop through the doc and tag every element with an ID attribute as an XML ID node.
XPath xpath = XPathFactory.newInstance().newXPath();
XPathExpression expr = xpath.compile("//*[@ID]");
NodeList nodeList = (NodeList) expr.evaluate(doc, XPathConstants.NODESET);
for (int i=0; i<nodeList.getLength() ; i++) {
  Element elem = (Element) nodeList.item(i);
  Attr attr = (Attr) elem.getAttributes().getNamedItem("ID");
  elem.setIdAttributeNode(attr, true);
}

// Snip..

Voila! This works, and of course it’s super fast.

DevOps for People Who Hate DevOps

DevOps started out as scrappy developers who were just trying to Get Shit Done. They needed a couple of machines set up, and thought, “Hey, I am a programmer. Why don’t I program the machine to set itself up!” Voila! Genius, I tell you! But then, sadly, DevOps grew to become a Movement, and was co-opted by System Administrators and large commercial enterprises looking for something complicated to sell. The tools (Puppet, Chef, I’m looking at you) became Rube Goldberg contraptions orders of magnitude more complicated than the little old LAMP stack we were trying to set up in the first place. Now you need DevOps infrastructure to manage your infrastructure. DevOps took the place of SysAdmins, and the average developer was left behind.

Then, like a breath of fresh air, Ansible came on the scene. Billed as a dead-simple DevOps system that relies on nothing more than SSH, it was love at first sight. Unfortunately, Ansible, it seems, has succumbed to the siren song of the Enterprise, and if you were to look at the AnsibleWorks web site today, you would weep at the amount of marketing technospeak that has been strewn about. But fear not! The core of Ansible has not changed, and if you can wade past the BS you will find a jewel that may become the sharpest tool in your belt.

So, if we break Ansible down to its core, it is essentially a way to script an SSH session to a server. There are no prerequisites for the server: as long as you can reach it via SSH and it has Python installed, Ansible can manage it. The magic sauce that Ansible brings to the party is that its playbooks are YAML files that declaratively specify how the machine should look, and Ansible will do whatever needs to be done to make the machine look like that. Ansible runs are idempotent: you can run the same playbook against a machine multiple times, and if everything has already been done, it won’t do it again.

Let’s get started. On OS X, follow along:

Make sure you have Homebrew installed, and then:

$ brew install python

This gives you a nice local version of Python that we will install Ansible into.

$ pip install jinja2
$ pip install PyYAML
$ pip install paramiko
$ pip install boto
$ pip install ansible

Once those finish, you will have an ansible command to use. We can test it out by doing this (on OS X, make sure you have Remote Login checked in System Preferences -> Sharing so that you can SSH into localhost):

$ echo "[local]\n127.0.0.1 ansible_python_interpreter=/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python" >> ~/ansible_hosts
$ export ANSIBLE_HOSTS=~/ansible_hosts
$ ansible all -m ping --ask-pass     # this will fail if you don't have an SSH server running on localhost.
127.0.0.1 | success >> {
    "changed": false,
    "ping": "pong"
}


Now, we need a herd of boxen to manage, so for that grab your Amazon EC2 keys, make sure you have the EC2 command line tools installed and configured, and saddle up.

First, we are going to create a playbook, which is a YAML file that describes what we want done. This first playbook is a bit of a mindbender, because we are going to script Ansible to log into our local machine, and then run EC2 scripts to provision and boot EC2 instances. Once we have those, we can then use Ansible to manage our newborn servers.

---
# Ansible Playbook to create and manage EC2 servers

- name: Provision servers
  hosts: local
  connection: local
  user: john
  gather_facts: false

  tags:
      - provision

  vars:
      keypair: ~/ansible_ec2.pem
      instance_type: t1.micro
      security_group: Web
      # bitnami-cloud-us-west-2/lampstack/bitnami-lampstack-5.4.11-1-linux-ubuntu-12.04.1-x86_64-s3.manifest.xml
      image: ami-0021ab30
      instance_count: 2

  # Provision 2 servers...

  tasks:
    - name: Launch server
      local_action: ec2 keypair=${keypair} group=${security_group} instance_type=${instance_type} image=${image} wait=yes count=${instance_count}
      register: ec2

    # Use with_items to add each instances public IP to a new hostgroup for use in the next play.

    - name: Add new servers to host group
      local_action: add_host name=${item.public_dns_name} groups=deploy
      with_items: ${ec2.instances}

    - name: Wait for SSH to be available
      local_action: wait_for host=${item.public_dns_name} port=22
      with_items: ${ec2.instances}

    - name: Wait for full boot
      pause: seconds=15

# Now, configure our new servers

- name: Configure servers
  hosts: deploy
  user: ubuntu
  sudo: yes
  gather_facts: true

  tags:
    - config
    - configure

  # Install Java, install Elasticsearch, replace settings....

  tasks:

    - name: Install JRE
      apt: pkg=openjdk-6-jre-headless state=latest install_recommends=no update_cache=yes

    - name: Download ElasticSearch package
      get_url: url=http://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-0.90.0.deb dest=~/elasticsearch-0.90.0.deb

    - name: Install ES .deb file
      shell: dpkg -i ~/elasticsearch-0.90.0.deb
      notify: restart elasticsearch

    - name: Install cloud-aws plugin
      shell: /usr/share/elasticsearch/bin/plugin -install elasticsearch/elasticsearch-cloud-aws/1.11.0
      notify: restart elasticsearch

    - name: Make elasticsearch config dir
      file: path=/etc/elasticsearch/ state=directory

    - name: Copy over Elasticsearch settings
      copy: src=./elasticsearch/elasticsearch.yml dest=/etc/elasticsearch/elasticsearch.yml
      notify: restart elasticsearch

  handlers:
    - name: restart elasticsearch
      action: service name=elasticsearch state=restarted

Hacking the (Minecraft) Matrix With JRuby

If you have children of a certain age, or are a child at heart yourself, you have probably come across Minecraft, a wonderful game that proves that gameplay and creativity can still trump fancy graphics, explosions, and photoreal environments.

Minecraft is all about building things, but you build in the game, using only the tools that the game’s creator gives you. After having mastered the game from the inside, my sons wanted to see the matrix. They wanted to know how it worked, and change it, and come up with their own tools and their own rules. So, after poking around a bit, I discovered that Minecraft is written in Java, and there is a huge community of people who mod the game. But Java is not the best language to teach my 9-year-old, so a bit more digging brought me to this awesome project by one of the JRuby guys, Purugin. This lets you program Minecraft using Ruby, which sounds like a whole lot of fun, so let’s get started.

(These instructions assume you are on OSX. The same ideas should translate over to Windows as well.)

First, go to Minecraft and download (and purchase) the desktop client, and get that working on its own.

Next, we need to get the CraftBukkit server, which will hold our world and our custom code, and we will eventually connect our clients to this server.

mkdir ~/Code/CraftBukkit
cd ~/Code/CraftBukkit
curl -L http://dl.bukkit.org/downloads/craftbukkit/get/02084_1.5.1-R0.2/craftbukkit-beta.jar > craftbukkit.jar
echo "cd ~/Code/CraftBukkit" > start.sh
echo "java -Xms1024M -Xmx1024M -jar craftbukkit.jar -o true" >> start.sh
chmod +x start.sh

mkdir plugins
cd plugins
curl -LO http://dev.bukkit.org/media/files/675/889/purugin-1.4.7-R1.0.1-bukkit-1.4.7-R1.0-SNAPSHOT.jar
curl -LO https://github.com/enebo/Purugin/raw/master/examples/generators/cube.rb
curl -LO https://github.com/enebo/Purugin/raw/master/examples/generators/tunnel.rb
curl -LO https://github.com/enebo/Purugin/raw/master/examples/purogo.rb
mkdir purogo
cd purogo
curl -LO https://github.com/enebo/Purugin/raw/master/examples/purogo/tower.rb
curl -LO https://github.com/enebo/Purugin/raw/master/examples/purogo/pyramid.rb
curl -LO https://github.com/enebo/Purugin/raw/master/examples/purogo/star.rb
curl -LO https://github.com/enebo/Purugin/raw/master/examples/purogo/cube.rb

Now, let’s start the server:

~/Code/CraftBukkit/start.sh

You should see the server start up. Make sure you see lines like this in the output, which indicate that the JRuby plugins loaded OK:

08:31:54 [INFO] [PuruginPlugin] Loading PuruginPlugin v1.4.7-R1.0.1
08:32:05 [INFO] [PuruginPlugin] Enabling PuruginPlugin v1.4.7-R1.0.1
08:32:05 [INFO] [Cube Generator] version 0.2 ENABLED
08:32:05 [INFO] [purogo] version 0.2 ENABLED
08:32:05 [INFO] [Tunnel Generator] version 0.1 ENABLED
08:32:05 [INFO] Done (8.607s)! For help, type "help" or "?"
>

At this point, you can run the Minecraft client app, choose Multiplayer -> Direct Connect to localhost, and you should connect to our server.

As you are playing Minecraft, you can issue commands by typing /, so let’s try our first command by typing this in the Minecraft client:

/cube 5 5 5

This will create a 5-block cube in front of you. The code that made that happen is here:

cube.rb

class CubeGenerationPlugin
  include Purugin::Plugin
  description 'Cube Generator', 0.2

  def on_enable
    public_command('cube', 'make n^3 cube of type', '/cube {dim}') do |me, *args|
      dim = error? args[0].to_i, "Must specify an integer size"
      error? dim > 0, "Size must be an integer >0"
      type = args.length > 1 ? args[1].to_sym : :glass
      z_block = error? me.target_block.block_at(:up), "No block targeted"

      me.msg "Creating cube of #{type} and size #{dim}"
      dim.times do
        y_block = z_block
        dim.times do
          x_block = y_block
          dim.times do
            x_block.change_type type
            x_block = x_block.block_at(:north)
          end
          y_block = y_block.block_at(:east)
        end
        z_block = z_block.block_at(:up)
      end
      me.msg "Done creating cube of #{type} and size #{dim}"
    end
  end
end

Any .rb file you put in the ~/Code/CraftBukkit/plugins directory will be automatically picked up and made available for you to call.
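
As a minimal sketch of such a plugin, here is a hypothetical hello.rb, modeled on the cube generator above (the command name and message are made up; the only Purugin calls used are the ones already shown):

class HelloPlugin
  include Purugin::Plugin
  description 'Hello', 0.1

  def on_enable
    # Register a /hello command that just sends a chat message back to the player
    public_command('hello', 'say hello', '/hello') do |me, *args|
      me.msg "Hello from JRuby!"
    end
  end
end

Drop that file into the plugins directory and /hello should become available in-game.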

So, that’s great, and it’s Ruby, which is a bit nicer (IMHO) than Java; however, it’s still a bit beyond my 9-year-old son. Luckily, there is also a simple Logo implementation available to us, so that in Minecraft we can type

/draw tower

and you should see a chicken drawing a tower in front of you. The code that does this is:

turtle("tower") do
  # Draw base of cube
  square do
    4.times do |i|
      mark i
      forward 5
      turnleft 90
    end
  end

  pillars do
    4.times do |i|
      goto i
      turnup 90
      forward 5
    end # Still at top of last pillar
    turndown 90
  end

  3.times do
    square
    pillars
  end
  square
end

Ahh, a much nicer syntax for a child to grasp, and they can see their creation come to life in front of them, which is highly motivating.

Any .rb file (written in the Logo-ish syntax) you put in ~/Code/CraftBukkit/plugins/purogo will be available to call with the /draw command.
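
As a tiny sketch of what such a file might look like, here is a made-up square.rb that uses only the forward and turnleft commands from the tower example above:

turtle("square") do
  # Trace a simple square outline, four blocks on a side
  4.times do
    forward 4
    turnleft 90
  end
end

Save it in the purogo directory and /draw square should trace it out.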

So, we have barely scratched the surface of what can be done, but it is wonderful to see a child’s eyes light up the first time they “hack the matrix” and write code that creates something they can actually see in the Minecraft world. Many thanks to Tom Enebo for creating Purugin, and hopefully sparking an interest in programming in our children.