Development Team
Currently, the development team consists of me:
Docent Dr. Manfred G. Grabherr
I have been programming since I was 13. Not that I wouldn't have
done it earlier, but back then, it was difficult to get a hold of a
computer. Unsurprisingly, the first complete software program that
my friends and I wrote together was a game. A 'physics game',
actually, in which the player had to land a multistage rocket on the
moon, manually controlling the entire flight and getting constant
feedback from a rather simple physics engine. No graphics involved,
and in the end, none of us ever managed to land safely on the moon.
Which might not have been too bad after all, since we never planned
for any return trip. That would have taken us another couple of
months, since we could only use the computer on Wednesday afternoons
at school.
There is something incredibly fascinating about the combination of
the latest advances in science, computer animation, and modeling
reality, be it physics, graphics, sound, or behavior. It brings
together very different aspects of human endeavor: exploration,
achievement, interaction, competition, and, perhaps most importantly,
the awe of seeing something beautiful work. Granted, these
elements can also be seen as integral parts of any software, but
what sets games apart is that we play them for the pure fun of it,
for the experience alone. While we do that, we learn many things
about the world - and perhaps about ourselves as well - without even
noticing, which makes this approach well suited to a number of
educational purposes. Having been an engineer and scientist for
decades, I find it a pleasure to watch kids teach themselves Java or
Lua so that they can write mods for sandbox games such as Minecraft
or Garry's Mod. These might be the engineers and scientists of the
future, and I think the outlook is more than bright.
But for now, there is one particular area in which further
development is desired. Computers have come a long way since I was
13, and now even my phone is orders of magnitude more powerful than
the largest machines that existed back then. This opens up
possibilities that we could not have imagined only a few decades
ago, and one of them is modeling cognition. Consider the
following: in the scientific area that I am working in,
computational genomics, advances in biochemistry have made it
possible to generate enormously large data sets, for example the
whole genomes of dozens or even hundreds of individuals, or the tens
of thousands of genes that they express. To put this in perspective,
the human genome contains about 3 billion nucleotides,
represented by the letters A, C, G, and T. Multiply this number by
the number of individuals you sequence, and it is easy to see that
the amount of data is, well, large. And in there, you now want to
know which nucleotides make some crows black and
others grey, how stickleback
fish have adapted to different environments, how Darwin's
finches change their beak shapes, or what happens in the
pancreas of people with type 1
diabetes.
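To make the scale concrete, here is a back-of-envelope calculation using the figure from the text; the cohort size of 100 individuals is an illustrative assumption, not a number from any particular study:

```python
# ~3 billion nucleotides per human genome, one letter (A, C, G, T) each.
GENOME_LENGTH = 3_000_000_000  # nucleotides per human genome
INDIVIDUALS = 100              # hypothetical cohort size

total_letters = GENOME_LENGTH * INDIVIDUALS

# At one byte per letter, that is roughly 300 GB of raw sequence,
# before counting quality scores, read redundancy, or metadata.
total_gigabytes = total_letters / 1e9

print(f"{total_letters:,} letters, about {total_gigabytes:,.0f} GB")
```

Even this simplest possible accounting lands in the hundreds of gigabytes; real sequencing data, with its redundant reads and per-base quality scores, is larger still.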
The classic approach, which is to employ hundreds of graduate
students to manually look through the data, is not always feasible
for these data sets. This is why machine
learning, i.e. the idea of letting the computer figure things out
by itself, has become very popular in this area within the past few
years. What this approach does, in essence, is identify
distinct patterns in large data sets, partition the space
accordingly, and, in some cases, propose models that describe the
data. Computationallly, this methodology is by no means cheap, but
with proper parallelization and a modern compute architecture, it is
very well feasible to process tons and tons of data in reasonable
time frames.
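As a toy illustration of "identify distinct patterns and partition the space accordingly", here is a minimal one-dimensional k-means sketch. Real genomic pipelines use far more sophisticated methods; the data and parameters below are invented for the example:

```python
import random

def kmeans_1d(values, k, iterations=50, seed=0):
    """Minimal 1-D k-means: find k cluster centers in a list of numbers."""
    rng = random.Random(seed)
    centers = rng.sample(values, k)
    for _ in range(iterations):
        # Assign each value to its nearest center (partition the space).
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        # Move each center to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Two obvious groups of measurements; the algorithm should settle
# on centers near 1.05 and 10.05.
data = [0.9, 1.0, 1.1, 1.2, 9.8, 10.0, 10.1, 10.3]
print(kmeans_1d(data, k=2))
```

The essence is the same at any scale: the computer is never told what the groups are, only how to measure distance, and the partition emerges from the data itself.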
Let us now perhaps scale back the amount of data, but let us
increase the complexity. While some genomic
analysis software already has the concept of space,
what happens if we add a component that deals with time?
What we get is a system that automatically recognizes patterns in
both time and space, and if we add another component that takes into
account interaction and the consequences thereof,
then these patterns can also be qualified. In other words,
we now have a machine that remembers, learns, and
determines its own motivations. It is not only able to
follow ongoing action; by constantly comparing what is happening
with its memory, it can anticipate the future, projecting what
will happen from its experience. Likewise, it can "think" through
possible chains of actions and choose the best - or least worst -
option, while able to revise strategy if things do not go according
to plan. In short: now we have a system that models cognition, even
if it is on a crude level compared to humans. But, if we take this
system and let it control the non-playable characters in a computer
game, even if these are just as smart as ants or chickens, then we
get something that adds a whole new level to gameplay.
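The remember, predict, act, and revise loop described above can be sketched in a few lines. This is a deliberately crude value-estimation agent, nothing like a full cognitive model; the action names and payoffs are invented purely for illustration:

```python
import random

class TinyAgent:
    """A toy 'cognitive' NPC: it remembers outcomes of past actions,
    predicts which action will pay off best, and revises its
    predictions when reality disagrees with its expectations."""

    def __init__(self, actions, learning_rate=0.3):
        self.expected = {a: 0.0 for a in actions}  # memory: predicted payoff
        self.lr = learning_rate

    def choose(self):
        # Pick the action currently predicted to be best (or least worst).
        return max(self.expected, key=self.expected.get)

    def observe(self, action, outcome):
        # Compare what happened with what was anticipated, and revise.
        error = outcome - self.expected[action]
        self.expected[action] += self.lr * error

# A chicken-level world: "peck" usually yields food, "flee" rarely does.
rng = random.Random(1)
agent = TinyAgent(["peck", "flee"])
for _ in range(200):
    # Mostly exploit the current best guess, sometimes explore.
    if rng.random() > 0.2:
        action = agent.choose()
    else:
        action = rng.choice(["peck", "flee"])
    payoff = (1.0 if rng.random() < 0.8 else 0.0) if action == "peck" else 0.1
    agent.observe(action, payoff)

print(agent.choose())  # after some experience, "peck" should win
```

Even at this ant-or-chicken level of sophistication, the agent's behavior comes from its own accumulated experience rather than from a scripted decision tree, which is exactly the quality that makes such characters interesting to play against.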
At least, this is what I am convinced of. And, as always, there is
only one way to find out.