
27 July 2007 @ 09:32 pm
WEEK 2 - Pre-production (Friday)  
(Backdating this; it was mostly written on Friday, but I didn't quite get around to posting it until today!)

Well, we're now more than halfway through pre-production, which is a damn scary thought.

I spent most of the day in the presentation room, so I didn't get to keep tabs on what everyone else was doing as much as I usually try to. I'd say that it's part of my job of 'creative control' to oversee everything, but the truth is, I'm just curious and kind of in awe of everyone who can draw so well and actually make head or tail of Max/Ageia/C++ (Aporkalypse-style) ^_-


In the morning, Ben got the Ageia plugin working with Max 9! He created a box and it fell to a ground plane. He also discovered that the Ageia plugin exports physics data in binary form, instead of in PML. This means that we won't have to do the "pre-cook" that was required with Aporkalypse, which is definitely a benefit.

In the afternoon, when I next saw him, Ben had managed to get his basic framework reading in the data exported from Max/Ageia. A cube appeared, and it fell onto a plane! It might not seem like much, but it was very encouraging. I feel like we're actually starting to make some headway into what seemed like an impenetrable frontier of unfamiliar programs and code. I'm even beginning to think that we might actually get the game to a decent standard by the deadline!

But Ben's away for the next week and a half (off skiing, lucky sod!) so it's up to the rest of us to pick up where he left off next Wednesday.

One issue that cropped up with Ben's experimentation is that of 3D orientation. For some very strange reason, Max uses Z as the vertical axis, instead of Y. This is a very odd convention - I don't know of any other program that does it like that. I asked some of the graphics students, and they said Maya doesn't do it, so it can't just be an art program thing. Anyway, it means that we're going to have to turn our brains sideways when we're looking at models, and probably apply a rotation transform when we load everything, since NovodeX and Renderware both use Y as the vertical axis.
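For what it's worth, the Z-up to Y-up fix-up can be done per-vertex rather than as a load-time rotation. A minimal sketch (assuming both conventions are right-handed; `Vec3` and `maxToYUp` are made-up names, not anything from our actual code):

```cpp
#include <cassert>

// A point in 3-space.
struct Vec3 { float x, y, z; };

// Convert a point from Max's right-handed Z-up convention to a
// right-handed Y-up convention (as in NovodeX/Renderware): the old Z
// becomes the new Y (up stays up), and the old Y is negated into the
// new Z so handedness is preserved.
Vec3 maxToYUp(const Vec3& v) {
    return Vec3{ v.x, v.z, -v.y };
}
```

This is the same thing as rotating -90 degrees about the X axis, so doing it once at load time or baking it into the exporter should give identical results.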

Meanwhile, Peter and the NPC team continued work on the AI ruleset (frequently popping up to ask the design team questions on NPC behaviour that would be required).

As they started to think about actually implementing the AI, they hit what will probably be their biggest implementation issue - how to implement pathfinding nodes. The AI will use both pathfinding (navigating paths by travelling from node to node) and steering (avoiding objects in the local vicinity) behaviours, neither of which was used in Aporkalypse, so we're really working from scratch. But the biggest problem with pathfinding is how to set, store and extract the node locations and connections.

We spent a while debating various potential methods. One suggestion was to use the physics engine to store the information. Each node could be created as a physics object, and then nodes could be attached to each other using Ageia's 'joints' method. The physics loading code could be programmed to not load any physics objects with a certain prefix (ie, the pathfinding nodes), and then we could have a different loading module to extract only those particular nodes from the physics model and feed them directly into nodes in the actual code. Daniel was much in favour of this method.

The other method that Shane-the-graphics-lecturer-not-the-programmer suggested was using objects in Max to store and connect the nodes, and have a script that extracts that information and then dumps it in a format that the pathfinding loading code can read. I'm in favour of this method over the other one - it would seem sensible to me to keep each section (ie, AI and physics, which are unrelated) discrete, rather than be intertwined unnecessarily.

I'm still not sure how the objects will be connected together, though - hopefully I can corner Shane next week and get him to give me a crash course on how he plans to do it. Some of the graphics students had some interesting ideas - everything from using 'bones' to connect the node objects, to setting connected nodes in consecutive keyframes of an animation.
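Whichever way the nodes end up being authored and exported, the in-game side is comparatively settled: once the loader has produced node positions and connections, the runtime just needs a graph and a search over it. A rough sketch of what that might look like (all names here are hypothetical, and I've used plain breadth-first search for the fewest-hops path; a real implementation might want A* with distances instead):

```cpp
#include <cassert>
#include <algorithm>
#include <queue>
#include <vector>

// Hypothetical in-memory form of a pathfinding node, however the
// positions and links end up being set in Max: a location plus the
// indices of the nodes it connects to.
struct PathNode {
    float x, y, z;
    std::vector<int> links;   // indices of neighbouring nodes
};

// Breadth-first search from 'start' to 'goal'. Returns the node
// indices along a fewest-hops path (start and goal included), or an
// empty vector if the goal is unreachable.
std::vector<int> findPath(const std::vector<PathNode>& nodes,
                          int start, int goal) {
    std::vector<int> cameFrom(nodes.size(), -1);  // -1 = not visited
    std::queue<int> open;
    open.push(start);
    cameFrom[start] = start;
    while (!open.empty()) {
        int cur = open.front(); open.pop();
        if (cur == goal) break;
        for (int next : nodes[cur].links) {
            if (cameFrom[next] == -1) {
                cameFrom[next] = cur;
                open.push(next);
            }
        }
    }
    if (cameFrom[goal] == -1) return {};          // no route exists
    std::vector<int> path;
    for (int n = goal; n != start; n = cameFrom[n]) path.push_back(n);
    path.push_back(start);
    std::reverse(path.begin(), path.end());
    return path;
}
```

The nice thing about this split is that the debate above only affects how the `PathNode` list gets filled in; the search code doesn't care whether the data came from physics joints or a Max script.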


Well, the design has really started to come together today. We got level 1 even more fleshed out, and made a solid framework for levels 2 and 3. Now we've got all the sections in each level specified, and the order they occur. There's wiggle room, of course - we may end up discovering we've just tried to put too many things in and have to cull some. But I hope not, because I think we could do some serious justice to these ideas, if we get the time.

We finally resolved our problem with the 'bots on Belfry that are too high for Lobot to deactivate' situation. I can't claim credit, alas - Geoff came up with the idea (it was actually one he suggested ages ago, I didn't think it was usable at the time). He suggested that instead of making him invulnerable, the shield could make Belfry invisible. After mulling over this idea for a bit, we realised it could be workable. It would mean that when enemy robots see and chase/try to attack Belfry, he can flip on his shield pretty quickly and the robots will stop, look around, be unable to work out where he's gone, and will eventually wander away again, back to their normal locations or patrol routes.

We thought about the practicalities and physics of invisibility and decided to change the flavour of the shield slightly - instead of making him actually 'invisible', the shield could be a sense-scrambler that Belfry's been developing. It only works on robotic senses, so has no effect on humans (not that that's an issue with this particular game, since there are no other humans).
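The nice thing about the scrambler idea is how simply the enemy reaction maps onto a state machine: chase while Belfry can be sensed, search when he vanishes, give up and return to patrol after a while. A minimal sketch of that behaviour (every name here is made up for illustration; nothing from our actual design docs):

```cpp
#include <cassert>

// How an enemy bot might react to Belfry's sense-scrambler: while the
// scrambler is on, the bot can't sense him, so it loses its target,
// looks around for a bit, then wanders back to its patrol route.
enum class BotState { Patrol, Chase, Search };

struct EnemyBot {
    BotState state = BotState::Patrol;
    int searchTicks = 0;                 // how long we've been looking
    static const int GIVE_UP_AFTER = 5;  // ticks before giving up

    void update(bool canSenseBelfry) {
        switch (state) {
        case BotState::Patrol:
            if (canSenseBelfry) state = BotState::Chase;
            break;
        case BotState::Chase:
            if (!canSenseBelfry) {           // scrambler flipped on
                state = BotState::Search;
                searchTicks = 0;
            }
            break;
        case BotState::Search:
            if (canSenseBelfry) {
                state = BotState::Chase;     // scrambler dropped too soon
            } else if (++searchTicks >= GIVE_UP_AFTER) {
                state = BotState::Patrol;    // wander back to the route
            }
            break;
        }
    }
};
```

The Search state is what sells the illusion: without it, bots would snap straight back to patrolling the instant the shield goes up, which would look robotic in the wrong way.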

We did some more work on the mini-game, too, with Geoff at the helm and the rest of us making suggestions and drawing 'helpful' squiggles on the whiteboard.

And finally, at the close of the second week of pre-production, we had our first progress report to the class. Fortunately (for me, anyway, I'm terrible at public speaking) Paul gave the speech, giving a description of what we'd put together so far for each of the levels. He used our whiteboard notes to demonstrate, which was a bit of a worry - they were kind of all over the place and rather incomplete, since they were scrawled progressively as we tossed ideas back and forth over two days.

The report wasn't as detailed or as long as we had planned, but we simply ran out of time. As Paul said, so far the programming and graphics teams are pretty much on-schedule, but the design team is lagging behind. In our defence I'll say that we're only learning what we're supposed to be doing as we're doing it.

We've been trying to debate the merits of different ideas thoroughly, making sure that everything occurs in an order that works for both the story and for player progression, and so far, I think we're doing a decent job, considering our inexperience. The design team is an interesting group, because, much like the entire programming class, it's made up of very laid-back individuals. This means we don't have full-on flame wars or major meltdowns, but it does mean discussions can sometimes go on for a while.

I think in future I need to try and stay more focussed, and make sure everyone else does too. I need to set out the goals of each session more clearly (eg, "today, we're here to decide [this], and at the end of it, we'll have [this]"), as well.

But on the up side, I think we've done enough of the design that the programmers and artists can start work while we finish it up.

Mostly what's left are the specifics: the exact types of NPCs we're going to have, what the puzzles in each section are and how they work, etc. Puzzles will probably be our focus for the next week - we need suggestions! If you can think of a setup, send it my way and we'll try to work it in somehow =D


I didn't hear much in the way of news on the graphics front today. The artists were still working on their designs.

At one stage I caught Paul and Eve standing in front of the wall, picking out bits they liked from various robot designs. One of Simon's designs seems to be the frontrunner for Lobot - a robot with a teardrop-shaped head, and a posture somewhat reminiscent of Dog's (from Half-Life 2). Simon's robot is legged, but could be modified to use tracks or wheels just as easily.

Next week the designs will have to be narrowed down even further, as the art bible starts to take shape.

And I don't even want to think about the technical design document >.<

So that's it for another week - stay tuned for more Lobot-which-still-needs-a-new-name next Thursday!

And finally, a shout-out to the ever-awesome TN - thanks for your continuing interest in the project and all your encouragement =D
hydrolysistn on August 1st, 2007 10:52 pm (UTC)
Keep it up! =D
It sounds like a lot has begun to fall into place for hmm.. need a new name - codename: Project Newbot? XD

That idea about the... BS Buster? "Bull Sh ... Bot-Sense Buster/Bot-Sensor Buster" to make Belfy undetectable to other robots sounds good ;)

And about what you said:
I need to set out the goals of each session more clearly (eg, "today, we're here to decide [this], and at the end of it, we'll have [this]"), as well.

Not a bad idea Michelle :)
Though at first glance, the wording seems a bit inflexible and possibly a little confronting - it somewhat feels like it's shutting the door on ideas inadvertently, and could flatten motivation prematurely. I think it'd be useful to keep a card up your sleeve, though (i.e. "at the end, we'll have [this]"), but in order to keep the door open for the closing day, push forth discussion to try to conclude the topic, like...

First phase: At the start, "Alright, let's nail this down [within 2 hours/by lunchtime/by 4pm]."
Second phase: When the time limit rolls up, begin closing the door: "Alright, let's put it together to finish it up." (This is also the opportunity to pull out the 'card up your sleeve' and begin finalization, if there's been little or no progress on the idea.)
Third phase: "Kay, so we've got [explanation]. Any problems/inconsistencies? no? okay - great work =D." And file away.

I suppose it's kind of a way to try to be passive-assertive. To build up momentum of motivation, rather than to slap and force motivation up to the ceiling. But it can be draining and patience-stretching on ya ^^; And it takes a bit of a risk, in that people's motivations snowball at different rates that we need to work with ^^; If ya decide to try this approach out, wish ya best of luck =D

Oh! If you're still curious as to why Max's axes differ from things like Maya's - here's my two cents on it =p

Programs that have organic/animation modeling features (eg: "bones" and all that), people model using references mainly in the front and side views. Given that 2D reference images only have the X and Y axes - the only common axis that both front and side images share is the Y axis (i.e. the "vertical axis"). In the side view, the X axis of the 2D reference image is technically the Z axis if we looked at it from 3 dimensions, in things like Maya/NovodeX. So to keep things consistent, it'd be logical to set the vertical axis as the common Y axis found in both reference images.

Make sense? ;P

Now, Autodesk's AutoCAD program is directed toward engineering purposes (and less for organic modeling like Maya/Max). Looking at how "reference images" are used in CAD development - their primary asset isn't really the front/side views. The first and foremost reference they build from is the floor plan of a building. And well - if the floor plan takes precedence over front/side views in CAD - then logically it'd make sense to make the X and Y axes the same as those of the 2D floor plan. Which leaves the Z axis to be the "vertical axis" - 3dSMax followed that axes system, and I'm guessing it's because, since architecture is older than 3D animation, CAD came first.
So if that's the case, then it'd be easier to grab source code from CAD, and modify/add to it to gear the program more for animation (hence creating 3dSMax). And I guess keeping the axes the same between AutoCAD and 3dSMax will give fewer headaches in the world of program compatibility, too.

Just my two cents ^^; Correct me if you disagree/find the real answer. But correct or not.. If it doesn't make sense, like it matters? The main question is how to deal with it ;P
And well, I say people work better when they're not curious about trivial things like "so why did they...?" So feed them a satisfactory conclusion until you get bored enough to go find the real answer ;P Regardless, most people don't -really- care for something so trivial after a while anyway ;P

So, hope this helps curb your curiosity a bit - and if other people remain curious (and if ya get what I mentioned here), you can pass on the info to them to extinguish that curiosity of theirs. =P

Catch up with ya again next time, keep up the awesome work =D!
Take care, be safe
hydrolysistn on August 1st, 2007 11:03 pm (UTC)
Oh! Also...

You said:
"Anyway, it means that we're going to have to turn our brains sideways when we're looking at models, and probably apply a rotation transform when we load everything, since NovodeX and Renderware both use Y as the vertical axis."

I dunno if you'd already tried it (though it does sound like you have ;P just in case, though) - maybe the exporter already accounts for that transformation to swap the Z with the Y axes if exporting from 3dSMax? I dunno - but it might be worth a shot to try exporting to see if it really does convert the axes or not, before spending time to write functions you may not really need.

If you've already tried - disregard what I added on this reply XD