Saturday, December 31, 2011

Replicating Game States - revised

It's been a long time since I last talked about my Undo system, whose structure of storing actions that can be transmitted over the network should also enable multiplayer. I don't want to talk about that directly today; you'll see why.

One thought in particular has gone through my mind in the last weeks: replicating a state across the network properly is nothing specific to Magic - so why do I implement it in the Core of Laterna Magica? I thought about it, and there doesn't seem to be a specific reason for doing so. Another consequence of that thought was that there are probably complete libraries out there which do exactly what I want. Unfortunately, that's not the case, so I'm stuck/blessed with implementing it myself.

I haven't really gotten into development yet. I have a rough skeleton, but no really fancy functionality. Still, I'm having my thoughts: I think stepping away from the application's needs and looking at the general requirements of such a library helps in developing a better piece of software. I didn't do this for my current undo system, which I find kind of disappointing; normally I don't have problems spotting such generalizations.

Okay, back to topic: what does a library for state replication need?
  • There needs to be a notion of what has happened on a higher, application level: transactions
  • Transactions consist of atomic modifications, which are the core of the state (a rough interface sketch follows this list)
    • Optionally, they may also consist of other sub-transactions
  • Transactions and modifications must be transmittable over the network. If the whole state is represented as a tree of sub-transactions under a root transaction, then unfinished transactions must of course be transmittable as well. Otherwise, it might be enough to transmit finished transactions. I think the first is generally more flexible. It might be a good idea to give the user a chance to mark transactions as atomic, so that such a transaction is only transmitted once it is finished.
  • Probably the most interesting requirement is that, of course, after transmitting the state, both/all states have to be the same!
  • The system should be able to detect when there are state conflicts, i.e. when a transmitted transaction can't be carried out locally.
    • While we're speaking of it, it would be nice (but probably impossible, so take this as a utopian remark) to resolve/merge such conflicts.
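To make the first two points concrete, here is a minimal sketch of what such interfaces might look like. The names and methods are purely illustrative, not an actual API:

import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

/** An atomic change to the state, e.g. "set variable foo to 1". */
interface Modification extends Serializable {
    void apply();  // carry out the change locally
    void revert(); // take it back, e.g. for undo or conflict handling
}

/** A higher-level unit of work: a tree of modifications and sub-transactions. */
class Transaction implements Serializable {
    private final List<Object> children = new ArrayList<Object>(); // Modifications and Transactions
    private boolean atomic;   // if true, transmit only once finished
    private boolean finished;

    public void add(Modification m) { children.add(m); }
    public void add(Transaction t)  { children.add(t); }
    public void setAtomic(boolean atomic) { this.atomic = atomic; }
    public void finish() { finished = true; }
    public boolean isTransmittable() { return !atomic || finished; }
}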
Now guess what I have an example for! Of course for the "most interesting" one in my list! It might seem at first that by defining modifications, and making them granular enough to catch all actions in a system, there is no way that states may get out of sync. Unfortunately, that's not true. Let's take a simple case. This case actually only has a theoretical problem, but it's easier to illustrate and serves the purpose. Our state consists of two variables, foo and bar. The user may only set foo, and when he does so, the program automatically changes bar. Now consider this:
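In (hypothetical) Java, the state could look like this; the important part is that foo's setter also changes bar:

class GameState {
    private int foo, bar;

    // The user-facing setter: setting foo automatically adjusts bar.
    public void setFoo(int value) {
        foo = value;   // recorded as modification #1: "foo = value"
        setBar(value); // recorded as modification #2: "bar = value"
    }

    public void setBar(int value) {
        bar = value;
    }
}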


When the state replication system transmits these two modifications, the receiver performs its own set operation on bar as soon as foo has changed. Then the original set operation on bar arrives as well, setting bar a second time (fortunately to the same value).

Until now, I thought that this particular case was just me wanting to be clean. Now that I've written it down, I see more problems, real ones: if we're able to detect conflicts, this shouldn't even go through, because the incoming bar modification expects to be the second modification but is really the third. And even if we don't detect conflicts, the extra modification in State 2 needs to be transmitted back to State 1. Even if that doesn't result in errors, which is likely, it will at least waste network bandwidth.

Now for the real problem I spotted here before: let's replace "bar = 1" with "put a token onto the battlefield". Setting a number multiple times is redundant, but creating an object is not. There is a very definite difference between one and two tokens, you know!

Now, let's solve these problems. There are basically two patterns that can lead to this problem: a complex setter, and events.

In the first case, the setter for foo includes code that calls the setter of bar. The problem here is that our replication library can't know that setting foo does more than it says: it actually sets foo and bar. So, refactor foo's setter so that it does only what it says, and nothing more. Hide this from the user by adding another method, say doSomething(), that does what he previously did by setting foo.
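The refactoring might look like this (again a sketch with made-up names):

class GameState {
    private int foo, bar;

    // Plain setters: each does exactly what it says, nothing more.
    // The replication library may safely replay these.
    public void setFoo(int value) { foo = value; }
    public void setBar(int value) { bar = value; }

    // The application-level operation the user previously performed by
    // setting foo. Only local user actions go through here; the
    // replication library never calls it.
    public void doSomething(int value) {
        setFoo(value);
        setBar(value);
    }
}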

The second is events. In this case, the change to foo fires an event that leads to bar being changed as well. Here, the simple answer that is not so simple to implement is: don't fire events while replicating the state. The more complex answer that is simpler to implement is analogous to the first case: don't write events that fire on state changes and themselves perform state changes. Instead, fire a new event in doSomething() that triggers the change. As the state replication won't call doSomething(), everything is fine.
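Sketched with hypothetical listener interfaces (Java 5 style, so anonymous classes instead of lambdas):

interface FooChangedListener { void fooChanged(int newValue); }
interface SomethingDoneListener { void somethingDone(int value); }

class Listeners {
    GameState state; // the state from the sketches above

    // Problematic: reacting to a raw state change with another state change.
    // When the replication library replays "foo = value", this fires again.
    FooChangedListener bad = new FooChangedListener() {
        public void fooChanged(int newValue) { state.setBar(newValue); }
    };

    // Better: reacting to the application-level event fired only from
    // doSomething(), which the replication library never calls.
    SomethingDoneListener good = new SomethingDoneListener() {
        public void somethingDone(int value) { state.setBar(value); }
    };
}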

Both solutions shown here require changes in the application, not in the library. This might seem bad, but it's actually logical: the library may sanely assume that variables change and objects are created, and that it may perform such changes itself as well. If the state does not fulfill the contract of just containing values, but instead employs complex logic that the library can't predict, the task is impossible. The library needs some way to change values without triggering application logic.
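One conceivable way for the library to get such raw access (just a sketch, not necessarily what I'll do) is reflection, which bypasses setters and with them any application logic:

import java.lang.reflect.Field;

class RawModification {
    // Sets state.foo = value directly, without calling setFoo().
    static void applyRaw(GameState state, String fieldName, int value)
            throws NoSuchFieldException, IllegalAccessException {
        Field field = GameState.class.getDeclaredField(fieldName);
        field.setAccessible(true);   // bypass private access
        field.setInt(state, value);  // no setter, no events, no extra logic
    }
}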

Thanks for staying with me, and a happy new year to all of you!

    Friday, December 30, 2011

    Java 5 compatibility is hard

    From the times when I read the bug thread of Forge (about a year ago or so), it seemed that a lot of users, especially those on Macs or at work, were still using Java 5. I have no idea how it is now, but I still try to keep my code and compiled classes Java 5 compatible for exactly that reason.

    Usually, it works out one way or another, but not today. I wanted to try JGroups (a story for another time, after I actually succeed at it), but even the releases that are advertised (if you can call it that) as Java 5 compatible seem to be compiled in the Java 6 binary format. Since only part of the feature set I'm trying to implement depends on JGroups, I wanted to at least create compatible class files, even if using those features will fail on a Java 5 runtime.

    There are basically two inputs to the compiler: my own Java 5 compatible source code, and the already compiled Java 6 compatible bytecode. My output of choice was Java 5 compatible bytecode. After all, this output does not contain anything that isn't Java 5 compatible, so I thought I was good to go. Using a Java 5 compiler will of course fail, because it can't read the necessary classes, but using a Java 6 compiler with compatibility settings should work...

    There are a few language features new in Java 5, like generics, annotations, enums and for-each loops, so it sounds reasonable that compiling Java 5 source code to Java 1.4 bytecode won't work. But Java 6 also has a new bytecode format: it supports a new verification mechanism, and while the compatibility notes say that the compiler outputs this format only "by default", there is no way (that I have found, at least) to change it. The most obvious way of setting the compiler arguments, javac -source 1.6 -target 1.5, gives an error that source release 1.6 requires target release 1.6 - which exactly contradicts what the compatibility notes say.

    It turns out that the Java 6 compiler can take Java 6 bytecode input even if the source setting is 1.5. That there's an error for 1.6 + 1.5, even though the Java language hasn't changed between the two versions, is kind of absurd in my opinion... well, better an absurd solution than none at all...
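    In short, the two invocations look like this (jgroups.jar standing in for the Java 6 compiled dependency):

    javac -source 1.6 -target 1.5 -cp jgroups.jar MyClass.java
        (fails: "source release 1.6 requires target release 1.6")

    javac -source 1.5 -target 1.5 -cp jgroups.jar MyClass.java
        (works: Java 5 source compiled to Java 5 class files, with the
        Java 6 bytecode of JGroups on the classpath)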

    Tuesday, December 27, 2011

    Getting more out of Maven

    When I switched to Maven, I knew I wasn't using all of its potential. I'm sure I still don't, but I've learned much nonetheless. For the last months, I worked (for pay, that is) on another software project. The necessary steps to build the software were nontrivial, and I had the task and thus the time to research how to do it. Over the months I worked there, the project's POM became a stable, working thing, which indeed is nice when trying a relatively new technology. Speaking of new technologies, I finally managed to learn a little LaTeX when we had to write the docs in the end... but that's another story. I wonder if I'll ever be able to bring that into a Magic context.

    Now that my internship/scholarship has ended, I had a little time and wanted to clean up the messy POMs I previously created. My first step was to update my Eclipse version. 3.7 Indigo comes with a new version of the Maven plugin. While I was sceptical about it during my internship, I found that it's pretty usable now. It even has some nice features that make it feel superior to the old one, e.g. the ability to jump to the definitions of replacement variables, plugins, dependencies etc., and the ability to view the "effective" POM. The old version had that too, but I was never really sure it worked... it could be that my POMs were just so simple that the physical and effective POMs were the same... Whatever; what I experienced was that viewing the effective POM of your project actually shows you how many configuration options there are that you could take a look at.

    I have the feeling that when one tries a tool to automate something, it first seems very nice and practical. Unfortunately, many tools show the "bad behavior" that once the amount of things to manage grows, even the tool itself becomes confusing.
    The same happened with Maven. The only feature I previously used was declaring a project's dependencies. As the Laterna project grew by separating parts of the software into different projects, there were now many components with dependencies between them. Let's say I discover a bug in the Config file format project and increment its version number. If two other modules depend on the file format, I have to change three things in total: the file format's version number, and the two modules' dependency version numbers.
    As an alternative, I could try to keep the dependency version in only one module: since the second module depends on the first (let's just assume that), it has a transitive dependency on the new version. But that's not a clean way of doing it either, at least if the second module really needs Config on its own. If the first module were ever changed so that it doesn't need Config any more, I would then have to add the dependency to the second one, and that's not too nice.

    Well, luckily, there are two features of Maven which together do the job. The first is the "management" sections: these declare configurations (such as versions) for dependencies, plugins and other things, without making them effective. Together with the second feature, parent projects, this becomes really useful.
    Every project can have a parent project from which it inherits its configuration. The dependencyManagement section allows me to specify the dependency version for every library in the parent POM, without mandating that every child POM depends on all those libraries. A child POM can then add just the identification of a library in its dependencies section, and all the configuration from the management section applies to it.
    The same goes for plugins: I can specify once that if a project uses the antlr plugin, it wants to invoke antlr's code generator, so I don't have to duplicate that configuration over and over.
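    As a rough sketch (group and artifact IDs invented for illustration), the parent POM manages the version:

    <!-- parent POM -->
    <dependencyManagement>
      <dependencies>
        <dependency>
          <groupId>net.slightlymagic.laterna</groupId>
          <artifactId>config</artifactId>
          <version>1.1</version>
        </dependency>
      </dependencies>
    </dependencyManagement>

    and each child that actually needs the library only declares the dependency itself:

    <!-- child POM -->
    <dependencies>
      <dependency>
        <groupId>net.slightlymagic.laterna</groupId>
        <artifactId>config</artifactId>
        <!-- no version here: it is inherited from dependencyManagement -->
      </dependency>
    </dependencies>

    So bumping the Config version means changing exactly one number, in the parent.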

    So, I learned something new and exciting about Maven. I really like its design: how the predefined lifecycle is combined with the ability to add any plugin to do whatever job is necessary, and how not only is there a way to do the necessary things, there's even a way to specify them in a clean, reusable manner.

    Monday, October 24, 2011

    The Oracle Parser

    Besides the structural changes to my software, probably the most visible evolution was the creation of a grammar to parse card text. To test my work, I use Arch's Magic Data, version 2011-10-01, from which I extracted the abilities and replaced card names with a "~".
    Based on that, there are currently 19710 abilities (including duplicates) in Magic, of which I'm able to parse 5661. Of these, 4282 are keyword abilities (I haven't added the following keyword abilities yet: Landwalk, Protection, Affinity, Bands with others, Offering, Splice, Typecycling). That makes a total of 1379 non-unique, non-keyword abilities I currently support. It sounded better before I calculated the numbers for this post :S

    Below is a diagram showing the evolution of my parser; the builds are made on a new-feature-available basis, and I usually deleted the statistics of any build that broke something that worked before.


    For rough guidance: builds 1 through 13 were made on October 18th, #14 to #19 on the 19th, #20 to #28 on the 20th, and the rest on the 21st and 22nd.

    Between #10 and #17, you can see how I added the keyword abilities, sometimes only a few, sometimes a lot.
    The next ten builds seem to have achieved little, although together they added support for a total of 300 abilities. The changes included generalizing costs and effects, so that both "Discard a card: do something." and "Pay something: Discard a card." worked; additional costs: "As an additional cost to cast ~, pay something."; generalized object matching for effects like "Counter target instant or sorcery spell."; and even modal effects like "Choose one — ~ deals 3 damage to target creature; or ~ deals 3 damage to target player."
    The small drop of 103 abilities that follows was caused by a bug that accepted things such as "Counter target spell. You gain 3 life.", although gaining life was not yet supported: if there was a second sentence that could not be parsed, it was silently dropped.
    Up to the present, I've mostly extended object matching for things like "nonblack artifact creatures", generalized abilities (instead of only "Draw a card", I now also support "Target player draws two cards" and "each opponent draws X cards"), and added easy new ones like life gain, life loss and paying life, for a total of 257 newly supported abilities.

    But what is parsing abilities worth without giving my program the opportunity to understand them? Remember, what I did is parse rules text, not (yet) support new cards/abilities in Laterna Magica. Out of the unstructured text that is Magic rules text, the parser creates a tree structure that is easier for software to work with. To show a few selected supported abilities:
    "As an additional cost to cast ~, discard a card."
    The "you" shows which player it is who discards a card. An alternative could be "target player".
    "Choose one — Counter target red spell; or destroy target red permanent."
    This one is a "SPELL_EFFECT": not an ability, but a part of a spell's resolution effect.
    As you see, "Choose two" and "Choose X" are equally supported.
    The "counter" and "destroy" effects both have their own targets that must be red spells or red permanents, respectively. The logical "and" shows that both conditions are necessary.
    "When ~ enters the battlefield, destroy target nonartifact, nonblack creature."
    The trigger is actually not so nice yet. I will restructure this so that it's not "ETB SELF" but "ChangeZone ANY Battlefield SELF" or something like that.
    The destroy effect, again, uses logical predicates, this time including negations.
    "Suspend 4—{1}{U}"
    The keywords are all pretty similar. "KEYWORD" is a marker; neither "ACTIVATED_ABILITY" nor "STATIC_ABILITY" nor anything else is universally applicable to keywords. The suspend keyword has a number of counters and a cost.

    That's it for now. I hope you get something out of this post!

    Friday, October 21, 2011

    What has gone on - Part 2

    Okay, I'm back, and since Google Code nowadays also supports Git, I stayed with it. The wretched old structure, where LaternaMagica resided in the same SVN repository as diverse libs, is now also gone, as Google Code seems to support multiple repos per "project" now. So you can check out and commit changes to the different libraries without incrementing the revision numbers of the other projects. And for everyone too lazy to explore the multitude of individual projects, here is an overview.

    What has gone on - Part 1

    A lot, actually! Ever since the beginning of August, I've found some time again to work on Laterna Magica. In my time "away" I have grown fond of the Git version control system, using it together with SmartGit.
    Two of the things I like about it: branching is not only central to Git, but also easily done, and you quickly lose your fear of it. The second is that it's distributed, which was important because I have no private SVN server available where I work now. A nice extra is that many operations are just so much faster than with SVN, because branching, history, committing etc. are all local. The downside is that, even though I try to be a good kid and commit and document my changes, I am the only one who sees them...
    Which has to be changed! I'm going to upload my projects to a Git hosting site now and then come back to post a little about my changes!

    Thursday, January 6, 2011

    Forward and backward chaining

    In the context of rule-based systems, programming means creating rules which are applied by the engine to a fact base. Forward and backward chaining describe the two possible ways the engine may achieve this. I found a neat explanation here.

    Forward chaining starts with the facts: if a fact which triggers a rule exists, the rule is activated and may in turn modify other facts, and so on. A possible rule could be "if the floor is dirty, clean it"; it specifies when to do something.
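    In Drools, the engine mentioned below, that floor rule might look roughly like this (the Floor fact class is invented for the example):

    rule "clean dirty floors"
    when
        $floor : Floor( dirty == true )
    then
        // reacting to the fact; updating it may activate further rules
        modify( $floor ) { setDirty( false ) }
    end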

    Backward chaining starts with a goal and tries to find the rules necessary to achieve it. The engine goes back through the chain of causality to find the first step. A backward chaining rule could be "to achieve a tidy floor, clean it"; it specifies how the goal can be reached, but doesn't explicitly state when to do it.

    Both mechanisms are of course constantly used by humans, but for machines the reasoning seems to be very different; otherwise, most rules engines would support both.

    Forward chaining can be used for implementing the game rules, like triggered abilities and possibly state-based actions. Even activated abilities could be handled by modelling them as a sort of triggered ability which "triggers" on playing it.
    Characteristic-changing effects can be handled efficiently, because a rules engine watches its facts and can determine whether it's necessary to recalculate characteristics.
    This is actually the main reason I want to use a rules engine. My current approach recalculates characteristics every time they are read, which... well, I'm repeating myself. By getting rid of this need, I hope to have easier access to the core engine from other parts, like AI, GUI and network.

    Backward chaining is interesting especially for the AI: backward chaining tries to find a way from a current state to a goal, which is exactly what's done in a game. Drools, the rules engine I'm most likely going to use, does not support backward chaining yet, but the project plans to add it in a future release. Since the AI will be modular, such a backward chaining AI could be supported once it's possible.