One of the key paradigms of modding in this community is the idea of working with individual files. When we reverse engineer, we look at specific blocks of data. When we document them, we create articles for each file and directory. And when we build modding tools, we build them to manipulate these specific files.
There are advantages to this paradigm, but limitations, too. Sometimes a single mechanic (say, limit breaks) is spread over several files, and a user must co-ordinate several tools. Tools for particularly important files can become bloated or difficult to use, offering a wide range of functionality when a modder probably only wants to edit one specific mechanic or element of his game (unless he's a casual tinkerer, in which case, the more options the better). Mods usually have to be distributed as a collection of small diffs and patches that end-users find fiddly to apply. Tools built to manipulate particular files usually support only a limited number of releases. Finally, the whole idea of working with individual files breaks down with games like FFIX, which completely strips filenames from the PSX ISO (as many later PSOne games do).
What I suggest is a completely different way of handling the process. Imagine a generic 'compiler / decompiler' (I use the terms loosely) that can transform PSX releases into plain-text files that 'describe' the features and the mechanics of the game. After modifying these plain-text files (perhaps XML-based?), our 'compiler' can turn them back into a playable PSX ISO.
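To make the idea concrete, here is a minimal sketch of what one of those plain-text 'descriptors' might look like, and how a tool could edit it. The element names, attributes, and values are entirely hypothetical -- no real schema or game data is implied:

```python
import xml.etree.ElementTree as ET

# A made-up descriptor for a single enemy record. In the proposed model,
# the 'decompiler' would emit text like this, and the 'compiler' would
# pack the edited values back into the game's binary record.
DESCRIPTOR = """\
<enemy id="0x2A" name="Goblin">
  <stats hp="33" attack="9" defence="6"/>
  <drops>
    <item name="Potion" chance="0.25"/>
  </drops>
</enemy>
"""

enemy = ET.fromstring(DESCRIPTOR)
stats = enemy.find("stats")

# A modder (or a GUI tool sitting on top) edits the plain text.
stats.set("hp", "50")

print(ET.tostring(enemy, encoding="unicode"))
```

Because the edit happens in plain text rather than in a compressed binary, any XML-aware tool (or a plain text editor) can make the same change.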
Why would this offer advantages? The benefits aren't obvious at first sight, but think about it:
- it makes it a lot easier to combine mods. If a file or data section is compressed, naively applying two patches on top of each other could be disastrous. But with plaintext mod 'descriptors', merging is a lot simpler. And when two mods clash in some way, it's a lot easier for the user to choose which parts of each he does or doesn't want, because he's comparing plaintext configurations, not compressed binary data.
- no issues around distributing complete mod files. They're just plain text. No proprietary data.
- tools and their developers can specialize. Currently, if a particular tool is inefficient or buggy at compressing / opening data, but has a great feature, users are stuck enduring the slowness and the crashing for the sake of the benefit. But when raw data handling and game manipulation are separate, tool developers can stick to their specialities.
- ensures open-source development. In the current model, if only one developer knows a data section well, but leaves the community / gets too busy / gets run over by a bus, the tool stagnates. By separating our tools, we spread our bets and reduce the risks of yet another tool being abandoned by its creator.
- mods become compatible with any and all versions of the game. If the compiler / decompiler can export to a different regional release, I can send my own mod - developed for UK PAL - to, say, a friend in South Africa. No issues.
- Very easy to document mods. If a mod is just an XML file, then all it takes is an XSLT file and a bit of CSS to create a full, glossy set of release notes.
- Easy-to-edit games. Even if a GUI tool doesn't exist, astute users can still fire up a text editor to make their changes.
- Single-sourced data. Imagine, as a translator, being able to go to a single resource file that lists all the terms consistently used in the UI. Imagine being able to transform all instances of the word 'dexterity' into a better translation. When we're producing a descriptive file with semantic, rather than binary data, we encourage those sorts of workflows.
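The single-sourcing point above can be sketched in a few lines. Assuming a hypothetical descriptor that stores UI strings as plain text (the keys and format here are invented for illustration), a translator could retranslate one term everywhere it appears in a single pass:

```python
import xml.etree.ElementTree as ET

# Invented example data: a string table as the decompiler might emit it.
UI_STRINGS = """\
<strings>
  <entry key="stat_dex" text="Dexterity"/>
  <entry key="help_dex" text="Dexterity raises your hit rate."/>
</strings>
"""

root = ET.fromstring(UI_STRINGS)

# Replace the term in every string entry at once -- something that is
# painful to do safely across dozens of compressed binary files.
for entry in root.iter("entry"):
    entry.set("text", entry.get("text").replace("Dexterity", "Agility"))

print(ET.tostring(root, encoding="unicode"))
```

The same pattern would cover the documentation bullet too: the edited XML is exactly what an XSLT stylesheet would consume to produce release notes.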
That's my thinking, anyway.
The question is: is this really a better model of working, and is it even feasible?