Ideas for a new preset file format #4
-
Maybe if we want an archive of files, we don't really need to use Zip, because we don't need compression. Textures in JPEG or PNG format are already compressed, so zipping them wouldn't reduce the file size; it would only add CPU usage to unzip them, unless they're stored with compression level 0, which could be a good compromise. We could use something like tar instead. Another approach would be one big JSON blob with any textures base64-encoded. It would be a little annoying, but it would be one simple file, and since most presets don't have textures, it could be optimized for the common simple case: if you want a texture, just base64 it.
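Just to illustrate the JSON-blob option, here's a rough Python sketch; the field names are invented for illustration, not a schema proposal:

```python
import base64
import json

# Rough sketch of the "one big JSON blob" idea: a hypothetical preset
# with an optional base64-encoded texture. Field names are made up.
def write_preset(path, params, texture_path=None):
    preset = {"name": "example", "params": params}
    if texture_path:
        with open(texture_path, "rb") as f:
            # JPEG/PNG bytes are already compressed; base64 adds ~33% size
            # but keeps everything in a single plain-text file.
            preset["texture_png_base64"] = base64.b64encode(f.read()).decode("ascii")
    with open(path, "w") as f:
        json.dump(preset, f, indent=2)
```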
-
Absolutely! In Project M's case, this is a double win: not only does it serve visualization artists, who can easily read and edit these files, but it also helps on the back end by making tools (like the proposed editor) easier to build.
Agreed. Many older systems only support Zip out of the box, and of the few file managers with built-in archive support, some also only support Zip. No other format provides any tangible benefit, and the uncompressed Project M data is currently under 30 megabytes, so storing everything in memory isn't a problem (though of course not everything should need to be in memory all the time either).
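To sketch the compression-level-0 compromise mentioned earlier in Python (the file names are just examples):

```python
import zipfile

# Deflate the text files, but store already-compressed assets as-is so
# unpacking them costs no CPU.
with zipfile.ZipFile("preset.zip", "w") as zf:
    zf.write("preset.json", compress_type=zipfile.ZIP_DEFLATED)
    zf.write("texture.png", compress_type=zipfile.ZIP_STORED)
```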
I know the discussion on scripting (#5) is ongoing, but if at all possible, I'd like to see that implemented as script modules that could be extracted to a common staging folder and then shared between scripts for better performance (JIT) and a lower memory footprint. It also encourages good, reusable code to be created and improved by the community; benefits the artists who'd gain the most from these scripts; and avoids creating and maintaining separate solutions for each of these things. Of course, Project M could also ship any number of built-in reusable effects by default, but having artists and script writers provide their own seamlessly by bundling them into visualization archives would benefit the community as a whole and ultimately the end user too. If the reusable effects, visualizations and scripts could share a common architecture, I think that'd be better than maintaining a different solution for each.
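A minimal sketch of the sharing idea, assuming modules have already been extracted to a staging folder; every name here is an assumption, not a settled design:

```python
import importlib.util

# Hypothetical sharing of staged script modules: each (name, version) is
# loaded once, and the same module object is handed to every preset that
# depends on it, instead of re-parsing it per preset.
_module_cache = {}

def load_shared_module(name, version, staged_path):
    key = (name, version)
    if key not in _module_cache:
        spec = importlib.util.spec_from_file_location(f"{name}_{version}", staged_path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        _module_cache[key] = module
    return _module_cache[key]
```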
I'd suggest a single mandatory script entry-point file. Both Lua and Python have features for organizing a script into more than one file, and a single mandatory entry point is what the vast majority of engines use; if more files are needed, the author can manage that easily on their own.
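The check on the engine side would be trivial; a sketch, assuming the entry point is named `main.lua` (the actual name is up for discussion):

```python
import zipfile

# Minimal check for the single-entry-point rule: the engine looks for one
# well-known file name and ignores how the author organizes the rest.
def find_entry_point(archive_path):
    with zipfile.ZipFile(archive_path) as zf:
        if "main.lua" not in zf.namelist():
            raise FileNotFoundError(f"{archive_path} has no main.lua entry point")
    return "main.lua"
```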
I don't see this being an issue. First, how big is "drastic"? Bandwidth and disk space are cheaper than ever; some music players are hundreds of megabytes by themselves, and people watching 1080p YouTube videos are downloading a full gigabyte every 20 minutes. Second, since some of the files have versioning, files duplicated across Zips can be de-duplicated at startup by extracting only a single copy to a staging folder. People creating visualization collections could further de-duplicate by deleting the duplicates inside the archives, and a utility could be provided that does that automatically. It could even move the dependencies into their own unified archive, but that's probably over-engineering it.
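The de-duplication utility could be as simple as comparing content hashes across archives; a rough sketch:

```python
import hashlib
import zipfile

# Find files that are byte-identical across preset archives, so only one
# staged copy is ever needed. Purely illustrative.
def find_duplicates(archive_paths):
    seen = {}  # content hash -> first "archive:member" that contained it
    for path in archive_paths:
        with zipfile.ZipFile(path) as zf:
            for member in zf.namelist():
                if member.endswith("/"):  # skip directory entries
                    continue
                digest = hashlib.sha256(zf.read(member)).hexdigest()
                if digest in seen:
                    print(f"{path}:{member} duplicates {seen[digest]}")
                else:
                    seen[digest] = f"{path}:{member}"
```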
-
There seems to be no objection to using JSON as the basic data format, but let me support the idea anyway. I added a "visualization randomizer" idea to the wiki, which is something I'd be interested in developing, as I love randomizers and procedural generation. Not using a common data format would turn a relatively straightforward project into a much larger time investment: needing to both parse and output an arbitrary format could be as much work as the intended feature set itself, or more, especially if the reusable code for it can't easily be used from a lighter scripting language or whatever other language one might prefer. Meanwhile, every language has built-in features to convert its native data structures to and from JSON, so I wanted to make a statement about the importance of this aspect, even if it's not likely to be contested or replaced. I personally prefer YAML (or some other JSON superset) to JSON, but for the very same reasons, I think JSON is a fine choice too.
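To make that concrete, the whole serialization layer of a randomizer collapses into the standard library; the keys below are invented for illustration only:

```python
import json
import random

# Native data structures round-trip through JSON with no custom parser.
preset = {
    "name": f"random-{random.randrange(10**6)}",
    "wave": {"mode": random.choice(["line", "circle"]), "scale": random.random()},
}
text = json.dumps(preset, indent=2)    # generate a preset file
assert json.loads(text) == preset      # ...and read it straight back
```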
-
I'd say one of the main reasons for Milkdrop's huge success was the simple and portable file format its presets were shipped in: just a text file in INI format. While this is certainly good for a relatively "simple" visualizer with a fixed rendering process like Milkdrop's, anything more versatile would quickly hit the limits of a single text file.
Keeping a new preset format open and easy to edit and ship would be key, but it should be a bit more flexible. A preset should still just consist of a single file, so using an archive is mandatory. The easiest option here is ZIP, as it is well established, doesn't use any (still-)patented compression algorithms and can be read easily as a data stream without extracting the files anywhere. The only constraint is that access to the archive data requires random seeking, so the ZIP has to be held in memory or on a randomly accessible filesystem.
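For illustration, the in-memory approach in Python (file names are examples):

```python
import io
import zipfile

# ZIP needs random seeking, so the whole archive goes into a seekable
# buffer and members are read directly, with nothing extracted to disk.
with open("preset.zip", "rb") as f:
    buffer = io.BytesIO(f.read())
with zipfile.ZipFile(buffer) as zf:
    metadata = zf.read("preset.json")  # random access to any member
```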
I've also thought about making reusable effects, similar to what AVS provided in the form of DLLs. In addition to presets, the visualizer could support effect packs, which would provide a named and versioned library that can be used in presets, e.g. for rendering certain parametric geometry or shaders. Having reusable effects would contradict the "one file per preset" rule above, but if done properly, it would help preset authors a lot.
Combining the above, here's my suggestion:
Preset
A ZIP file containing a few mandatory files, plus any additional resources required for the preset.
Having meshes and textures inside the ZIP can cause it to get quite large; whether that's going to be an issue, we'll see.
Effects
Very similar to presets, but they may or may not contain code; an effect could just be a shader with some textures, or a mesh library.
Effect archives should also follow a specific naming pattern which must include the version. This is important because presets may need different, incompatible versions of the same effect. Versioning the file names would allow multiple versions of the same effect to be installed side-by-side.
The preset metadata file would then list its dependencies, so the visualizer can easily check that all required effects are available before loading the preset, and log an error and skip it if some dependency isn't fulfilled.
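A sketch of such a check, assuming effect archives are named `<name>-<major>.<minor>.<patch>.zip` (the exact pattern and metadata keys are assumptions):

```python
import re
from pathlib import Path

# Versioned file names let multiple versions of the same effect sit
# side-by-side in the effects directory.
EFFECT_NAME = re.compile(r"^(?P<name>[\w-]+?)-(?P<version>\d+\.\d+\.\d+)\.zip$")

def installed_effects(effect_dir):
    found = set()
    for path in Path(effect_dir).glob("*.zip"):
        match = EFFECT_NAME.match(path.name)
        if match:
            found.add((match["name"], match["version"]))
    return found

def check_dependencies(metadata, effect_dir):
    available = installed_effects(effect_dir)
    missing = [dep for dep in metadata.get("dependencies", [])
               if (dep["name"], dep["version"]) not in available]
    for dep in missing:
        print(f"missing effect {dep['name']} {dep['version']}; skipping preset")
    return not missing
```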
Possible Issues And Considerations
It may also be possible to package required effects inside preset archives. This would keep everything in a single file, but also drastically increase the size of the preset directory, as lots of code and data would be duplicated.
Editing and repackaging presets will also be more complicated, so we might have to put all the preset-file-related code into a separate library with a less restrictive license (e.g. MIT) to enable devs to use it in their applications to provide editing capabilities.
Another thing to take into consideration is future extensions to the file format. JSON already does a great job here, as additional keys won't affect existing parsers, although such presets may then not work properly. So the metadata file should also contain some kind of preset format version identifier to tell the visualizer what to expect, and possibly let it decline to load a preset if the file format is newer than the built-in parser can handle.
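A sketch of that version gate; the key name `format_version` is an assumption:

```python
# The parser advertises the newest format revision it understands and
# declines anything newer.
SUPPORTED_FORMAT_VERSION = 1

def can_load(metadata):
    version = metadata.get("format_version", 1)
    if version > SUPPORTED_FORMAT_VERSION:
        print(f"preset format v{version} is newer than supported "
              f"v{SUPPORTED_FORMAT_VERSION}; refusing to load")
        return False
    return True
```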