I want to generate training sets of pictures for an AI experiment based on OpenAI Gym/Universe, and I have thought a lot about generating them automagically :-)
So far the idea was the following:
Record a path through a minecraft world that has a zombie in it.
Render one version of the "tour" normally. This I would call the "real world version".
Render another version with all textures set to total black, except the zombie textures, set them to all-white.
This version is the "masked world version".
Now I can pick single frames from each version (a full video would be much too expensive as input for a neural network) and train the network on what "zombie" means. From there on it actually gets easier every day, with TensorFlow and such in the making.
Excuse my lengthy explanation, if you are not bored yet, I may get to the question:
Is there a way to extend Replay Mod in a way that I wouldn't have to set all the textures by script (and get horrible artifacts in the images)?
In the end, I would love to write an extension where I can ask for every pixel: is it part of a zombie? If yes, set it to true; if not, to false. A 2-dimensional boolean array, so to speak. That would be incredibly awesome for me, as I could speed up the whole process immensely and finally approach meta scripting (that's still a long way off, but what if the AI could generate its own training sets? Like one for zombies, one for dogs, one for lakes (esp. the lava kind! :-))
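To make the goal concrete, here is a minimal sketch in Python of the post-processing step I have in mind, turning one frame of the "masked world version" into that 2-dimensional boolean array. The frame format (rows of (r, g, b) tuples) and the brightness threshold are my own assumptions for illustration, not anything Replay Mod actually outputs:

```python
# Sketch: convert a frame of the masked render into a 2D boolean array,
# True where the pixel belongs to the zombie. Assumes the masked render
# is near-black everywhere except the zombie, which is near-white.

def frame_to_mask(frame, threshold=128):
    """frame: list of rows, each row a list of (r, g, b) tuples.
    Returns a list of rows of booleans (True = zombie pixel)."""
    return [
        [all(channel >= threshold for channel in pixel) for pixel in row]
        for row in frame
    ]

# Tiny 2x3 example frame: one white "zombie" pixel in an otherwise black image.
black, white = (0, 0, 0), (255, 255, 255)
frame = [
    [black, white, black],
    [black, black, black],
]
mask = frame_to_mask(frame)
# mask == [[False, True, False], [False, False, False]]
```

Of course, if the mod could hand me this array directly per pixel, I could skip the texture-swapping render entirely and avoid the thresholding artifacts.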
And if such a thing is possible: where do I start reading/getting involved?