Image File vs Memory Size
When it comes to GS and memory allocation for image files, there seem to be two conflicting views. The GameSalad Cookbook makes the rather bizarre (to me at least) statement:
"Essentially, iOS will automatically scale your videos to the nearest of a set of pixel ratios."
I find it a little bizarre given that GS doesn't support videos, no mention of them appears prior to this, and iOS doesn't scale your non-existent videos!? What I think the Cookbook is attempting to say is that iOS allocates (rather than scales) a section of memory to your images (rather than your videos) according to the following fixed 'square' units:
16 x 16 / 32 x 32 / 64 x 64 / 128 x 128 . . . etc.
The example given is:
" . . . if your image is 50x30 pixels, it will take up the same memory as a 64x64 image."
And further illustrated with . . .
" . . . if you have an image that is 130x20 pixels, you may wish to scale the one side down by 2 pixels, or it could take as much as 256x256.
Ok . . . so far so good . . . that all makes sense. There is also a video: "The video will hopefully help clarify this issue to help you optimize your artwork!"
But the video doesn't really clarify things; in fact it says something very different: when it comes to memory allocation, the dimensions of an image are individually quantized to the next largest number in the 16 / 32 / 64 / 128 . . . series.
The example given is: 30 x 66, the video says that 30 x 66 will be held in memory as 32 x 128. But the logic followed in the Cookbook says 30 x 66 will be held in memory as 128 x 128.
The video gives a second example: 128 x 600. The video says that 128 x 600 will be held in memory as 128 x 1024, whereas the Cookbook's logic would see a 128 x 600 pixel image held in memory as a 1024 x 1024 image.
One source is saying that iOS memory allocation is fixed to 'square' power-of-two sizes (16 x 16 / 32 x 32 / 64 x 64 / 128 x 128 . . . ), whereas the second source is saying that iOS memory allocation treats each dimension individually (so we can have 64 x 32 / 64 x 16 / 128 x 64 / 128 x 32 . . . etc.).
I'm at the stage of optimizing my images, and as you can imagine there is a large discrepancy between the two: one view would tell you that a 500 x 12 image steals a 512 x 512 piece of memory, while the other claims it would only steal 512 x 16. As you can see, it really makes a difference.
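To make the discrepancy concrete, here's a quick Python sketch of the two readings. The function names are my own, and I'm assuming the rounding sequence bottoms out at 16, since that's where the quoted list starts:

```python
def next_pow2(n, floor=16):
    """Round n up to the next power of two (16 / 32 / 64 / 128 . . .).
    The floor of 16 is an assumption based on the quoted sequence."""
    p = floor
    while p < n:
        p *= 2
    return p

def square_rule(w, h):
    """Cookbook reading: both sides snap to one shared power of two."""
    s = next_pow2(max(w, h))
    return s, s

def per_dimension_rule(w, h):
    """Video reading: each side snaps to its own power of two."""
    return next_pow2(w), next_pow2(h)

# All the examples quoted above:
for w, h in [(50, 30), (130, 20), (30, 66), (128, 600), (500, 12)]:
    print(f"{w}x{h}: square rule -> {square_rule(w, h)}, "
          f"per-dimension rule -> {per_dimension_rule(w, h)}")
```

Run against the quoted examples, the square rule reproduces the Cookbook's figures (64x64, 256x256 . . . ) and the per-dimension rule reproduces the video's (32x128, 128x1024 . . . ) - which is exactly the disagreement.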
Q: Does anyone know which is correct?
"Essentially, iOS will automatically scale your videos to the nearest of a set of pixel ratios."
I find it a little bizarre given that GS doesn't support videos, no mention of them appear prior to this and iOS doesn't scale your non-existant videos !? What I think the Cookbook is attempting to say is that iOS allocates (rather than scales) a section of memory to your images (rather than your videos) according the following fixed 'square' units:
16 x 16 / 32 x 32 / 64 x 64 / 128 x 128 . . . etc.
The example given is:
" . . . if your image is 50x30 pixels, it will take up the same memory as a 64x64 image."
And further illustrated with . . .
" . . . if you have an image that is 130x20 pixels, you may wish to scale the one side down by 2 pixels, or it could take as much as 256x256.
Ok . . . . . so far so good . . . that all makes sense . . . there is also a video - "The video will hopefully help clarify this issue to help you optimize your artwork!".
But the video doesn't really clarify things, in fact it says something very different, it says that when it comes to memory allocation the dimensions of an image are individually quantized to the next largest number (in the 16 / 32 / 64 / 128 . . . system).
The example given is: 30 x 66, the video says that 30 x 66 will be held in memory as 32 x 128. But the logic followed in the Cookbook says 30 x 66 will be held in memory as 128 x 128.
The video gives a second example: 128 x 600, the video says that 128 x 600 will be held in memory as 128 x 1024. Whereas the Cookbook's logic would see a 126 x 600 pixel image held in memory as a 1024 x 1024 image.
One source is saying that iOS memory allocation is fixed to 'square number' pixel values (16 x 16 / 32 x 32 / 64 x 64 / 128 x 128 . . . ) whereas the second source is saying that iOS memory allocation treats each dimension individually (so we can have 64 x 32 / 64 x 16 / 128 x 64 / 128 x 32 . . . etc etc).
I'm at the stage of optimizing my images and as you can imagine there is a large discrepancy between the two (where one would tell you 500 x 12 would steal a 512 x 512 piece of memory, the other is claiming it would only steal 512 x 16 - as you can see it really makes a difference)
Q: Does anyone know which is correct ?
Comments
@SlickZero
Yep, SlickZero is absolutely right. I just did a rather boring hour of testing with a 100-frame image sequence (at 512x512 / 512x256 / 512x129 / 512x128 / 512x65 / 512x64 . . . and so on), and without a doubt the video's (TShirtBooth's) interpretation is correct and the Cookbook is mistaken. The figures from the viewer's stats readout (when viewing on an iPad and iPhone) were unambiguous:
512x512 (100 frames) used up 15 MB
512x256 (100 frames) used up 7.5 MB
512x129 (100 frames) used up 7.5 MB
512x128 (100 frames) used up 3.8 MB
And so on . .
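Assuming the stats readout scales linearly with the allocated pixel area, here's a minimal sketch that predicts those measurements from the 512x512 baseline (the floor of 16 and the linear-scaling assumption are mine):

```python
def next_pow2(n, floor=16):
    p = floor
    while p < n:
        p *= 2
    return p

# Scale the measured 512x512 baseline (15 MB for 100 frames) by the
# ratio of power-of-two-rounded areas, rounding each dimension separately:
baseline_mb, baseline_area = 15.0, 512 * 512
for w, h in [(512, 256), (512, 129), (512, 128), (512, 64)]:
    rw, rh = next_pow2(w), next_pow2(h)
    mb = baseline_mb * (rw * rh) / baseline_area
    print(f"{w}x{h} -> stored as {rw}x{rh}, predicted ~{mb:.1f} MB")
```

The 512x129 row is the giveaway: under the Cookbook's square rule it would round to 512x512 and read roughly 15 MB, but it measures 7.5 MB - exactly the per-dimension prediction for 512x256.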
So - our hypothetical 500 x 12 pixel image is allocated 512 x 16 pixels' worth of memory (rather than 512 x 512).
So where the Cookbook says ". . if your image is 50x30 pixels, it will take up the same memory as a 64x64 image" it should really say ' . . . it will take up the same memory as a 64x32 image'.
And where it says: "if you have an image that is 130x20 pixels, you may wish to scale the one side down by 2 pixels, or it could take as much as 256x256", it should really say: '. . . or it could take as much as 256x32' - quite a big discrepancy!
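To put a rough number on that discrepancy, here's a sketch of a per-frame footprint estimator. texture_bytes is my own hypothetical helper, and the 4 bytes per pixel (uncompressed 32-bit RGBA) figure is an assumption, not something the stats readout confirms:

```python
def next_pow2(n, floor=16):
    p = floor
    while p < n:
        p *= 2
    return p

def texture_bytes(w, h, bytes_per_pixel=4):
    """Rough per-frame footprint: per-dimension power-of-two rounding,
    times an assumed uncompressed 32-bit RGBA format."""
    return next_pow2(w) * next_pow2(h) * bytes_per_pixel

# The Cookbook's 130x20 example under each interpretation:
print(texture_bytes(130, 20))  # per-dimension: 256 x 32  ->  32768 bytes
print(256 * 256 * 4)           # square rule:   256 x 256 -> 262144 bytes (8x more)
```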