What does (0 << 12) mean in Swift?
A construct like that is meant as either a placeholder or as documentation of a no-longer-supported feature. Since 0 << 12 evaluates to 0, OR-ing (or adding) kCGBitmapByteOrderDefault into a bitmask yields the same value as before; the constant exists purely for documentation.
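A minimal Swift sketch of that point (using the standard CoreGraphics names; the alpha-info value chosen here is just for illustration):

    import CoreGraphics

    // kCGBitmapByteOrderDefault is (0 << 12) == 0, so OR-ing it into a
    // bitmask is a no-op: the combined value is unchanged.
    let alphaOnly = CGImageAlphaInfo.premultipliedLast.rawValue
    let withDefault = alphaOnly | CGBitmapInfo.byteOrderDefault.rawValue
    assert(withDefault == alphaOnly)  // identical; the constant is documentary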
If you look at all of the relevant values, you see:
    kCGBitmapByteOrderMask     = kCGImageByteOrderMask,
    kCGBitmapByteOrderDefault  = (0 << 12),
    kCGBitmapByteOrder16Little = kCGImageByteOrder16Little,
    kCGBitmapByteOrder32Little = kCGImageByteOrder32Little,
    kCGBitmapByteOrder16Big    = kCGImageByteOrder16Big,
    kCGBitmapByteOrder32Big    = kCGImageByteOrder32Big
The byte-order information lives in the bits covered by kCGBitmapByteOrderMask, which is 0x7000 (i.e. the three bits after you shift over 12 bits: bits 12-14).

0 << 12 is just a very explicit way of saying "the bits, after you shift over 12 bits, are 0". Yes, 0 << 12 is actually 0, but it makes explicit that kCGBitmapByteOrderDefault does not mean the whole CGBitmapInfo value is zero (there can be other meaningful, non-zero data in the first 12 bits), but only that the bits after the first 12 are zero.

So, in short, the << 12 is not technically necessary, but it makes the intent more explicit.
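A short Swift sketch of that distinction (standard CoreGraphics names; the alpha value is illustrative):

    import CoreGraphics

    // A CGBitmapInfo with meaningful data in the low bits (the alpha info)
    // but no byte-order bits set.
    let info = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedFirst.rawValue)

    // Wrong test: the whole value is non-zero because of the alpha bits.
    let wholeValueIsZero = info.rawValue == 0

    // Right test: mask off everything outside bits 12-14, then compare.
    let byteOrderBits = info.rawValue & CGBitmapInfo.byteOrderMask.rawValue
    let isDefaultOrder = byteOrderBits == CGBitmapInfo.byteOrderDefault.rawValue

    print(wholeValueIsZero, isDefaultOrder)  // false true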
Per the Apple documentation for the byte-order constants:

"The byte order constants specify the byte ordering of pixel formats. ... If the code is not written correctly, it's possible to misread the data which leads to colors or alpha that appear wrong."

The various kCGBitmapByteOrder constants mostly map to similarly named constants in CGImageByteOrder, which does not have a "Default"; those values are covered in detail in the corresponding docs. The one you asked about is the default, which, as you noted, bit-shifts 0 (which is still 0), but as Rob notes, the bits before and after still matter.
What you were missing is the other options:
    kCGBitmapByteOrder16Little = (1 << 12)   // 16-bit, little endian format.
    kCGBitmapByteOrder32Little = (2 << 12)   // 32-bit, little endian format.
    kCGBitmapByteOrder16Big    = (3 << 12)   // 16-bit, big endian format.
    kCGBitmapByteOrder32Big    = (4 << 12)   // 32-bit, big endian format.
These values differ depending on whether the image is 16-bit or 32-bit, and on whether the least- or most-significant byte comes first. (0 << 12) follows the same pattern of shifting by 12. And, as Rob pointed out, the first 12 bits (and any that follow) carry meaning of their own, so choosing one of these options is interpreted differently than using the "Default".
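As an illustrative check in Swift (assuming the raw values quoted above from Apple's headers), the shifted constants line up, and a typical use combines an explicit byte order with alpha info when creating a bitmap context:

    import CoreGraphics

    // The shifted values from the list above, via the Swift-side constants.
    assert(CGBitmapInfo.byteOrderDefault.rawValue  == 0 << 12)  // 0x0000
    assert(CGBitmapInfo.byteOrder16Little.rawValue == 1 << 12)  // 0x1000
    assert(CGBitmapInfo.byteOrder32Little.rawValue == 2 << 12)  // 0x2000
    assert(CGBitmapInfo.byteOrder16Big.rawValue    == 3 << 12)  // 0x3000
    assert(CGBitmapInfo.byteOrder32Big.rawValue    == 4 << 12)  // 0x4000

    // Typical use: pick an explicit byte order (here 32-bit little endian,
    // i.e. BGRA in memory) rather than relying on the default.
    let bitmapInfo = CGBitmapInfo.byteOrder32Little.rawValue |
                     CGImageAlphaInfo.premultipliedFirst.rawValue
    let context = CGContext(data: nil, width: 16, height: 16,
                            bitsPerComponent: 8, bytesPerRow: 0,
                            space: CGColorSpaceCreateDeviceRGB(),
                            bitmapInfo: bitmapInfo)
    print(context != nil)  // true: a valid 32-bit little-endian context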