Question:

What does (0 << 12) mean in Swift?

swift bit-shift

161 views

3 answers

33 reputation

In the docs I found an enum case defined as:

kCGBitmapByteOrderDefault = (0 << 12)

As far as I know, this means bit shift zero 12 times... which is still zero. What am I missing?

Author: newtonian_fig · Source · Posted: 08.11.2017 10:31

Answers (3)


0 votes

6776 reputation

A construct like that is meant as either a placeholder or documentation of a no-longer-supported feature. Including kCGBitmapByteOrderDefault when summing (or OR-ing) flags yields the same value as leaving it out; it is there purely for documentation.
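A quick sanity check of that point, as a sketch with plain UInt32 values rather than the actual CoreGraphics constants:

let lowBits: UInt32 = 6                    // some value living in the low 12 bits
let combined        = lowBits | (0 << 12)  // OR-ing in the "default" byte order
print(combined == lowBits)                 // true: (0 << 12) contributes nothing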

Author: Bob Dalgleish · Posted: 08.11.2017 10:34

6 votes

324060 reputation

If you look at all of the relevant values, you see:

kCGBitmapByteOrderMask     = kCGImageByteOrderMask,
kCGBitmapByteOrderDefault  = (0 << 12),
kCGBitmapByteOrder16Little = kCGImageByteOrder16Little,
kCGBitmapByteOrder32Little = kCGImageByteOrder32Little,
kCGBitmapByteOrder16Big    = kCGImageByteOrder16Big,
kCGBitmapByteOrder32Big    = kCGImageByteOrder32Big

And kCGBitmapByteOrderMask is 0x7000 (i.e. the three bits after you shift over 12 bits; 0b0111000000000000).

So 0 << 12 is just a very explicit way of saying "the byte-order bits, the ones above the first 12, are 0". Yes, 0 << 12 is literally 0, but it makes explicit that kCGBitmapByteOrderDefault does not mean the whole CGBitmapInfo value is zero (the first 12 bits can carry other meaningful, non-zero data); it only means the bits above the first 12 are zero.

So, in short, the << 12 is not technically necessary, but makes the intent more explicit.
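For illustration, here is a small Swift sketch using the raw values quoted above (local constants, not the real CoreGraphics symbols), showing how the byte-order field can be "default" even when the whole value is non-zero:

let byteOrderMask: UInt32    = 0x7000   // 0b0111_0000_0000_0000, bits 12-14
let byteOrderDefault: UInt32 = 0 << 12
let info: UInt32 = 1 | byteOrderDefault // low 12 bits non-zero (e.g. an alpha-info value)

print(info == 0)                                  // false: the whole value is not zero
print((info & byteOrderMask) == byteOrderDefault) // true: the byte-order field is still "default"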

Author: Rob · Posted: 08.11.2017 10:46

1 vote

3243 reputation

Solution

Per Apple Doc for CGBitmapInfo:

The byte order constants specify the byte ordering of pixel formats.

...If the code is not written correctly, it’s possible to misread the data which leads to colors or alpha that appear wrong.

The various constants for kCGBitmapByteOrder mostly map to similarly named constants in CGImageByteOrder, which does not have a "Default."

Those values are covered in detail in the docs for CGImageByteOrderInfo.

The one you asked about is the default: as you noted, shifting 0 left by 12 is still 0, but as Rob notes the surrounding bits still matter.

What you were missing is the other options:

kCGBitmapByteOrder16Little = (1 << 12) 16-bit, little endian format.

kCGBitmapByteOrder32Little = (2 << 12) 32-bit, little endian format.

kCGBitmapByteOrder16Big = (3 << 12) 16-bit, big endian format.

kCGBitmapByteOrder32Big = (4 << 12) 32-bit, big endian format.

These use different values depending on whether the image is 16-bit or 32-bit, and on whether the least- or most-significant byte comes first.

The "Default" (0 << 12) follows the same format/process of shifting by 12. And, as Rob pointed out, the the first 12 bits and any following also have meaning. Using these other options has a different effect in how they're interpreted vs using the "Default"

Author: mc01 · Posted: 08.11.2017 10:59