    #pragma bitfields=reversed
    typedef struct {
        unsigned :1;
        unsigned int code:3;
        unsigned :26;
        const unsigned flag1:1;
        unsigned flag:1;
    } tIO_STATUS;

    typedef struct {
        unsigned :1;
        unsigned int start:1;
        unsigned :30;
    } tIO_STATUSA;
    #pragma bitfields=default

    typedef union {
        tIO_STATUS;
        tIO_STATUSA;
    } tIO_STATUS2;

    #define IO_ADR 0x20000004
    volatile tIO_STATUS2 * const pio_device = (tIO_STATUS2 *) (IO_ADR);

    pio_device->code = 3;
    while (pio_device->flag) {};
    pio_device->start = 1;

This works, but the creation of two additional types is somewhat redundant (in my opinion). We can work with masks instead:

    #define BITNUM  2
    #define BITMASK (1 << BITNUM)

    pio_device->code |= (1 << BITNUM);   // set the bit
    pio_device->code &= ~BITMASK;        // clear the bit

Note that the compiler does not check the validity of the value in such operations (as opposed to assigning a constant to the field). Also note that it is the bit number that is written down, and the bitmask is produced from it by shifting. I do it this way because it is harder to make a mistake typing a bit number than typing a mask (0x40000000) that you would have to work out in your head anyway, and the generated code is exactly the same (this is, of course, a matter of taste).

Now a really serious remark: all authors of articles on embedded programming (myself included) categorically do NOT recommend using such constructions directly in program code; instead, define macros for setting and clearing bits

    #define SETBIT(DEST,MASK) (DEST) |= (MASK)
    #define CLRBIT(DEST,MASK) (DEST) &= ~(MASK)

and from then on use only them:

    SETBIT(pio_device->code, 1 << BITNUM);
    CLRBIT(pio_device->code, BITMASK);

First, you will not make an annoying mistake such as forgetting the bitwise negation (~) in the second case or writing a logical negation (!) instead of it (those who have never made such a mistake are very attentive people; unfortunately, I am not one of them). Second, when moving to a bit-addressable MCU you can redefine these macros (at least for single bits) to take advantage of the hardware and get noticeably faster code. Third, if (when) these operations have to be made atomic, it is much easier to do so in the macro definition than to chase them all over the program.
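For example, on a Cortex-M target with CMSIS the macros could be redefined roughly as in the sketch below. This is only one possible way to get atomicity (a PRIMASK-based critical section), and it assumes the CMSIS intrinsics __get_PRIMASK(), __set_PRIMASK() and __disable_irq() are available:

    /* Sketch only: assumes a CMSIS device header is already included,
       which provides the PRIMASK intrinsics and uint32_t. */
    #define SETBIT(DEST,MASK)                                               \
        do {                                                                \
            uint32_t primask_save_ = __get_PRIMASK();                       \
            __disable_irq();              /* enter critical section */      \
            (DEST) |= (MASK);             /* same read-modify-write */      \
            __set_PRIMASK(primask_save_); /* restore interrupt state */     \
        } while (0)

    #define CLRBIT(DEST,MASK)                                               \
        do {                                                                \
            uint32_t primask_save_ = __get_PRIMASK();                       \
            __disable_irq();                                                \
            (DEST) &= ~(MASK);                                              \
            __set_PRIMASK(primask_save_);                                   \
        } while (0)

In a real project you would of course keep only one definition of each macro, and on an MCU with bit-banding or dedicated bit set/clear registers the single-bit variants can instead be turned into a single store, which is both atomic and fast.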
Now, about the value that we write into the code field. To get rid of the magic number, most often something like this is done:

    #define IO_DEVICE_START 3
    #define IO_DEVICE_STOP  2

    pio_device->code = IO_DEVICE_START;

The magic number has disappeared, and there is even a check that the constant fits into the bit field, but the expression pio_device->code = 1; will still be accepted by the compiler as perfectly valid. That is, the task of checking that a value is admissible falls on the shoulders of the developer and is usually implemented with an ASSERT.
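A minimal sketch of what such a check might look like is given below; the wrapper name io_device_set_code is made up for the example, and assert() stands in for whatever assertion facility the project actually uses:

    #include <assert.h>

    /* Sketch only: relies on pio_device and the IO_DEVICE_* constants
       defined above. */
    static inline void io_device_set_code(unsigned int value)
    {
        /* the field is 3 bits wide, but only these two commands are legal */
        assert(value == IO_DEVICE_START || value == IO_DEVICE_STOP);
        pio_device->code = value;
    }

    /* usage: io_device_set_code(IO_DEVICE_START); */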
The method is quite workable, widely used and perfectly acceptable, if only there were not an even more convenient option, namely the use of an enumerated type:

    #pragma bitfields=reversed
    typedef struct {
        unsigned :1;
        enum {
            IO_DEVICE_START = 3,
            IO_DEVICE_STOP  = 2,
        } code:3;
        unsigned :26;
        const unsigned flag1:1;
        unsigned flag:1;
    } tIO_STATUS;
    #pragma bitfields=default

    pio_device->code = IO_DEVICE_START;
    SETBIT(pio_device->code, BITMASK);
    pio_device->code |= BITMASK;
    pio_device->code = pio_device->code | BITMASK;

Note that for the last line we will get a warning about incompatible types, while for the two previous lines, which do exactly the same thing, we will not (this is not a bug, it's a feature). Why is this method more convenient? First, we can place the enumeration of the possible values directly in the body of the structure declaration, which is more readable. Second, the compiler will check the values in the definitions and will not let us go beyond the size of the field. Third, and most importantly, the compiler will not let us assign an invalid value (one not listed in the enumeration) to the field, although it does leave us the loophole shown in the penultimate line (if anyone knows how to close it, please write). In short, everything would be just wonderful, BUT you cannot use such a construction with just any compiler, since the C standard allows nothing but int for bit fields. Moreover, even in IAR an additional compiler option, enum_is_int, is needed so that the enumeration is treated as an int and the field gets the expected layout. But if compiler dependence does not frighten you, the method is very elegant, transparent and convenient (I agree in advance with those who will write in the comments that it greatly reduces portability).

Now consider another typical situation:

    int dev_data_r_w(int n, int data_command, int r_w, int *adr) { ... }

    int dev_data(int n, int data_command, int *adr) { return dev_data_r_w(n, data_command, 1, adr); }
    int read_dev(int n, int *adr) { return dev_data(n, 1, adr); }
    int ch_read_dev(int *adr) { return read_dev(1, adr); }

It is easy to see that the first function does the real work, while all the others are just wrappers around it, created so that the corresponding constant parameters do not have to be written out every time. In C++ (and in a number of other languages) this problem is removed by default parameter values, but in C it is still with us. My personal opinion: do not do it this way. If no dynamic type conversion is required, use macros to create convenient (easy-to-use) synonyms for the common function:

    #define dev_data(N,DC,ADR)  dev_data_r_w((N), (DC), 1, (ADR))
    #define read_dev(N,ADR)     dev_data((N), 1, (ADR))
    #define ch_read_dev(ADR)    read_dev(1, (ADR))

Such a definition is no harder to write; it loses a little in code size but wins in execution time and in the amount of memory (stack) used. The difference is especially noticeable when such multi-tier constructions end up in interrupt handling routines.

And one more observation: for some reason some programmers (if there are such readers among you, write and explain why) consider that creating their own enumerated type enum { SET=1, RESET=0 } ACTIVE; is cool. I can still understand it when this type is used to write a value into a bit, but to check its value? It seems to me that bool completely replaces this type, although who knows, I am ready to hear other opinions.

Source: https://habr.com/ru/post/222061/