This adds a priority argument to scan_common rather than hard-coding
it when inserting the work item, and uses the newly defined wiphy
priority for scanning.
Work priority was never explicitly defined anywhere, and a module
using wiphy_radio_work APIs needed to ensure it was not inserting
at a priority that would interfere with other work.
Now each type of work has its own defined priority, and future
priorities can easily be added before, after, or in between the
existing ones.
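As a rough sketch of the idea (the names, values, and ordering
convention below are hypothetical, not the actual definitions used by
the wiphy_radio_work APIs), keeping every kind of work in one ordered
list lets a new priority be slotted before, after, or between existing
entries without callers picking magic numbers:

/* Hypothetical sketch only: the real priority definitions live with
 * the wiphy_radio_work APIs and may differ in names, values, and in
 * whether lower means higher priority. */
enum wiphy_work_priority {
	WIPHY_WORK_PRIORITY_CONNECT,		/* assumed highest */
	WIPHY_WORK_PRIORITY_OFFCHANNEL,
	WIPHY_WORK_PRIORITY_SCAN,		/* what scan_common would pass */
	WIPHY_WORK_PRIORITY_PERIODIC_SCAN,	/* assumed lowest */
};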
- Mostly problems with whitespace:
  - Use of spaces instead of tabs
  - Stray spaces before a closing ')'
  - Missing spaces
- Missing 'void' from function declarations & definitions that
  take no arguments.
- Wrong indentation level
When this attribute is included, the initiator is requesting all
future frames be sent on this channel. There is no reason for a
configurator to act on this attribute (at least for now) so the
request frame will be dropped in this case. Enrollees will act
on it by switching to the new channel and sending the authentication
response.
While connected, the driver ends up choosing quite small ROC
(remain-on-channel) durations, leading to excessive ROC calls. This
will also negatively affect wireless performance for the current
network and possibly lead to missed DPP frames.
Currently the enrollee relies on autoconnect to handle connecting
to the newly configured network. This usually results in poor
performance since periodic scans are done at long intervals.
Instead, first check if the newly configured network is already
in IWD's network queue. If so, it can be connected to immediately.
If not, a full scan must be done and the results given to station.
With better JSON support, the configuration request object
can now be fully parsed. As stated in the previous commit,
there really isn't much use for this from the configurator side
apart from verifying that the mandatory values are included.
This patch also modifies the configuration result to handle
sending non-'OK' status codes in case of JSON parsing errors.
json_iter_parse is only meant to work on objects while
json_iter_next is only meant to work on arrays.
This adds checks in both APIs to ensure they aren't being
used incorrectly.
Arrays can now be parsed using the JSON_ARRAY type (stored in
a struct json_iter) and then iterated using json_iter_next. While
iterating, the type can be checked with json_iter_get_type, and
the value of each element can be obtained using any of the type
getters (int/uint/boolean/null).
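A hedged sketch of that iteration pattern (the getter signatures, and
the assumption that 'array' was filled in by a json_iter_parse() call
requesting JSON_ARRAY, are guesses; check src/json.h for the real
contract):

/* Sketch only: signatures are assumed from the description above,
 * not copied from src/json.h. 'array' is a struct json_iter that a
 * json_iter_parse() call filled in for a JSON_ARRAY value. */
static void dump_uint_elements(struct json_iter *array)
{
	unsigned int value;

	/* json_iter_next() advances over the array elements */
	while (json_iter_next(array)) {
		/* The element type can be checked before using a getter */
		if (json_iter_get_type(array) != JSON_PRIMITIVE)
			continue;

		/* A type getter returns false if the element is some
		 * other kind of primitive */
		if (json_iter_get_uint(array, &value))
			l_debug("array element: %u", value);
	}
}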
This adds support for boolean, integer (signed and unsigned), and
null types. JSON_PRIMITIVE should be used as the type when
parsing, and the output value should be a struct json_iter.
Once parsed, the actual value can be obtained using one of
the primitive getters. If the type does not match, they will
return false.
If using JSON_OPTIONAL with JSON_PRIMITIVE, the resulting
iterator can be checked with json_iter_is_valid. If false,
the key/value was not found or the type did not match.
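A hedged sketch of the optional-primitive flow (the JSON_OPTIONAL
argument form, the JSON_UNDEFINED terminator, the getter signature,
and the "band5GHz" key are all illustrative assumptions, not copied
from the actual code):

/* Sketch only: argument conventions and the "band5GHz" key are
 * illustrative, not taken from the actual DPP/JSON code. */
static void parse_band_flag(struct json_iter *object)
{
	struct json_iter band;	/* JSON_PRIMITIVE values land in an iter */
	bool supported;

	if (!json_iter_parse(object,
				JSON_OPTIONAL("band5GHz", JSON_PRIMITIVE, &band),
				JSON_UNDEFINED))
		return;

	/* Optional key: an invalid iterator means the key was missing
	 * or the value was not a primitive */
	if (!json_iter_is_valid(&band))
		return;

	/* The boolean getter returns false if the primitive is not
	 * actually a boolean */
	if (json_iter_get_boolean(&band, &supported))
		l_debug("band5GHz: %s", supported ? "true" : "false");
}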
First, this was renamed to 'count_tokens_in_container' to be
more general purpose (i.e. to cover future array counting).
The way the tokens are counted has also changed to be more intuitive.
While the previous way was correct, it was somewhat convoluted in
how it worked (finding the next parent of the object's parent).
Instead we can use the container token itself as the parent and
begin counting tokens. When we find a token with a parent index
less than the target, we have reached the end of this container.
This also works for nested containers, including arrays, since we
no longer rely on a key (which an array element would not have).
For example:
{
    "first":{"foo":"bar"},
    "second":{"foo2":"bar2"}
}
index 0 <overall object>
index 1 "first" with parent 0
index 2 {"foo":"bar"} with parent 1
Counting tokens inside "first"'s object we have:
index 3 "foo" with parent 2
index 4 "bar" with parent 3
If we continue counting we reach:
index 5 "second" with parent 0
This terminates the counting loop, since the parent index is
less than 2 (the index of the {"foo":"bar"} object).
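As a standalone illustration of that loop, here is a minimal
self-contained model. The real code operates on the JSON parser's
full token structs inside json.c; here a token is reduced to just its
parent index, laid out to mirror the example above:

#include <stdio.h>

struct token {
	int parent;	/* index of the enclosing token, -1 for the root */
};

/*
 * Count the tokens contained in the container token at index
 * 'container'.  Tokens are stored in document order, so everything
 * inside the container appears after it; the first token whose parent
 * index is lower than 'container' lies outside it and stops the scan.
 */
static int count_tokens_in_container(const struct token *tokens,
					int n_tokens, int container)
{
	int count = 0;
	int i;

	for (i = container + 1; i < n_tokens; i++) {
		if (tokens[i].parent < container)
			break;

		count++;
	}

	return count;
}

int main(void)
{
	/* {"first":{"foo":"bar"},"second":{"foo2":"bar2"}} */
	struct token tokens[] = {
		{ -1 },	/* 0: overall object */
		{ 0 },	/* 1: "first" */
		{ 1 },	/* 2: {"foo":"bar"} */
		{ 2 },	/* 3: "foo" */
		{ 3 },	/* 4: "bar" */
		{ 0 },	/* 5: "second" */
		{ 5 },	/* 6: {"foo2":"bar2"} */
		{ 6 },	/* 7: "foo2" */
		{ 7 },	/* 8: "bar2" */
	};

	/* Two tokens ("foo" and "bar") live inside the object at index 2 */
	printf("%d\n", count_tokens_in_container(tokens,
			(int) (sizeof(tokens) / sizeof(tokens[0])), 2));

	return 0;
}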
In file included from ./ell/ell.h:15,
from ../../src/dpp.c:29:
../../src/dpp.c: In function ‘authenticate_request’:
../../ell/log.h:79:22: warning: format ‘%lu’ expects argument of type ‘long unsigned int’, but argument 8 has type ‘size_t’ {aka ‘unsigned int’} [-Wformat=]
79 | l_log(L_LOG_DEBUG, "%s:%s() " format, __FILE__, \
| ^~~~~~~~~~
../../ell/log.h:54:16: note: in definition of macro ‘l_log’
54 | __func__, format "\n", ##__VA_ARGS__)
| ^~~~~~
../../ell/log.h:103:31: note: in expansion of macro ‘L_DEBUG_SYMBOL’
103 | #define l_debug(format, ...) L_DEBUG_SYMBOL(__debug_desc, format, ##__VA_ARGS__)
| ^~~~~~~~~~~~~~
../../src/dpp.c:1235:3: note: in expansion of macro ‘l_debug’
1235 | l_debug("I-Nonce has unexpected length %lu", i_nonce_len);
| ^~~~~~~
Some wpa_cli utilities return a result that isn't possible to
get with wait_for_event unless you already know what the result will be.
This adds wait_for_result, which simply returns the first event that
comes in.
wait_for_event was checking for the event string's presence in the
rx_data array, which meant the event string had to match a received
event exactly. This poses problems with events that include additional
information which the caller may not know or may not care about.
For example:
DPP-RX src=02:00:00:00:02:00 freq=2437 type=11
Previously, waiting for this event required the caller to know src,
freq, and type. If the caller only wants to wait for DPP-RX, it can
now do that.
This is created by the Python interpreter as a speed optimization,
but it poses problems if copied to /tmp since a previous test may
have already copied it, leading to an exception.
Since a Device class instance can represent multiple modes (AP, AdHoc,
station), move StationDebug out of __init__ and only create it when it
is used (presumably only when the device is in station mode).
The StationDebug instance is now created in a property method,
consistent with 'station_if'. If the Device is not in station mode, it
is automatically switched when the test calls any StationDebug methods.
If the Device mode is changed from 'station', the StationDebug
instance is destroyed.