The temporary ban list is cleared when a network is connected to
successfully, and also in network_connect_failed. Unfortunately,
network_connect_failed is not called in all paths (e.g. during
autoconnect) since it messes with the state of secrets and passphrases.
Clear the list in network_disconnected() instead, since it is guaranteed
to be called in every circumstance.
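A minimal sketch of the idea (the member and helper names in
network.c may differ; 'blacklist' here is just a placeholder for the
temporary ban list):

    static void network_disconnected(struct network *network)
    {
        /* ... existing disconnect handling ... */

        /* Clear the temporary BSS ban list here since this path is
         * hit on every disconnect, including autoconnect failures
         * that never reach network_connect_failed. */
        l_queue_clear(network->blacklist, NULL);
    }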
This will be effectively the same as the CONNECTING state, but can be
used to enable differing behavior depending on whether the
connection was triggered by autoconnect or via D-Bus.
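For illustration only, the split might look like the following
(the enum values shown are placeholders, not necessarily the names
used in station.h):

    /* Placeholder names, shown only to illustrate the split between
     * the two connecting states. */
    enum station_state {
        STATION_STATE_DISCONNECTED,
        STATION_STATE_CONNECTING,       /* triggered via D-Bus */
        STATION_STATE_CONNECTING_AUTO,  /* triggered by autoconnect */
        STATION_STATE_CONNECTED,
    };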
Code that walked the VHT TX/RX MCS maps seemed to assume that bit_field
operated on bits that start at '1'. But this utility actually operates
on bits that start at '0'. I.e. the least significant bit is at
position 0.
While we're at it, rename the mcs variable to bitoffset to make it
clearer how the maps are being iterated over: the supported MCS is
actually the value found in the map, not the bit position.
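For illustration, a 0-based walk over the map looks roughly like
this (a self-contained sketch, not the in-tree code; the bit_field()
here only mimics the 0-based semantics of the util.h helper):

    #include <stdint.h>

    /* Sketch of a 0-based bit_field() helper: the least significant
     * bit is bit 0. */
    static uint8_t bit_field(uint8_t oct, int start, int num)
    {
        return (oct >> start) & ((1 << num) - 1);
    }

    /* Walk a 16-bit VHT RX/TX MCS map (two bits per spatial stream,
     * stream 1 in the least significant bits) and return the number
     * of supported spatial streams. */
    static unsigned int count_vht_streams(const uint8_t mcs_map[2])
    {
        unsigned int bitoffset, nss = 0;

        for (bitoffset = 0; bitoffset < 16; bitoffset += 2) {
            uint8_t max_mcs = bit_field(mcs_map[bitoffset / 8],
                                        bitoffset % 8, 2);

            if (max_mcs != 3)   /* 3 == stream not supported */
                nss++;
        }

        return nss;
    }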
We do not seem to be specifying the msize for the root filesystem,
results in this warning being printed:
qemu-system-x86_64: warning: 9p: degraded performance: a reasonable high msize should be chosen on client/guest side (chosen msize is <= 8192). See https://wiki.qemu.org/Documentation/9psetup#msize for details.
There doesn't seem to be much performance difference in the end since
iwd does not process large files.
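For reference, the 9p mount carries msize as a mount option, so the
fix boils down to adding something along these lines to the root
filesystem's rootflags (the value shown is only an illustration, not
necessarily what test-runner ends up using):

    rootflags=trans=virtio,version=9p2000.L,msize=262144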
This option has not been used in a very long time, and is of limited
utility since the only thing D-Bus debugging does is hexdump the
content of D-Bus messages to the terminal.
The current calculation was giving erroneous results when it came to VHT
MCS index 4 and VHT MCS index 8 & 9.
Switch to a precomputed lookup table and add a multiplication factor
for short GI.
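A rough sketch of the table approach (the values are the standard
80 MHz, long-GI, single-stream VHT rates in kbit/s; the in-tree table
is likely organized differently, e.g. per channel width):

    #include <stdint.h>
    #include <stdbool.h>

    /* VHT MCS 0-9, 80 MHz, 1 spatial stream, 800 ns GI, in kbit/s */
    static const uint32_t vht_rates_80mhz[] = {
        29250, 58500, 87750, 117000, 175500,
        234000, 263250, 292500, 351000, 390000,
    };

    static uint64_t vht_data_rate(unsigned int mcs, unsigned int nss,
                                  bool short_gi)
    {
        uint64_t rate;

        if (mcs > 9)
            return 0;

        rate = (uint64_t) vht_rates_80mhz[mcs] * nss;

        /* Short GI shortens the symbol from 4.0 us to 3.6 us,
         * i.e. a 10/9 multiplication factor. */
        if (short_gi)
            rate = rate * 10 / 9;

        return rate;
    }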
These test cases depend on setting up the existing hostapd instance
with a set of known addresses, which might be different from what
test-runner sets. During this time, any scans might result in both
the old and the new addresses used by hostapd appearing in the scan
results.
Fix that by using start_iwd=0, which tells test-runner that the test
wants to start iwd itself. This delays starting iwd until after the
setUpClass routine has been called and hostapd is configured
properly.
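i.e. something along these lines in the test's hw.conf, assuming the
usual [SETUP] group (other keys omitted):

    [SETUP]
    start_iwd=0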
Also use more sensible RSSI values for the 'non-preferred' BSS.
Otherwise, ranking BSSes by throughput can confuse the test logic
since both BSSes are ranked the same and either can be picked by
autoconnect.
Right now the --valgrind option logs to a static file named
'valgrind.log'. This means that for any test that runs multiple
instances of iwd, output is lost for all invocations except the last.
Fix that by using a per-process log file and making sure that all log
files are printed to stdout when the test ends.
This approach isn't perfect since it is possible for the PID to be
reused, but it is better than the current behavior.
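One way to get per-process files is valgrind's %p substitution in
--log-file, shown here only to illustrate the idea (test-runner's
actual invocation and paths may differ):

    valgrind --leak-check=full --log-file=/tmp/valgrind.log.%p iwd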
ap_reset() seems to be called whenever the AP is stopped or removed due
to interface shutdown. For some reason ap_reset did not remove the DHCP
server object, resulting in leaks:
==211== at 0x483879F: malloc (vg_replace_malloc.c:307)
==211== by 0x46B5AD: l_malloc (util.c:62)
==211== by 0x49B0E2: l_dhcp_server_new (dhcp-server.c:715)
==211== by 0x433AA3: ap_setup_dhcp (ap.c:2615)
==211== by 0x433AA3: ap_load_dhcp (ap.c:2645)
==211== by 0x433AA3: ap_load_config (ap.c:2753)
==211== by 0x433AA3: ap_start (ap.c:2885)
==211== by 0x434A96: ap_dbus_start_profile (ap.c:3329)
==211== by 0x482DA9: _dbus_object_tree_dispatch (dbus-service.c:1815)
==211== by 0x47A4D9: message_read_handler (dbus.c:285)
==211== by 0x4720EB: io_callback (io.c:120)
==211== by 0x47130C: l_main_iterate (main.c:478)
==211== by 0x4713DB: l_main_run (main.c:525)
==211== by 0x4713DB: l_main_run (main.c:507)
==211== by 0x4715EB: l_main_run_with_signal (main.c:647)
==211== by 0x403EE1: main (main.c:550)
==209== by 0x43E48A: netconfig_ipv4_select_and_install (netconfig.c:887)
==209== by 0x43E48A: netconfig_configure (netconfig.c:1025)
==209== by 0x41743C: station_connect_cb (station.c:2556)
==209== by 0x408E0D: netdev_connect_ok (netdev.c:1311)
==209== by 0x47549E: process_unicast (genl.c:994)
==209== by 0x47549E: received_data (genl.c:1102)
==209== by 0x4720EB: io_callback (io.c:120)
==209== by 0x47130C: l_main_iterate (main.c:478)
==209== by 0x4713DB: l_main_run (main.c:525)
==209== by 0x4713DB: l_main_run (main.c:507)
==209== by 0x4715EB: l_main_run_with_signal (main.c:647)
==209== by 0x403EE1: main (main.c:550)
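The fix amounts to also tearing the DHCP server down in ap_reset();
roughly (struct and member names here are illustrative, not
necessarily the in-tree ones):

    static void ap_reset(struct ap *ap)
    {
        /* ... existing teardown ... */

        /* Destroy the DHCP server created by ap_setup_dhcp() so it
         * is not leaked when the AP is stopped or the interface goes
         * away.  'server' is a hypothetical member name. */
        if (ap->server) {
            l_dhcp_server_destroy(ap->server);
            ap->server = NULL;
        }
    }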
Prior to the BSS blacklist, a BSS-based autoconnect list made
the most sense, but now station actually retries all BSSes upon
failure. This means that for each BSS in the autoconnect list,
station will attempt to connect to every other BSS under that SSID
if there is a failure. Essentially this is a network-based
autoconnect list, just an indirect way of doing it.
Instead, the autoconnect list can be purely network based, using
the network rank for sorting. This avoids the need for a special
autoconnect_entry struct and ensures the last connected network is
chosen first (simply based on existing network ranking logic).
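In code terms, the ordered insert can lean entirely on the existing
rank, e.g. (a sketch; the queue and compare function names are
illustrative, not the in-tree ones):

    static void station_autoconnect_add(struct station *station,
                                        struct network *network)
    {
        /* Ordered insert by the already-computed network rank so the
         * last connected (highest ranked) network is tried first. */
        l_queue_insert(station->autoconnect_list, network,
                                network_rank_compare, NULL);
    }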
It was observed that IWD's ranking of BSSes did not always
end up with the fastest being chosen. This was due to IWD's
heavy weighting of signal strength. This is a decent way of ranking,
but even better is calculating a theoretical data rate, which
was also done and factored in. The problem is that the data rate
factor was always outdone by the signal strength.
Instead, remove signal strength entirely as this is already taken
into account by the data rate calculation. This also removes
the check for rate IEs. If no IEs are found the parser will
base the data rate solely on RSSI.
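Conceptually the per-BSS rank then boils down to something like this
sketch, where scan_bss_estimate_data_rate() and rate_from_rssi() are
stand-ins for whatever helpers the patch actually uses:

    /* Sketch only: the rank is derived purely from the estimated
     * data rate; RSSI only matters indirectly, through the rate
     * estimate (or as the sole input when no rate IEs are present).
     */
    static void bss_compute_rank(struct scan_bss *bss)
    {
        uint64_t rate;

        if (!scan_bss_estimate_data_rate(bss, &rate))
            rate = rate_from_rssi(bss->signal_strength);

        bss->rank = rate / 1000000;  /* Mbit/s as the base rank */
    }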
There were a few other factors removed which will be added back
when ranking *networks* rather than BSSes. The WPA version (or open)
was removed, as well as the privacy capability. These values really
should not differ between BSSes in the same SSID and as such
should be used for network ranking instead.
Both the supported rates and extended supported rates IEs are
obtained from scan results. These IEs go through
ie_tlv_init/ie_tlv_next as well as direct length checks (for
supported rates at least; extended supported rates can be as long as
a single byte can hold, 1 - 255), which verifies that the length in
the IE matches the overall IE length stored in scan_bss. Because of
this, ie_parse_supported_rates_from_data was doing double duty,
re-initializing a TLV iterator.
Instead, since we know the IE length is within bounds, the
length/data can simply be accessed directly out of the buffer. This
avoids the need for a wrapper function entirely.
The length parameters were also removed, since the length is now
obtained directly from the IE.
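Since the stored IE has already been length-checked against the
buffer, the rates can be read straight out of it, e.g. (a sketch
assuming the buffer still carries the two-byte IE header):

    #include <stdint.h>

    /* data points at a (Extended) Supported Rates IE:
     * data[0] = element ID, data[1] = length, data[2..] = rates in
     * units of 500 kbit/s with the high bit marking a basic rate. */
    static uint8_t ie_max_rate(const uint8_t *data)
    {
        uint8_t i, max = 0;

        for (i = 0; i < data[1]; i++) {
            uint8_t r = data[2 + i] & 0x7f;

            if (r > max)
                max = r;
        }

        return max;  /* in units of 500 kbit/s */
    }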
The FT-over-DS procedure now authenticates with multiple BSSes
upon connecting. This causes list_sta() to return our address for
any authenticated AP. The test has now been changed to work with
this new behavior, and a check was added that the station initially
performed a full connection to the expected AP.
Since netdev maintains the list of FT-over-DS info structs, there is
no longer any need for station to get callbacks indicating whether
the initial action frame got a response or not. This removes the
need for the callback handler, user data, and response timeout.
Roam times can be slightly improved by sending out the FT-over-DS
action frames to any BSS in the mobility domain immediately after
connecting. This pre-authenticates IWD to each AP, which means
Reassociation can happen right away when a roam is needed.
When a roam is needed, station_transition_start will first try
FT-over-DS (if supported) via netdev_fast_transition_over_ds. The
return value is checked and, if netdev has no cached entries,
FT-over-Air will be used instead.
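The candidate selection in station_transition_start() then becomes,
roughly (exact signatures in the tree differ; this is only a
sketch):

    static int try_fast_transition(struct netdev *netdev,
                                   struct scan_bss *bss,
                                   bool ft_over_ds_supported,
                                   netdev_connect_cb_t cb)
    {
        /* Prefer FT-over-DS when this BSS supports it and netdev
         * already holds a cached entry from the post-connect action
         * frames; otherwise fall back to FT over the air. */
        if (ft_over_ds_supported &&
                netdev_fast_transition_over_ds(netdev, bss, cb) == 0)
            return 0;

        return netdev_fast_transition(netdev, bss, cb);
    }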
The beauty of FT-over-DS is that a station can exchange action
frames with many APs to prepare for a future roam. Each AP
authenticates the station, and when a roam happens the station
can immediately move to Reassociation.
To handle this, a queue of netdev_ft_over_ds_info structs is used
instead of a single entry. Using the new ft.c parser APIs, these
info structs can be looked up when responses come in. For now
the timeouts/callbacks are kept, but these will be removed since it
really does not matter if the AP sends a response (this keeps
station happy until the next patch).
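Conceptually, each parked FT-over-DS exchange is tracked by an entry
like this, kept in a queue in netdev and looked up by the (spa, aa)
pair when a response arrives (field names are illustrative):

    #include <stdint.h>
    #include <stdbool.h>
    #include <stddef.h>

    struct netdev_ft_over_ds_info {
        uint8_t spa[6];   /* our (station) address */
        uint8_t aa[6];    /* target AP's (authenticator) address */
        uint8_t *ies;     /* parsed FT IEs needed for Reassociation */
        size_t ies_len;
        bool parsed;      /* a valid response was received */
    };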
This is to prepare for multiple concurrent FT-over-DS action frames.
A list will be kept in netdev and, for lookup purposes, netdev needs
to parse the start of the frame to grab the aa/spa addresses. In this
call the IEs are also returned and passed to the new
ft_over_ds_parse_action_response.
For now the address checks have been moved into netdev, but this will
eventually turn into a queue lookup.
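For reference, the fixed part of the FT Action Response that has to
be parsed for that lookup sits right before the IEs; a sketch of the
layout and parse (not the in-tree ft_over_ds_parse_action_response
itself):

    #include <stdint.h>
    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    /* FT Action Response body:
     *   Category (1) | FT Action (1) | STA Address / spa (6) |
     *   Target AP Address / aa (6) | Status Code (2) | IEs...
     */
    static bool parse_ft_action_rsp(const uint8_t *body, size_t len,
                                    uint8_t spa[6], uint8_t aa[6],
                                    uint16_t *status,
                                    const uint8_t **ies,
                                    size_t *ies_len)
    {
        if (len < 16)
            return false;

        memcpy(spa, body + 2, 6);
        memcpy(aa, body + 8, 6);
        *status = body[14] | (body[15] << 8);  /* little endian */
        *ies = body + 16;
        *ies_len = len - 16;

        return true;
    }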
test-runner will print out any files left behind after a test, which
lets the developer know something was not cleaned up. But in this
case test-runner should also remove these files so they are not left
behind, and printed, for every subsequent test.
This value sets the roaming threshold on 5GHz networks. The
threshold has been separated from 2.4GHz because in many cases
5GHz can perform much better at low RSSI than 2.4GHz.
In addition, the BSS ranking logic was reworked and 5GHz is now
preferred much more strongly, even at low RSSI. This means we need a
lower RSSI floor before roaming, otherwise IWD would end up roaming
immediately after connecting due to low-RSSI CQM events.
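Assuming the new knob lives next to the existing threshold in
main.conf's [General] group, usage would look something like this
(values purely illustrative):

    [General]
    RoamThreshold=-70
    RoamThreshold5G=-76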
This is being added as a developer method and should not be used
in production. For testing purposes, though, it is quite useful as
it forces IWD to roam to a provided BSS and bypasses IWD's roaming
and ranking logic for choosing a roam candidate.
To use this, a BSSID is provided as the only parameter. If this
BSS is not in IWD's current scan results, -EINVAL will be returned.
If IWD knows about the BSS it will attempt to roam to it, whether
that is via FT, FT-over-DS, or Reassociation. These details are
still sorted out by IWD's station_transition_start() logic.
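For illustration only, a call could look like the following; the
object path and, in particular, the argument signature ('s' for a
BSSID string) are assumptions, since this message only says the
BSSID is the sole parameter:

    # Hypothetical invocation; signature and path are assumptions
    busctl call net.connman.iwd /net/connman/iwd/0/3 \
        net.connman.iwd.StationDiagnostics Roam s "aa:bb:cc:dd:ee:ff"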
This will enable developer features to be used. Currently the only
user of this will be the StationDiagnostics.Roam() method, which
should only be exposed in this mode.
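Assuming the switch ends up as a command-line flag, enabling it
would look something like this (the flag name is an assumption, not
confirmed by this message):

    # flag name assumed for illustration
    iwd --developer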