This is somewhat of a hack, but IWDDBusAbstract is a very
convenient abstraction over DBus objects. The only piece that restricts
it to IWD is the hardcoded IWD_SERVICE. Instead we can pass in a
keyword argument which defaults to IWD_SERVICE. That way other modules
(like EAD) can utilize this abstraction with their own service simply
by changing that service argument.
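A rough sketch of the idea (the constructor details and the EAD names
below are illustrative, not the exact signatures in iwd.py):

import dbus

IWD_SERVICE = 'net.connman.iwd'

class IWDDBusAbstract:
    _iface_name = None   # set by each subclass

    def __init__(self, object_path, service=IWD_SERVICE):
        self._bus = dbus.SystemBus()
        proxy = self._bus.get_object(service, object_path)
        self._iface = dbus.Interface(proxy, self._iface_name)

# Another module can now reuse the same plumbing with its own service:
class EadAdapter(IWDDBusAbstract):
    _iface_name = 'net.connman.ead.Adapter'   # illustrative interface name

    def __init__(self, path):
        super().__init__(path, service='net.connman.ead')   # illustrative service name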
The interface was hard coded to wln0 which works when running single
tests but not when running multiple. Instead use the actual ifname
that hostapd is using.
Tests that DHCP, using IWD's internal netconfig, functions properly.
The actual IP address assignment is not verified, but since IWD does
not signal the connection as successful unless DHCP succeeds we
can assume it was successful by checking that the device is connected.
The process of actually starting dhcpd and configuring the interfaces
is quite simple so it was left in the autotest itself. If (or when)
more tests require IP capabilities (p2p, FILS, etc.) this could be
moved into test-runner itself and be made common. The reason I did not
put it in there now is a) because this is the only test and b) more
complex DHCP cases are likely to develop and may require more than this
simplistic setup (like multiple APs/interfaces).
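Roughly what the in-test setup might look like (the ctx process API,
addresses and dhcpd arguments here are illustrative):

from pyroute2 import IPRoute

def setup_dhcp(ctx, ifname):
    # Give the AP interface a static address for dhcpd to serve from.
    ip = IPRoute()
    idx = ip.link_lookup(ifname=ifname)[0]
    ip.addr('add', index=idx, address='192.168.1.1', prefixlen=24)
    ip.link('set', index=idx, state='up')

    # Run dhcpd in the foreground so the test context tracks it and
    # cleans it up when the test ends.
    ctx.start_process(['dhcpd', '-f', '-cf', '/tmp/dhcpd.conf', ifname])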
The AdHoc functionality in iwd.py was not consistent at all with
how all the other classes worked (my bad). Instead we can create
a very simple AdHocDevice class which inherits all the DBus magic
in the IWDDBusAbstract class.
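Something along these lines is all that is needed; the property and
method plumbing comes for free from the base class (treat the snippet
as a sketch of the approach rather than the exact class body):

class AdHocDevice(IWDDBusAbstract):
    '''Thin wrapper over a device's net.connman.iwd.AdHoc interface.'''
    _iface_name = 'net.connman.iwd.AdHoc'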
Many tests waited on the network object 'connected' property after
issuing a Connect command. This is not correct as 'connected' is
set quite early in the connection process. The correct way of doing
this is waiting for the device state to change to connected.
This common code was replaced, hopefully putting to rest any random
failures that happen occasionally.
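The replacement pattern, roughly:

# old, racy: 'connected' is set early in the connection process
condition = 'obj.connected'
wd.wait_for_object_condition(ordered_network.network_object, condition)

# new: wait for the device itself to reach the connected state
condition = 'obj.state == DeviceState.connected'
wd.wait_for_object_condition(device, condition)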
Some cleanup code got removed by mistake which cleared out any
hwsim rules before the next subtest. Without this the second test
would end up getting erroneous signal strength numbers in the scan
results causing a failure.
This got added in the re-write, but a __del__ method was also
added to the Rule class. This caused problems if hwsim
cleaned up, since removing the rules caused each rule
to call __del__. Since the rule had already been removed there
was no longer a DBus interface, so __del__ raised an exception.
Before the re-write there were some interesting escapes being used for
set_neighbor. Curiously, hostapd now fails to set the neighbor due
to these escapes, so they have been removed.
Switched around hwsim rules with the IWD initializer to avoid
IWD periodically scanning before hwsim rules are in place. Removed
some unneeded code during teardown.
Changed to wait for DeviceState instead of network object as well
as moved hwsim rules ahead of the IWD initializer to avoid IWD
scanning before the rules are fully in place.
This test occasionally failed, and it used the old style of waiting
for 'connected' on the network object instead of the device object.
The hwsim rule was also moved ahead of the IWD() initializer which
ensures that IWD doesn't scan before the rule can be set/processed.
This test occasionally fails due to having no hwsim rules. Basically we
were just expecting IWD to connect to one of 3 access points, but
their rankings were equal, so it chose the first in the scan list.
Now a signal strength is assigned to each AP to steer IWD into
connecting to the expected AP.
As with other tests, wait on device state instead of the network
object. The connectivity test was also changed to not check for
group traffic since AP does not negotiate the IGTK at this time.
There were a number of fixes here. The waits were changed to wait
on the device state instead of the network state and hwsim rules
were removed after the test so as not to interfere with future tests.
One of the rules was setting the signal to -10000, which was causing
the ranking to be zero.
Updated testFT-SAE-roam to use the TestContext APIs as well as
fixed the failure which was introduced after requiring stricter
AKM logic for SAE networks. The new failure was due to the hostapd
config not including the standard SAE AKM which is actually
required by the spec.
Slower systems may not be able to make some timeouts that tests
mandated. All timeouts were increased significantly to allow tests
to pass on slow systems.
It is not safe to assume that the python dbus implementation will
wait for a method to return. The documentation says this with
respect to reply_handler/error_handler:
"If both are None, the implementation may request that no reply is sent"
To stay on the safe side we should always include the error/reply
handlers and wait for the operation to complete.
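A minimal sketch of the safe pattern with dbus-python (this assumes
dbus.mainloop.glib.DBusGMainLoop has been installed as the default
main loop; the helper and handler names are illustrative):

from gi.repository import GLib

def call_and_wait(proxy_method, *args):
    loop = GLib.MainLoop()
    result = {}

    def reply_cb(*ret):
        result['reply'] = ret
        loop.quit()

    def error_cb(error):
        result['error'] = error
        loop.quit()

    # Passing the handlers makes the call asynchronous and forces a
    # reply to be requested.
    proxy_method(*args, reply_handler=reply_cb, error_handler=error_cb)
    loop.run()

    if 'error' in result:
        raise result['error']
    return result['reply']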
test-runner now supports interface name replacement inside hostapd
config files. Since a given test configuration doesn't know what
interface names there will be $ifaceN can be specified instead e.g.
rsn_preauth_interfaces=$iface0 $iface1
The $ifaceN values will be replaced with actual interface names when
the test is started.
This patch also removes ctrl_interface inside the hostapd config
files as this is no longer required.
This test was unreliable since it was assuming a periodic scan would
happen at just the right time. Instead, since we are expecting autoconnect,
we can just wait for DeviceState.connected and, once connected,
verify that the network was correct.
This test was never 100% reliable, and after the test-runner re-write
it became extremely unreliable. The issue came down to the very common
block of code that's present in many tests where we wait for obj.scanning
then not obj.scanning. This is fine when a dbus scan() is explicitly
done beforehand, otherwise it can lead to problems. Without a dbus scan
explicitly called we are assuming a periodic scan will happen. If it
has already happened, the initial wait for obj.scanning will never succeed
and will time out.
This probably needs to be changed in several tests, but for this specific
case we can remove the waits completely. Since
check_autoconnect_hidden_network has a 30 second wait on
DeviceState.connected this will ultimately time out if anything goes
wrong. There isn't any great reason to wait for scanning (for this test
specifically).
A minor style change was also made when initializing IWD. The values
passed in this test are now the default, so no arguments need to be
passed.
iwd.py was updated to use the TestContext APIs to start/stop
IWD. This makes the process management consistent whether IWD is
started from test-runner or from the IWD() constructor.
The psk agent is now tracked, and destroyed upon __del__. This is
to fix issues where a test throws an exception and never
unregisters the agent, causing future tests to fail.
The configuration directory was also changed to /tmp by
default. This was done since all tests which used this used /tmp
anyway.
The GLib mainloop was removed, and instead put into test-runner
itself. Now any mainloop operations can use ctx.mainloop.
Before, hostapd was initialized using the wiphy_map, which has now
gone away. Instead we have a global config module which contains
a single 'ctx'. This is the central store for all test information.
This patch converts hostapd.py to look up instances by the already
initialized Hostapd objects. The interface parameter was removed
since all tests have been converted to use config= instead.
In addition HostapdCLI was changed to allow no parameters if there
is only a single hostapd instance.
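For example (the config file name is illustrative):

# Look up the instance started from a specific config file
hostapd = HostapdCLI(config='ssidCCMP.conf')

# With only one hostapd instance in the test, no argument is needed
hostapd = HostapdCLI()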
This patch completely re-writes test-runner in Python. This was done
because the existing C test-runner had some clunky workarounds and
maintaining or adding new features was starting to become a huge pain.
There were a few aspects of test-runner which continually had to
be dealt with when adding any new functionality:
* Argument parsing: Adding new arguments to test-runner wasn't so
bad, but if you wanted those arguments passed into the VM it
became a huge pain. Arguments needed to be parsed, then re-formatted
into the qemu command line, then re-parsed in a special order
(backwards) once in the VM. The burden for adding new arguments was
quite high so it was avoided (at least by me) at all costs.
* The separation between C and Python: The tests are all written in
python, but the executables, radios, and interfaces were all created
from C. The way we solved this was by encoding the required info as
environment variables, then parsing those from Python. It worked,
but it was, again, a huge pain.
* Process management: It started with all processes being launched
from C, but eventually tests required the ability to start IWD, or
kill hostapd ungracefully in order to test certain functionality.
Since the processes were tracked in C, Python had no way of
signalling that it had killed a process, and when it started one C had
no idea. This was mitigated (basically by killall), but it was
nowhere close to an elegant solution.
Re-writing test-runner in python solves all these problems and will
be much easier to maintain.
* Argument parsing: Now all arguments are forwarded automatically
to the VM. The argparse library takes care of parsing and each
argument is stored in a dictionary.
* Separation between C and Python: No more C, so no more separation.
* Process management: Python will now manage all processes. This
allows a test to kill, restart, or start a new process and not
have to remember the PID or to kill it after the test.
There are a few more important aspects of the python implementation
that should now be considered when writing new tests:
* The IWD constructor now has different default arguments. IWD
will always be started unless specified otherwise, and the configuration
directory will always be /tmp.
* Any non *.py file in the test directory will be copied to /tmp.
This avoids the need for 'tmpfs_extra_stuff' completely.
* ctrl_interface will automatically be appended to every hostapd
config. There is no need to include this in a config file from
now on.
* Test cleanup is extremely important. All tests get run in the
same interpreter now and the tests themselves are actually loaded
as python modules. This means, for example, that if you somehow kept a
reference to IWD(), any subsequent tests would not start since IWD would
still be running.
* For debugging, the test context can be printed which shows running
processes, radios, and interfaces.
Three non-native python modules were used: prettytable, termcolor, and
pyroute2:
$ pip3 install prettytable
$ pip3 install termcolor
$ pip3 install pyroute2
The tests basically remained the same with a few minor changes.
The wiphy_map and in turn hostapd_map are no longer used. This
was already partially converted a long time ago when the 'config'
parameter was added to HostapdCLI. This patch fully converts all
autotests to use 'config' rather than looking up by interface.
Some test scripts were named 'test.py' which was fine before but
the new rewrite actually loads each python test as a module. The
name 'test' is too ambiguous and causes issues due to a standard library
python module with the same name. All of these files were
renamed to 'connection_test.py'.
First, looking for DeviceState.connected gives a much better indication
of whether we are actually connected than the 'connected' property on the
network object. Second, it's good practice to also check that hostapd
sees that the station is connected.
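Roughly, using the hostapd event support described further down (the
event string is hostapd's AP-STA-CONNECTED notification, but treat the
exact usage as a sketch):

condition = 'obj.state == DeviceState.connected'
wd.wait_for_object_condition(device, condition)

# double check from the AP side as well
hostapd.wait_for_event('AP-STA-CONNECTED')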
Restarting hostapd from python was actually leaking memory and
causing the hostapd object to stay referenced in python. The
GLib timeout in wait_for_event was the ultimate cause, but this
had not come to light because no tests restarted hostapd and then
used wait_for_event.
In addition, any use of wait_for_event after a restart would
cause an exception because the event socket was never re-attached
after hostapd restarted.
Now we properly clean up the timeout in wait_for_event and
re-initialize the hostapd object on restart.
Many tests force a reauth after the initial connection. When the tests
were written there was no way of ensuring the reauth completed except
waiting (IWD.wait()). Now we can wait for hostapd events in the tests,
which is faster and more reliable than busy waiting.
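A sketch of the difference (the event string and timeout are
illustrative and depend on the handshake being tested):

# old: busy wait and hope the reauth has finished
wd.wait(10)

# new: block on the relevant hostapd control event instead
hostapd.wait_for_event('AP-STA-CONNECTED', timeout=10)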
This test was not reliably passing. Busy waiting is not really reliable,
but in this specific case it's really the only option since the blacklist
must expire based on time.
In certain cases the autoconnect portion of each subtest was connecting
to the network so fast that the check for obj.scanning was never successful
since IWD was already connected (and in turn not scanning). Since the
autoconnect path will wait for the device to be connected there really isn't
a reason to wait for any scanning conditions. The normal connect path does
need to wait for scanning though, and for this we can now use the new
scan_if_needed parameter to get_ordered_networks.
There is a very common block of code inside many autotests
which goes something like:
device.scan()
condition = 'obj.scanning'
wd.wait_for_object_condition(device, condition)
condition = 'not obj.scanning'
wd.wait_for_object_condition(device, condition)
network = device.get_ordered_network('an-ssid')
When you see the same pattern in nearly all the tests this shows
we need a helper. Basic autotests which merely check that a
connection succeeded should not need to write the same code again
and again. This code ends up being copy-pasted which can lead to
bugs.
There is also a code pattern which attempts to get ordered
networks, and if this fails it scans and tries again. This, while
not optimal, does prevent unneeded scanning by first checking if
any networks already exist.
This patch solves both the code reuse issue as well as the recovery
if get_ordered_network(s) fails. A new optional parameter was
added to get_ordered_network(s) which is False by default. If True
get_ordered_network(s) will perform a scan if the initial call
yields no networks. Tests will now be able to simply call
get_ordered_network(s) without performing a scan before hand.
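So the common case collapses to (SSID illustrative):

# scans only if the initial lookup yields nothing
network = device.get_ordered_network('an-ssid', scan_if_needed=True)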
These values were meant only to force IWD's BSS preference but
since the RSSIs were so low in some cases this caused a roam
immediately after connecting. This patch changes the RSSI values
to prevent a roam from happening.
'Connected' property of the network object is set before the connection
attempt is made and does not indicate a connection success. Therefore,
use the device's state property to identify the connection status of the device.
This test was missed by the initial refactor to use HostapdCLI with the
'config' parameter. Converting it avoids the need to iterate the hostapd
map in the actual test.
This test merely verifies that hostapd received our measurement reports
and that they were valid. Hostapd does not verify the actual
beacon report body. Really, the only way to test this is on an
actual network which makes these requests.
Hostapd has a feature where you can connect to its control socket and
receive events it generates. Currently we only send commands via this
socket.
First we open the socket (/var/run/hostapd/<iface>) and send the
ATTACH command. This tells hostapd we are ready and after this any
events will be sent over this socket.
A new API, wait_for_event, was added which takes an event string and
waits for some timeout. The glib event loop has been integrated into
this, though it's not technically async since we are selecting over a
socket which blocks. To mitigate this a small timeout was chosen for
each select call and then wrapped in a while loop which waits for the
full timeout.
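A simplified sketch of the mechanism (it leaves out the GLib
integration mentioned above, and the socket paths and helper names are
illustrative):

import os
import select
import socket
import time

def attach(ctrl_path):
    # hostapd's control interface is a unix datagram socket; the client
    # binds its own local socket, connects, then sends ATTACH.
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    sock.bind('/tmp/hostapd_client_%d' % os.getpid())
    sock.connect(ctrl_path)
    sock.send(b'ATTACH')
    return sock

def wait_for_event(sock, event, timeout=10):
    # Select with a small timeout in a loop so the overall timeout is
    # honoured without blocking forever on a quiet socket.
    deadline = time.time() + timeout
    while time.time() < deadline:
        readable, _, _ = select.select([sock], [], [], 0.25)
        if not readable:
            continue
        data = sock.recv(4096).decode('utf-8')
        if event in data:
            return data
    raise TimeoutError('timed out waiting for ' + event)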
It's difficult to know 100%, but these random test failures appeared
to be caused by two issues. One was that get_ordered_network was being
checked for None when it was actually returning a zero-length array. Because
of this the scanning block was never executed in any of the cases. This was
fixed in the previous commit. The other issue was the disconnect at
the start of the tests. The disconnect will cause all pending scans
to cancel, which appeared to cause the scanning block below to be
skipped over quickly if the timing was right. Then, afterwards,
getting a single network failed because scanning was not complete.
If no networks are found, return None rather than an empty
array. This is easier to check by the caller (and was assumed
in some cases). Also add an exception to get_ordered_network
if no network is found.
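A rough sketch of the resulting behaviour (the internals are
illustrative):

def get_ordered_network(self, name, scan_if_needed=False):
    networks = self.get_ordered_networks(scan_if_needed)

    # get_ordered_networks now returns None rather than []
    if not networks:
        raise Exception('IWD found no networks')

    for n in networks:
        if n.name == name:
            return n

    raise Exception('Network %s not found' % name)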
If the config file passed in is not found we would continue and
eventually something else would fail. Instead immediately raise an
exception to be more clear on what is actually failing.
This autotest was manually creating the .known_network.freq file so
the UUID needed to be manually generated and updated for the test
to function correctly.