When the target is a channel the bot is not in, it would be treated as a
user; and if userCapabilityRequired is empty, the message would
unconditionally be sent to a channel.
This would usually result in an error, which would be logged,
hence the loop.
Caused by 'rss announce add' triggering headline announces, which would
delay the execution of the 'remove' commands.
Thanks to @mapreri and @Unit193 for help in reproducing the issue
and confirming the patch.
Switch to the standard setuptools import and add the suggested entries to
pyproject.toml.
Remove the --clean argument. As the comment suggests, I'm sure there is
history here, but having setup.py remove parts of the package does not
seem like something required at this point.
Also clean up the imports: remove unused ones and group them together at
the top.
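For illustration only, a minimal sketch of what the standard import looks like; the package name and options below are placeholders, not Limnoria's actual setup.py contents.

```python
# Illustrative sketch: standard setuptools import, no distutils fallback
# and no custom --clean handling. Metadata here is a placeholder.
from setuptools import setup, find_packages

setup(
    name='limnoria',
    packages=find_packages(exclude=['test', 'test.*']),
)
```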
nickFromHostmask now (legitimately) complains when it receives @ or !
at the beginning of a hostmask, so we need to strip them from the hostmask
before passing it to nickFromHostmask.
Then re-add them before calling c.addUser, because it uses them to
sort users into the right sets (ops/halfops/voices).
Additionally, this commit replaces the hardcoded set of prefix chars
(`@%+&~!`) with the one advertised in ISUPPORT when possible.
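A minimal, self-contained sketch of the idea (not the actual Limnoria code): strip the status prefixes before extracting the nick, then re-attach them so the caller can sort users into ops/halfops/voices. The default prefix set below is the old hardcoded one; a real implementation would prefer the characters advertised in ISUPPORT.

```python
DEFAULT_PREFIXES = '@%+&~!'  # fallback when ISUPPORT PREFIX is unavailable

def nick_with_prefixes(item, prefixes=DEFAULT_PREFIXES):
    stripped = item.lstrip(prefixes)
    status = item[:len(item) - len(stripped)]
    # Only the cleaned hostmask may be parsed, since nickFromHostmask
    # rejects leading '@'/'!'; splitting on '!' emulates it here.
    nick = stripped.split('!', 1)[0]
    return status + nick

assert nick_with_prefixes('@+nick!user@example.org') == '@+nick'
```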
When a driver's run() method crashes, supybot.drivers.run() marks it
as dead and sets its 'irc' attribute to None.
This would be fine for "normal" independent drivers (like Socket used
to be), because this driver would never be called again.
But now that we use select(), some other thread may hold a reference
to this driver in a select() call frame, and call the dead driver's
'_read()' method when there is data to be read from the socket.
There is already a safeguard in '_read()' for the case where the socket
can still be read from, but this safeguard was missing from _handleSocketError.
This caused the "live" driver's select() to crash, which propagated
to its run(), which caused the driver to be marked as dead, etc.
Eventually, all drivers could die, and we would end up with the dreadful
"Schedule is the only remaining driver, why do we continue to live?"
in an infinite loop.
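A minimal sketch of the missing safeguard, assuming (as described above) that a dead driver is recognizable by its 'irc' attribute being None; the class and surrounding logic are simplified stand-ins, not the actual driver code.

```python
class SocketDriverSketch:
    def __init__(self, irc):
        self.irc = irc

    def die(self):
        # Roughly what drivers.run() does when this driver's run() crashes.
        self.irc = None

    def _handleSocketError(self, e):
        if self.irc is None:
            # A select() frame in another thread may still reference this
            # driver after it has been marked dead; bail out instead of
            # letting the error propagate and kill the live driver too.
            return
        # ...normal error handling (logging, scheduling a reconnect) here.
```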
* Fix joins to many channels
If you have enough channels that the 512-byte limit on the JOIN message
is hit, Limnoria would drop the channel that put it over the limit instead
of including it in the next JOIN message. This resulted in losing one
channel for every JOIN message that pushed us over 512 bytes.
We fix this by generating the JOIN message immediately after resetting
the channels list, to ensure we include the channel that pushed us over
the limit. Then the next time through our JOIN message construction we'll
add subsequent channels without forgetting the one that pushed us over.
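A minimal sketch of the corrected batching, illustrative rather than Limnoria's actual implementation: when adding a channel would push the JOIN line past 512 bytes, flush the current batch and start the next one with that channel, so it is never dropped.

```python
MAX_LINE = 512  # IRC message size limit, including the trailing CRLF

def join_messages(channels):
    messages, batch = [], []
    for channel in channels:
        candidate = 'JOIN ' + ','.join(batch + [channel]) + '\r\n'
        if batch and len(candidate.encode('utf-8')) > MAX_LINE:
            messages.append('JOIN ' + ','.join(batch) + '\r\n')
            batch = [channel]  # keep the channel that overflowed the line
        else:
            batch.append(channel)
    if batch:
        messages.append('JOIN ' + ','.join(batch) + '\r\n')
    return messages
```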
* Add test for channel join lists
This adds a test for the issue fixed in the previous commit. We ensure
that when JOINs are split over multiple messages, we join all channels
that were part of the input list and don't forget any of them.
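A sketch of such a check, written against the join_messages() helper from the previous example rather than Limnoria's real test harness:

```python
def test_no_channel_lost():
    channels = ['#chan%03d' % i for i in range(200)]
    msgs = join_messages(channels)
    joined = [c for m in msgs for c in m.strip().split(' ', 1)[1].split(',')]
    assert len(msgs) > 1      # the join really was split over several messages
    assert joined == channels  # every input channel appears exactly once, in order
```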