celery: fix send_email to work with JSON encoding (Issue #363)
A long time ago, c935bcaf7086 introduced an optional User object parameter to the send_email task and used the computed full_name_or_username property. Due to the magic of pickle, that also worked when using Celery to run the task asynchronously.
Now, Celery 4 changed the default encoding from pickle to JSON, which we anticipated in e539db6cc0da. That broke send_email in some cases, for example when a user comments on another user's changeset.
Fixed by passing the "From" name as string instead of passing the whole User object.
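A sketch of the shape of the fix (the helper name and signature are illustrative assumptions, not the actual Kallithea API): only a plain string crosses the Celery boundary, and a plain string survives JSON encoding where a User object did not.

```python
def format_from_header(envelope_from, from_name=None):
    """Build the "From" header from a plain string instead of a User object.

    A string round-trips through Celery's JSON encoding; a User object
    only worked via pickle. (Hypothetical helper for illustration.)
    """
    if from_name:
        return '"%s" <%s>' % (from_name, envelope_from)
    return envelope_from
```

The caller computes user.full_name_or_username before invoking the task, so only JSON-friendly types are passed to Celery.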
front-end: add eslint-plugin-html as a dependency, as introduced in .eslintrc.js in 4d36432bf705
Usage example:

hg up -cr.
sed -i -e 's/\${[^{}]*\({[^{}]*}[^{}]*\)*}/""/g' -e 's/%\(if\|else\|endif\|for\|endfor\)\>.*//g' -e 's/##.*//g' $(hg loc 'kallithea/templates/**.html')
vim kallithea/templates/pullrequests/pullrequest.html +139  # blank out the multi line 'var url = ${}'
( cd kallithea/front-end/ && node_modules/.bin/eslint $(hg loc 'kallithea/templates/**.html')) | tee l
hg up -Cr.
( sed -n 's/^function \([^(]*\).*/\1/gp' kallithea/public/js/base.js
  sed -n 's/.* var \([^ ]*\) =.*/\1/p' kallithea/templates/base/root.html
  echo pyroutes; echo CodeMirror; echo Select2; echo BranchRenderer
) | while read x; do echo $x; sed -i "/error .$x. is not defined /d" l; done
cat l
auth: also use safe password hashing on Windows using bcrypt
For unknown reasons, Kallithea used different hashing algorithms on Windows and POSIX, perhaps because of problems with bcrypt on Windows in the past. That should no longer be an issue, and it doesn't make sense to have different security properties on different platforms.
While changing to bcrypt everywhere, also remain backwards compatible and accept existing passwords hashed with sha256 - both on Windows (where it used to be used), and elsewhere (in case a system has been migrated from Windows to Unix).
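A minimal sketch of the described scheme (function name and hash layout are assumptions; the real code uses the third-party bcrypt package):

```python
import hashlib

def check_password(password, hashed):
    # Legacy scheme (previously used on Windows): plain sha256 hex digest.
    # A sha256 hex digest is always 64 hex characters, which lets us
    # recognize old hashes and stay backwards compatible.
    if len(hashed) == 64 and all(c in '0123456789abcdef' for c in hashed):
        return hashlib.sha256(password.encode('utf-8')).hexdigest() == hashed
    # New scheme everywhere: bcrypt. The import is deferred so the legacy
    # path works even without the third-party package installed.
    import bcrypt
    return bcrypt.checkpw(password.encode('utf-8'), hashed.encode('utf-8'))
```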
The "ROUTES" prefix might refer to "CELERY_ROUTES" ... but it doesn't take a simple string list anyway, so there is no point in treating it as a list value.
Kallithea only uses Celery results when repos are created or forked and user browsers are reloading pages to poll for completion. amqp seems like unnecessary complexity for that use case.
SQLite does, however, seem like a minimal but adequate solution for the Kallithea use case in most setups.
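For example, Celery's SQLAlchemy result backend can point at a local SQLite file (the exact key name and path here are assumptions about the generated .ini, not the actual generated content):

```ini
celery.result.backend = db+sqlite:///celery-results.sqlite
```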
celery: set default config values in code and remove them from the generated .ini
It is hard to imagine any reason the user should change celery.imports. And if it ever should change, we want it controlled in code - not left stale in user-controlled config files.
Everybody should just use json and there is no reason anybody should specify that in the .ini ... and it will be the default in Celery 4.
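A sketch of setting defaults in code so stale values can't linger in the generated .ini (keys and values here are illustrative assumptions):

```python
# values parsed from the user's .ini file (illustrative):
config = {'celery.broker.url': 'redis://localhost'}

# defaults set in code so they need not appear in the generated .ini;
# an explicit .ini value would still win:
config.setdefault('celery.task.serializer', 'json')  # Celery 4's default anyway

# settings we want controlled in code, never left stale in the .ini:
config['celery.imports'] = 'kallithea.lib.celerylib.tasks'
```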
- but it is quite noisy. Some problems do, however, stand out as relevant to fix.
Script sections in HTML files can also be checked after removing mako markup:
hg up -cr.
sed -i -e 's/\${[^{}]*\({[^{}]*}[^{}]*\)*}/""/g' -e 's/%\(if\|else\|endif\|for\|endfor\)\>.*//g' -e 's/##.*//g' $(hg loc 'kallithea/templates/**.html')
vim kallithea/templates/pullrequests/pullrequest.html +139  # blank out the multi line 'var url = ${}'
./node_modules/.bin/eslint $(hg loc 'kallithea/templates/**.html')
hg up -Cr.
- but that is even more noisy.
The noise is mainly due to eslint not knowing that everything runs together, with kallithea/templates/base/root.html defining global variables and kallithea/public/js/base.js using these and defining functions, which are then used "everywhere". There might be solutions to that - this is a starting point.
ui: show toggleable "Follow" status in repo groups' repo list
It makes sense to show Follow status next to repo names in the repo list, and it is a meaningful and efficient bulk operation to toggle Follow status there.
Clicking on the (Un)Follow 'heart' will toggle the caller's follow status for that repo.
The repo model already has layering violations - expand on them to compute the follow status of the current user.
(Changeset was cherry picked and modified by Mads Kiilerich.)
cleanup: run pyflakes as a part of scripts/run-all-cleanup
pyflakes has no usable configuration options, so create a small wrapper script. Instead of having two wrapper scripts (with one being almost nothing and the other containing configuration), just keep it simple and use one combined.
It has "always" been wrong, and we have thus been using the default of "pickle" - not "json" as we thought.
I doubt this change has any immediate visible impact. I guess it only means that it will use json for results instead of pickle. That might be more stable and debuggable.
Note: celery_config will uppercase the config settings and replace '.' with '_', so 'celery.result.serializer' turns into 'CELERY_RESULT_SERIALIZER'.
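The described transformation can be sketched as (hypothetical helper name, for illustration only):

```python
def celery_conf_key(ini_key):
    # uppercase the setting name and replace '.' with '_'
    return ini_key.replace('.', '_').upper()
```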
mails: make error reporting by mail work with secure mail servers
Even with Kallithea mails working, TurboGears / backlash error reporting would fail like:
Error while reporting exception with <backlash.tracing.reporters.mail.EmailReporter object at 0x7f8f986f8710>
Traceback (most recent call last):
  File ".../env/lib/python3.7/site-packages/backlash/tracing/reporters/mail.py", line 49, in report
    result = server.sendmail(self.from_address, self.error_email, msg.as_string())
  File "/usr/lib64/python3.7/smtplib.py", line 867, in sendmail
    raise SMTPSenderRefused(code, resp, from_addr)
smtplib.SMTPSenderRefused: (530, b'5.7.0 Must issue a STARTTLS command first.', 'kallithea@example.com')
The reasoning in hg _fix_path is no longer correct.
We only need it for stripping the trailing '/' ... but we would rather be explicit about that. (But it is also questionable if we actually want it at all - we should just pass the right data and fail on wrong data.)
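Being explicit about the trailing '/' could look like this sketch (a hypothetical helper, not the actual replacement code):

```python
def strip_trailing_slash(path):
    # only strip a single trailing '/', and leave everything else alone
    if path.endswith('/'):
        return path[:-1]
    return path
```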
.../site-packages/_pytest/doctest.py:381: in _mock_aware_unwrap
    return real_unwrap(obj, stop=_is_mocked)
/usr/lib64/python3.7/inspect.py:511: in unwrap
    while _is_wrapper(func):
/usr/lib64/python3.7/inspect.py:505: in _is_wrapper
    return hasattr(f, '__wrapped__') and not stop(f)
.../site-packages/tg/support/objectproxy.py:19: in __getattr__
    return getattr(self._current_obj(), attr)
.../site-packages/tg/request_local.py:240: in _current_obj
    return getattr(context, self.name)
.../site-packages/tg/support/objectproxy.py:19: in __getattr__
    return getattr(self._current_obj(), attr)
.../site-packages/tg/support/registry.py:72: in _current_obj
    'thread' % self.____name__)
E   TypeError: No object (name: context) has been registered for this thread
pytest's doctest support is (in _mock_aware_unwrap) using py3 inspect.
Inside inspect, _is_wrapper will do an innocent-looking: hasattr(f, '__wrapped__')
But if the code under test has an (unused) import of a tg context (such as tg.request), it is no longer so innocent. tg will throw: TypeError: No object (name: context) has been registered for this thread (which in py2 would have been caught by hasattr, but not in py3).
pytest will thus fail already in the "collecting ..." phase.
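The py2/py3 difference can be demonstrated without tg, using a minimal stand-in for the object proxy:

```python
class BrokenProxy:
    """Stand-in for tg's object proxy with no registered context."""
    def __getattr__(self, attr):
        raise TypeError('No object (name: context) has been registered'
                        ' for this thread')

proxy = BrokenProxy()
try:
    # py2's hasattr swallowed *any* exception; py3's hasattr only catches
    # AttributeError, so the TypeError escapes here
    hasattr(proxy, '__wrapped__')
    outcome = 'swallowed'
except TypeError:
    outcome = 'escaped'
```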
To work around that, use the hack of pushing a tg context in the top level pytest_configure.
py3: remove safe_unicode in places where it is no longer needed because all strings (except bytes) already *are* unicode strings
(The remaining safe_unicode calls are still needed and can't just be removed, generally because in these cases we still have to convert from bytes to unicode strings.)
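For reference, a simplified sketch of what the remaining calls still have to do (the real helper may handle more encodings and error strategies):

```python
def safe_unicode(s):
    # on py3, str already *is* a unicode string - only bytes need decoding
    if isinstance(s, bytes):
        return s.decode('utf-8', 'replace')
    return s
```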
py3: fix kallithea-cli ini parsing after ConfigParser got strict=True
ConfigParser in py3 defaults to strict=True and would thus reject our ssh logging hack of renaming config sections, which causes duplicate section names.
Fortunately, fileConfig now also allows passing a ConfigParser, and we can avoid using io.StringIO.
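A sketch of the approach with a minimal logging config (the real kallithea-cli ini is more involved):

```python
import configparser
import logging.config

# strict=False tolerates the duplicate section names that the ssh logging
# hack produces; py3's default is strict=True
parser = configparser.ConfigParser(strict=False)
parser.read_string('''
[loggers]
keys = root

[handlers]
keys = console

[formatters]
keys = generic

[logger_root]
level = WARNING
handlers = console

[handler_console]
class = StreamHandler
args = (sys.stderr,)
formatter = generic

[formatter_generic]
format = %(message)s
''')

# since Python 3.4, fileConfig accepts a ConfigParser instance directly,
# so no io.StringIO round-trip through a string is needed
logging.config.fileConfig(parser, disable_existing_loggers=False)
```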
py3: update ssh for base64.b64decode raising binascii.Error instead of TypeError
A command like:
  python -c 'import base64; base64.b64decode("QQ")'
would fail in Python 2 with:
  TypeError: Incorrect padding
but in Python 3 with:
  binascii.Error: Incorrect padding
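A compat wrapper catching both exception types could look like this (hypothetical helper, named for illustration):

```python
import base64
import binascii

def b64decode_or_none(s):
    # binascii.Error on py3; TypeError kept for py2 compatibility
    try:
        return base64.b64decode(s)
    except (binascii.Error, TypeError):
        return None
```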
There is no point in creating dicts and then logging them as json. Also, json can't handle py3 bytes and it would fail on py3. (ext_json could perhaps handle bytes, but it seems better to keep it simple and explicit.)
If the default repr isn't good enough, it would be better to use pprint. But repr is good enough.
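The difference is easy to demonstrate (the dict values here are illustrative):

```python
import json

params = {'path': b'/srv/repo', 'action': 'pull'}  # bytes are common on py3

# json.dumps cannot serialize bytes and raises TypeError:
try:
    json.dumps(params)
    json_ok = True
except TypeError:
    json_ok = False

# the default repr handles bytes just fine and is good enough for a log line:
log_line = 'action params: %r' % (params,)
```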