build: setup slack notifications for circleci jobs #19608
Conversation
Should we instead update the failure message to express what repository the CI failure occurred in and have this message sent to the same slack channel as the FW failures currently go to?
@josephperrott What would the benefit be? If we mix various projects and branches in a single channel, that can get quite confusing I'd assume, and additionally others could get irrelevant messages for a different project. We could get around that by pinging a specific group, but that doesn't allow others to get notified (without being added to the Slack group). Right now, everyone could individually join the dedicated channel for a project they'd like to be notified about.
@devversion makes sense to me, my initial thinking was around avoiding having yet another place where we have to watch for dev-infra notifications. Personally I like the idea of centralization, but I suppose I am the exception in terms of being someone who needs to track all of the failures across the repos.
LGTM
@josephperrott Yeah. This also isn't primarily about dev-infra, because we want to get notified about actual test failures / rebase conflicts too. Having it centralized is a fair point, yeah, but that's probably really just special to dev-infra members then 😄
Whenever a CI job fails in a non-PR build, the failure will be reported to the `#components-ci-failures` Slack channel. Obviously this can become very verbose as it will report every individual CI job, but that cannot be improved at the time of writing, as CircleCI does not support post-workflow logic. Interestingly, there is also a legacy option in the old CircleCI UI for Slack notifications, but for the sake of not relying on that, we use the new recommended way of controlling notifications inside the actual CircleCI configuration.
We want to do this because patch/RC branches can quickly become red due to an unfortunate composition of cherry-picked commits. In those cases we need to be notified so that the failures can be resolved, releases can be cut without issues, and the snapshot builds are still published as expected for these publish branches (same for master).
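
For illustration, a per-job failure notification of this kind can be expressed as a reusable command in `.circleci/config.yml` that each job adds as its last step. The command name, the `SLACK_CI_FAILURES_WEBHOOK` variable, the message format, and the example `lint` job below are assumptions made for this sketch, not necessarily what this PR implements:

```yaml
version: 2.1

commands:
  # Reusable command that a job adds as its final step. The `when: on_fail`
  # condition makes the step run only if a previous step in the job failed.
  notify_slack_on_failure:
    steps:
      - run:
          name: Notifying Slack about job failure
          when: on_fail
          command: |
            # Skip pull request builds. CIRCLE_PULL_REQUEST is only set for PRs.
            if [ -n "${CIRCLE_PULL_REQUEST}" ]; then
              echo "Pull request build. Skipping Slack notification."
              exit 0
            fi
            # SLACK_CI_FAILURES_WEBHOOK is a hypothetical environment variable
            # pointing to an incoming webhook for the #components-ci-failures channel.
            curl --silent --request POST \
              --header "Content-Type: application/json" \
              --data "{\"text\": \"Job ${CIRCLE_JOB} failed on ${CIRCLE_BRANCH}: ${CIRCLE_BUILD_URL}\"}" \
              "${SLACK_CI_FAILURES_WEBHOOK}"

jobs:
  lint:
    docker:
      - image: cimg/node:lts
    steps:
      - checkout
      - run: yarn lint
      # Must be the last step so it observes failures of the preceding steps.
      - notify_slack_on_failure
```

Because the notification step lives in each job rather than at the workflow level, every failed job posts its own message, which matches the verbosity caveat described above; restricting it to builds where `CIRCLE_PULL_REQUEST` is unset keeps PR builds quiet.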