Switch to github actions #1581

Merged · 49 commits merged from githubactions into master on Jan 6, 2021
Conversation

@cbaker6 (Contributor) commented Dec 16, 2020

  • Moved all builds to GitHub Actions except for Carthage (left on CircleCI because it's resource hungry).
  • Fixed some test cases for iOS so they work on Xcode 12. The changes are not backwards compatible with Xcode 11; there seem to be some differences in how Objective-C and some of the XCTest methods behave.
  • Updated the code in ParseUIDemo; some of it was deprecated a while ago.
  • Added linker flags for the ParseUI project (more here).

@cbaker6 marked this pull request as draft on December 16, 2020
@cbaker6 (Contributor, Author) commented Dec 16, 2020

@drdaz can you look at why ParseUI isn't building? My guess is it has something to do with this warning, as the previous Travis build didn't show it: https://github.com/parse-community/Parse-SDK-iOS-OSX/runs/1566382143?check_suite_focus=true#step:5:762

All of the other builds seem to be fine. Also, ParseUI doesn't give an issue during the "release" build.

@cbaker6 (Contributor, Author) commented Dec 16, 2020

@TomWFox can you remove Travis from the branch protections and replace it with Actions? This build is working the same as before, with a couple of random failures that were already present. The only new issue is the ParseUI one I mentioned earlier.

@cbaker6 (Contributor, Author) commented Dec 16, 2020

The circleci:ios and circleci:macos checks can also be removed.

@cbaker6 requested a review from a team on December 16, 2020
@TomWFox (Contributor) commented Dec 16, 2020

Done

@cbaker6 marked this pull request as ready for review on December 16, 2020
@mtrezza previously approved these changes Dec 17, 2020

@mtrezza (Member) left a comment

Sorry, just realized that Parse UI is still failing, but it seems to be something trivial?

```
⚠️  /Users/runner/work/Parse-SDK-iOS-OSX/Parse-SDK-iOS-OSX/ParseUI/Classes/SignUpViewController/PFSignUpView.m:216:9: Value stored to 'currentY' is never read
        currentY = CGRectGetMaxY(frame);
```
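For context, this is the Clang static analyzer's dead-store warning. A hypothetical illustration of the pattern it flags (not the actual PFSignUpView.m code):

```objc
#import <UIKit/UIKit.h>

// Hypothetical illustration, not the actual PFSignUpView.m code: the analyzer
// complains because the final write to currentY is never read afterwards.
static void LayoutLastFieldExample(UIView *field, CGFloat currentY, CGFloat width, CGFloat height) {
    CGRect frame = CGRectMake(0.0f, currentY, width, height);
    field.frame = frame;
    currentY = CGRectGetMaxY(frame); // dead store: nothing reads currentY after this line
}
```

The usual fix is simply to drop the final assignment (or actually use the updated value), which silences the warning without changing behaviour.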

@mtrezza self-requested a review on December 17, 2020
@mtrezza dismissed their stale review on December 17, 2020: "Parse UI still failing"

@drdaz (Member) commented Dec 17, 2020

> @drdaz can you look at why ParseUI isn't building? My guess is it has something to do with this warning, as the previous Travis build didn't show it: https://github.com/parse-community/Parse-SDK-iOS-OSX/runs/1566382143?check_suite_focus=true#step:5:762
>
> All of the other builds seem to be fine. Also, ParseUI doesn't give an issue during the "release" build.

I'll take a longer look at it this afternoon... but I think it's possible the failure relates to the FBSDK update. The script that fails grabs the latest FBSDK binary rather than a specific version, and since I think #1580 fixes some breaking changes in the naming of the packages, this smells of smoke.

As to why the previous Travis build didn't fail here... perhaps the last build was long enough ago that it predates the breaking changes?

I'll check it out.

@drdaz (Member) commented Dec 17, 2020

The build script was using a dead URL. Let's see if this helps.

@cbaker6 (Contributor, Author) commented Dec 17, 2020

> The build script was using a dead URL. Let's see if this helps.

Is it possible to make the script download the older FBSDK that we know worked? Your previous comment about Travis building with an older version sounds like a logical explanation for why this one may be failing.

@drdaz (Member) commented Dec 17, 2020

> Is it possible to make the script download the older FBSDK that we know worked? Your previous comment about Travis building with an older version sounds like a logical explanation for why this one may be failing.

I thought so too 😃. But my idea seems to have been based on the incorrect assumption that the names of the packages were changed; now that I've downloaded the newest, I can see that isn't the case. It would be nice if the script returned something more useful than 'failed'... but I'm not sure we can get better granularity in this output?

Can you reproduce the issue locally? I'll grab a copy of Xcode 11 and see if I can.

@cbaker6 (Contributor, Author) commented Dec 17, 2020

Previously I haven't been able to get tests to run locally for this repo; I was always depending on the CI. But I will try again.

@cbaker6 marked this pull request as ready for review on December 19, 2020
@cbaker6 requested a review from drdaz on December 19, 2020
@cbaker6 (Contributor, Author) commented Dec 19, 2020

> I also can't replicate the errors in CI right now... my build fails much earlier, claiming it can't find FBSDKLoginManager.h while building PFFacebookMobileAuthenticationProvider. And I can't quite see why.

I believe `carthage update` should fix this issue.

Commits added, including: "Make sure added features have some minimum amount of tests." and "…ingDataFromExtensionsSandbox and testMigratingDataFromMainSandbox"
@cbaker6 (Contributor, Author) commented Dec 19, 2020

@drdaz I suspect most of the random test failures in iOS/macOS are related to threading issues with waitForResult. It's possible XCTest is not honoring how the framework turns async calls into sync calls, which is why the random errors pop up. I made a change to a couple of the test cases above that seem to be the most common failures I've seen. Hopefully they don't pop up anymore in the future.

@cbaker6 (Contributor, Author) commented Dec 19, 2020

This one is ready for review.

Circle is having some issues reporting, but it passes here: https://app.circleci.com/pipelines/github/parse-community/Parse-SDK-iOS-OSX?branch=githubactions

@cbaker6 (Contributor, Author) commented Jan 5, 2021

@drdaz @mtrezza can one of you look this over? I don’t think the CI is running on the other PRs anymore.

@cbaker6 requested a review from a team on January 5, 2021
@drdaz (Member) commented Jan 5, 2021

I think this looks awesome! Really happy to see the bulk of the build moved to Actions... so much nicer to work with :)

Happy New Years and all that jazz 🎉

> @drdaz I suspect most of the random test failures in iOS/macOS are related to threading issues with waitForResult. It's possible XCTest is not honoring how the framework turns async calls into sync calls, which is why the random errors pop up.

It would be so cool if they went away. I'm curious as to why you think waitForResult would act funny? Not doubting it, just interested in your reasoning.

```objc
// Inaccessible bundle identifiers should throw
XCTAssertThrows(configuration.applicationGroupIdentifier = @"someBundleIdentifier");
// This is nil if no path is set
XCTAssertNil(configuration.applicationGroupIdentifier);
```
@drdaz (Member) commented Jan 5, 2021
This changes what's being tested.

Following the call stack, it looks like we should throw here if we can't access the container corresponding to the application group ID. Maybe I'm rusty, but I can't see why the previous test would ever have thrown an exception? The call at the end of the stack NSFileManager containerURLForSecurityApplicationGroupIdentifier: doesn't seem to throw?

EDIT: Ahh it was rust. PFParameterAssert should fire an exception when [PFFileManager isApplicationGroupContainerReachableForGroupIdentifier:applicationGroupIdentifier] returns false. I can't see what could cause the old test to fail (even though it did).
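For reference, a rough sketch of what a parameter-assert macro like PFParameterAssert typically expands to (the real definition lives in the SDK's assertion headers and may differ), which is why XCTAssertThrows should be able to catch the failure:

```objc
#import <Foundation/Foundation.h>

// Rough sketch only; the real PFParameterAssert may differ in details.
// The point is that a failed parameter check raises an NSException,
// which is exactly what XCTAssertThrows can catch.
#define EXAMPLE_PARAMETER_ASSERT(condition, description, ...)          \
    do {                                                               \
        if (!(condition)) {                                            \
            [NSException raise:NSInvalidArgumentException              \
                        format:(description), ##__VA_ARGS__];          \
        }                                                              \
    } while (0)
```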

@cbaker6 (Contributor, Author) replied

My guess here was that something changed in Objective-C between versions of Xcode. I think this because I tested on Xcode 11 and 12 and checked the debugger. I've seen some things switch to nil, and this was one of them. If you run this test in Xcode 11 it will fail. Thinking forward, I switched to the Xcode 12 way.

@drdaz (Member) commented Jan 5, 2021

I'm not sure what could have been changed to nil, but this change leaves the logic in setApplicationGroupIdentifier: untested:

```objc
- (void)setApplicationGroupIdentifier:(NSString *)applicationGroupIdentifier {
    PFParameterAssert(applicationGroupIdentifier == nil ||
                      [PFFileManager isApplicationGroupContainerReachableForGroupIdentifier:applicationGroupIdentifier],
                      @"ApplicationGroupContainer is unreachable. Please double check your Xcode project settings.");

    _applicationGroupIdentifier = [applicationGroupIdentifier copy];
}
```

@cbaker6 (Contributor, Author) replied

You're right. The issue is that in Xcode 12 this is allowed to be set and doesn't throw. The change I added only tests that the default is nil.

We need to find out why it isn't throwing anymore and is allowing applicationGroupIdentifier to be set.
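Once the regression is understood, a test covering both behaviours could look something like this (sketch only; `configuration` is assumed to be created the same way as in the existing test case):

```objc
- (void)testApplicationGroupIdentifierDefaultAndValidation {
    // Sketch only; assumes the fixture's `configuration` object from the existing test.
    // The default should be nil when no application group has been configured.
    XCTAssertNil(configuration.applicationGroupIdentifier);
    // Setting an inaccessible group identifier should still trip PFParameterAssert.
    XCTAssertThrows(configuration.applicationGroupIdentifier = @"someBundleIdentifier");
}
```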

@cbaker6 (Contributor, Author) replied

I think this was meant to test ParseClientConfiguration.setApplicationGroupIdentifier and not setting it directly. Only this is supposed to use something equivalent to willSet in Swift.

@drdaz (Member) commented Jan 6, 2021

Something that confused me a little here was actually which Xcode version Actions is using. I see macos-latest in the yml, but the release build passes. And release currently fails in Xcode 12 due to the conflicting architecture situation, unless there's something in this changeset that fixes it that I'm missing?

Personally I don't have an issue with letting that failing test through, but I think the repo rules won't allow us to merge with that error.

EDIT: Perhaps we can disable the test and create an issue / PR to fix it, so it doesn't disappear completely.

@cbaker6 (Contributor, Author) replied

> Something that confused me a little here was actually which Xcode version Actions is using. I see macos-latest in the yml, but the release build passes. And release currently fails in Xcode 12 due to the conflicting architecture situation, unless there's something in this changeset that fixes it that I'm missing?

Every build is using the latest Xcode 12 (the default); release is using Xcode 11 because of what you mentioned:

```yaml
env:
  DEVELOPER_DIR: ${{ env.CI_XCODE_VER }} # CI_XCODE_VER points the release job at an Xcode 11 install
```

> EDIT: Perhaps we can disable the test and create an issue / PR to fix it, so it doesn't disappear completely.

I agree with this process
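If we go that route, one low-friction way to disable the test without deleting it (sketch only; XCTSkipIf requires Xcode 11.4+, and the message wording and follow-up issue are placeholders):

```objc
- (void)testSetApplicationGroupIdentifierThrowsForUnreachableContainer {
    // Sketch of the "disable and track in a follow-up issue" approach discussed above.
    XCTSkipIf(YES, @"Setter no longer throws on Xcode 12; tracked in a follow-up issue.");
    XCTAssertThrows(configuration.applicationGroupIdentifier = @"someBundleIdentifier");
}
```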

(Member) replied

Ahh yes! I missed that.

@cbaker6 (Contributor, Author) replied

Shot in the dark, but this might be as simple as a group entitlement issue: https://developer.apple.com/documentation/bundleresources/entitlements/com_apple_security_application-groups

@cbaker6 (Contributor, Author) replied

Especially since the way entitlements are used is something that's always changing

@cbaker6 (Contributor, Author) commented Jan 5, 2021

> It would be so cool if they went away. I'm curious as to why you think waitForResult would act funny? Not doubting it, just interested in your reasoning.

I thought this at the time because switching an async call (that didn't need to be async) to a sync call resolved an issue. After more observation, I have an improved guess... The resources allocated by GitHub Actions, or any of the other environments, vary. The async tests are timed, and sometimes our tests run with a lower allocation, hitting the time limits of waitForResult. You can see this by looking at the different tests that "pass": sometimes they get really close to the time limit, sometimes they don't. This would explain the random failures. I don't know of a good way to circumvent this, as extending the time could create different issues when a test is really broken. It might just be something to be mindful of, and to rerun the tests.

@drdaz (Member) commented Jan 6, 2021

> I also can't replicate the errors in CI right now... my build fails much earlier, claiming it can't find FBSDKLoginManager.h while building PFFacebookMobileAuthenticationProvider. And I can't quite see why.
>
> I believe `carthage update` should fix this issue.

Unrelated to this PR, but this is something that's long bothered me about this repo. We don't officially use Carthage as a dependency manager for development, but our code references dependency code in the Carthage checkout folder, which happens to also contain our git submodules.

It smells funny to me.

I've never properly used Carthage for dependencies in a project, so I'm not sure if this is just 'how you do'.

@drdaz (Member) commented Jan 6, 2021

> The resources allocated by GitHub Actions, or any of the other environments, vary. The async tests are timed, and sometimes our tests run with a lower allocation, hitting the time limits of waitForResult. [...]

Looking at waitForResult, I can't see that it has an explicit time limit right now:

```objc
- (id)waitForResult:(NSError **)error {
    return [self waitForResult:error withMainThreadWarning:YES];
}

- (id)waitForResult:(NSError **)error withMainThreadWarning:(BOOL)warningEnabled {
    if (warningEnabled) {
        [self waitUntilFinished];
    } else {
        dispatch_semaphore_t semaphore = dispatch_semaphore_create(0);
        [self continueWithBlock:^id(BFTask *task) {
            dispatch_semaphore_signal(semaphore);
            return nil;
        }];
        dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
    }
    if (self.cancelled) {
        return nil;
    }
    if (self.error && error) {
        *error = self.error;
    }
    return self.result;
}
```

Where are you finding the time limit?

@cbaker6 (Contributor, Author) commented Jan 6, 2021

> Looking at waitForResult, I can't see that it has an explicit time limit right now. [...] Where are you finding the time limit?

This is what I meant:

```objc
- (void)waitForTestExpectations {
    [self waitForExpectationsWithTimeout:25.0 handler:nil];
}
```

Twitter and Facebook Utils have their own
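One option to consider (sketch only, not part of this PR; the PARSE_TEST_TIMEOUT variable is hypothetical) would be to let CI extend that timeout through an environment variable instead of hard-coding 25 seconds, though as noted above a longer timeout can also mask genuinely broken tests:

```objc
- (void)waitForTestExpectations {
    // Sketch: let resource-starved CI runners extend the timeout through a
    // hypothetical PARSE_TEST_TIMEOUT environment variable; default stays 25s.
    NSTimeInterval timeout = 25.0;
    NSString *override = NSProcessInfo.processInfo.environment[@"PARSE_TEST_TIMEOUT"];
    if (override.length > 0) {
        timeout = MAX(timeout, override.doubleValue);
    }
    [self waitForExpectationsWithTimeout:timeout handler:nil];
}
```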

@cbaker6 (Contributor, Author) commented Jan 6, 2021

I'm starting to believe even more that when the tests fail it's a timeout/resource issue. If you look at the tests that pass (I'm referring to iOS, but I believe it's the same for the others), they finish in ~6 minutes on Actions. The runs where tests fail take ~10 minutes. We can tell this isn't the tests getting stuck and inflating the times by looking at the time difference in the "Submodules and Bundle Install" step between passing and failing runs: failing runs take significantly longer before the tests even begin. This leads me to believe that those particular runs have fewer resources, which may be causing the tests to fail. It would also explain why the tests don't fail on our local machines.

I have no proof this is the case...

@drdaz (Member) commented Jan 6, 2021

Yeah I see what you mean.

The last failure in testMigratingDataFromMainSandbox (one of the 'regulars') is a crash though. There's also an ominous comment at the top of the file:

//TODO: (nlutsenko,richardross) These tests are extremely flaky, we should update and re-enable them.

I suspect there could be multiple sources of test jank 😃🎉

@cbaker6 merged commit 9771d1f into master on Jan 6, 2021
@cbaker6 deleted the githubactions branch on January 6, 2021
@cbaker6 mentioned this pull request on Mar 8, 2021