Allow manual specification of filewatcher behavior #29

Merged
dfangl merged 2 commits into localstack from poller-type-config on Feb 6, 2024

Conversation

@dfangl dfangl (Member) commented Jan 30, 2024

Motivation

As seen in PR #28, there are some performance problems with the polling filewatcher.

It seems that in newer macOS versions, fsnotify events are available as well. However, to avoid a breaking change, and until we can verify which Docker Desktop versions and settings enable this, we are introducing an environment variable for now so users can manually override the automatic decision.

This will also be useful once we change the default behavior (e.g. with #28), to give users an option to go back to the current behavior.

Changes

  • Introduce a new LOCALSTACK_FILE_WATCHER_STRATEGY config with two possible options: event and polling. event uses file system events, while polling uses our poller, as currently used on Docker Desktop. Specifying this option overrides the automatic decision; if it is empty, the current behavior is preserved. If it is set to something other than event or polling, the init binary will crash to indicate an incorrect configuration (see the sketch below).
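
For illustration, here is a minimal, self-contained sketch of how this validation could look. It is not the actual init binary code, and resolveWatcherStrategy is a hypothetical helper name:

package main

import (
	"log"
	"os"
)

// resolveWatcherStrategy reads LOCALSTACK_FILE_WATCHER_STRATEGY and accepts only
// "event", "polling", or an empty value (which keeps the automatic default).
func resolveWatcherStrategy() string {
	strategy := os.Getenv("LOCALSTACK_FILE_WATCHER_STRATEGY")
	switch strategy {
	case "", "event", "polling":
		return strategy
	default:
		// Mirrors the behavior described above: an invalid value crashes the init binary.
		log.Fatalf("Invalid filewatcher strategy %s. Only event and polling are allowed.\n", strategy)
		return "" // unreachable, log.Fatalf exits the process
	}
}

func main() {
	log.Printf("file watcher strategy: %q", resolveWatcherStrategy())
}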

@dominikschubert dominikschubert (Member) left a comment

LGTM with some minor nits. Let's give users a few options to fine-tune things :)

-func NewChangeListener(debouncingInterval time.Duration) (*ChangeListener, error) {
-	watcher, err := filenotify.New(200 * time.Millisecond)
+func NewChangeListener(debouncingInterval time.Duration, fileWatcher string) (*ChangeListener, error) {
+	watcher, err := filenotify.New(200*time.Millisecond, fileWatcher)
Member

Should we maybe make the interval configurable as well while we're at it?

Member Author

I am kind of conflicted. On the one hand, it might make sense, e.g. for users who want to increase the interval when watching huge folders. On the other hand, I am not sure how much impact it would really have.
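
For reference, a hypothetical call site for the new signature; the strategy lookup, interval value, and error handling shown here are illustrative and not taken from the PR:

strategy := os.Getenv("LOCALSTACK_FILE_WATCHER_STRATEGY") // "", "event", or "polling"
listener, err := NewChangeListener(500*time.Millisecond, strategy)
if err != nil {
	log.Fatalf("could not create change listener: %s", err)
}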

@dfangl dfangl merged commit ba28a02 into localstack Feb 6, 2024
@dfangl dfangl deleted the poller-type-config branch February 6, 2024 09:52
@joe4dev joe4dev (Member) left a comment

LGTM 👍

Tested with tests.aws.services.lambda_.test_lambda_developer_tools.TestHotReloading.test_hot_reloading on macOS:

  • Both polling and event work (e.g., LAMBDA_DOCKER_FLAGS=-e LOCALSTACK_FILE_WATCHER_STRATEGY=event)
  • A wrong configuration does not crash the binary but just hangs 😒
  • I needed to bump the sleeps from 0.6 to 1 second to make the test pass on macOS

} else if fileWatcherStrategy == "polling" {
	return NewPollingWatcher(interval), nil
} else {
	log.Fatalf("Invalid filewatcher strategy %s. Only event and polling are allowed.\n", fileWatcherStrategy)
Member

Would it make sense to crash the init binary rather than just log here?
Currently, it just hangs until a timeout kills the container :(

Member Author

A fatal log should indeed crash the init binary
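
For context, Go's log.Fatalf writes the message and then calls os.Exit(1), so the process terminates with a non-zero exit status. A minimal standalone demonstration (not part of the PR):

package main

import "log"

func main() {
	// log.Fatalf is log.Printf followed by os.Exit(1): the message is written
	// and the process exits immediately with status 1.
	log.Fatalf("Invalid filewatcher strategy %s. Only event and polling are allowed.\n", "something-else")
}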

Member

I see.
Then it's likely one of the error cases we do not (yet) handle actively in the init. Reporting back to LocalStack would be nice in the future to short-circuit the timeout (which is likely high when debugging).
