
NC | 5.18.4 backports #9027


Merged: 25 commits, May 21, 2025
9403388
NC | lifecycle | fix notifications
nadavMiz Mar 27, 2025
5bd5e55
NC | lifecycle | continue last run
nadavMiz Apr 1, 2025
58d1fad
NC | Lifecycle | Convert NCLifecycle to a class
romayalon Mar 25, 2025
3f85532
Added a fix for bucket lifecycle - if tagging is an empty array in fi…
achouhan09 Apr 7, 2025
cbf30b9
NC | replace invalid type error hard coded command types list with a …
romayalon Apr 9, 2025
6d833de
File Reader | Add line_bytes_offset
romayalon Apr 9, 2025
1c79aa0
1. Added required validations in lifecycle rules for filter, expirati…
achouhan09 Mar 27, 2025
3d6baec
Removed the ceph s3 faulty test - test_lifecycle_expiration_tags1
achouhan09 Apr 14, 2025
da6a6ed
NC | CI | fix lifecycle timeout flaky test
romayalon Apr 14, 2025
529ba07
NC | Lifecycle | GPFS ILM policies integration
romayalon Mar 25, 2025
86d4bf2
NC | lifecycle | add newer noncurrent versions rule
nadavMiz Mar 17, 2025
254cdb0
NC | add expire delete marker rule
nadavMiz Apr 9, 2025
3da3318
NC | Add non current timestamp xattr support
romayalon Apr 20, 2025
83ac850
NC | lifecycle | Add Tests in POSIX Integration Tests
shirady Apr 24, 2025
4c45373
NC | lifecycle | Add Tests in POSIX Integration Tests - Part 2
shirady Apr 27, 2025
e007e91
NC | delete object filter verification on regular delete object (dele…
romayalon Apr 8, 2025
5f5ac13
NC | Lifecycle | Adjust expire/noncurrent state properties to GPFS flow
romayalon Apr 29, 2025
9c76222
NC | lifecycle | add noncurrent days rule
nadavMiz Apr 27, 2025
3920d7d
NC | lifecycle | Add Tests in POSIX Integration Tests - Part 3
shirady Apr 29, 2025
d0ab1b0
NC | lifecycle | remove key_marker and version_marker from state when…
nadavMiz Apr 27, 2025
be5640e
NC | lifecycle | small GPFS flow fixes
romayalon May 4, 2025
61e1c70
NC | lifecycle | fix expire-delete-marker issues
nadavMiz May 5, 2025
7d2d445
NC | _parse_key_from_line remove redundant / at the beginning of the …
romayalon May 13, 2025
e1658ce
add support for reserved bucket tags
tangledbytes Apr 9, 2025
a793873
Manual add stat_ignore_enoent() to 5.18.4
romayalon May 15, 2025
29 changes: 28 additions & 1 deletion config.js
@@ -951,6 +951,33 @@ config.NSFS_GLACIER_FORCE_EXPIRE_ON_GET = false;
// interval
config.NSFS_GLACIER_MIGRATE_LOG_THRESHOLD = 50 * 1024;

/**
 * NSFS_GLACIER_RESERVED_BUCKET_TAGS defines an object of bucket tags that are reserved
 * by the system. PUT operations on them via the S3 API are restricted: they are
 * mutable only if configured to be, and only under certain conditions.
*
* @type {Record<string, {
* schema: Record<any, any> & { $id: string },
* immutable: true | false | 'if-data',
* default: any,
* event: boolean
* }>}
*
* @example
* {
 *   'deep-archive-copies': {
 *     schema: {
 *       $id: 'deep-archive-copies-schema-v0',
 *       enum: ['1', '2']
 *     }, // JSON Schema
 *     immutable: 'if-data',
 *     default: '1',
 *     event: true
 *   }
* }
*/
config.NSFS_GLACIER_RESERVED_BUCKET_TAGS = {};
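To make the shape above concrete, here is a small, purely illustrative sketch of how an entry like the `@example` could gate S3 tag values. The helper name and the plain enum check are assumptions for illustration only; the real system presumably validates through its JSON-schema machinery:

```javascript
// Hypothetical reserved-tag table, mirroring the @example entry above.
const reserved_tags = {
    'deep-archive-copies': {
        schema: { $id: 'deep-archive-copies-schema-v0', enum: ['1', '2'] },
        immutable: 'if-data',
        default: '1',
        event: true,
    },
};

// Illustrative check: a value for a reserved tag is acceptable only if the
// tag is known and the value satisfies its (enum-only, simplified) schema.
function is_valid_reserved_tag_value(tag_key, value) {
    const entry = reserved_tags[tag_key];
    if (!entry) return false;
    return entry.schema.enum.includes(value);
}

console.log(is_valid_reserved_tag_value('deep-archive-copies', '2')); // true
console.log(is_valid_reserved_tag_value('deep-archive-copies', '3')); // false
```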

// anonymous account name
config.ANONYMOUS_ACCOUNT_NAME = 'anonymous';

@@ -1030,7 +1057,7 @@ config.NC_LIFECYCLE_TZ = 'LOCAL';
config.NC_LIFECYCLE_LIST_BATCH_SIZE = 1000;
config.NC_LIFECYCLE_BUCKET_BATCH_SIZE = 10000;

config.NC_LIFECYCLE_GPFS_ILM_ENABLED = false;
config.NC_LIFECYCLE_GPFS_ILM_ENABLED = true;
////////// GPFS //////////
config.GPFS_DOWN_DELAY = 1000;

2 changes: 2 additions & 0 deletions docs/NooBaaNonContainerized/CI&Tests.md
@@ -114,6 +114,8 @@ The following is a list of `NC jest tests` files -
17. `test_nc_upgrade_manager.test.js` - Tests of the NC upgrade manager.
18. `test_cli_upgrade.test.js` - Tests of the upgrade CLI commands.
19. `test_nc_online_upgrade_cli_integrations.test.js` - Tests CLI commands during mocked config directory upgrade.
20. `test_nc_lifecycle_posix_integration.test` - Tests NC lifecycle POSIX related configuration.
(Note: this layer does not cover validation of the lifecycle configuration; that is done in `test_lifecycle.js`, which currently runs only in containerized deployment but exercises shared code)

#### nc_index.js File
* The `nc_index.js` is a file that runs several NC and NSFS mocha related tests.
12 changes: 10 additions & 2 deletions docs/NooBaaNonContainerized/Events.md
@@ -32,7 +32,10 @@ The following list includes events that indicate on a normal / successful operat
- Description: NooBaa account was deleted successfully using NooBaa CLI.

#### 4. `noobaa_bucket_created`
- Arguments: `bucket_name`
- Arguments:
- `bucket_name`
- `account_name`
- `<tag_value>` (if `event` is `true` for the reserved tag)
- Description: NooBaa bucket was created successfully using NooBaa CLI or S3.

#### 5. `noobaa_bucket_deleted`
@@ -43,6 +46,11 @@ The following list includes events that indicate on a normal / successful operat
- Arguments: `whitelist_ips`
- Description: Whitelist Server IPs updated successfully using NooBaa CLI.

#### 7. `noobaa_bucket_reserved_tag_modified`
- Arguments:
- `bucket_name`
- `<tag_value>` (if `event` is `true` for the reserved tag)
- Description: NooBaa bucket reserved tag was modified successfully using NooBaa CLI or S3.

### Error Indicating Events

@@ -219,4 +227,4 @@ The following list includes events that indicate on some sort of malfunction or
- Reasons:
- Free space in notification log dir FS is below threshold.
- Resolutions:
- Free up space in FS.
7 changes: 7 additions & 0 deletions docs/NooBaaNonContainerized/NooBaaCLI.md
@@ -376,6 +376,13 @@ noobaa-cli bucket update --name <bucket_name> [--new_name] [--owner]
- Type: Boolean
- Description: Set the bucket to force md5 ETag calculation.

- `tag`
- Type: String
- Description: Set the bucket tags; the value is a string of valid JSON. Behavior is similar to the `put-bucket-tagging` S3 API.

- `merge_tag`
- Type: String
- Description: Merge the given tags with the bucket's existing tags; the value is a string of valid JSON.
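As a rough sketch, the JSON string these flags expect encodes an array of key/value pairs, analogous to the S3 TagSet (the exact field names below are an assumption based on the `put-bucket-tagging` analogy):

```javascript
// Illustrative only: parse the kind of JSON string passed via --tag / --merge_tag.
const tag_flag = '[{"key":"environment","value":"production"},{"key":"team","value":"storage"}]';

const tags = JSON.parse(tag_flag);
// prints "environment=production" then "team=storage"
for (const t of tags) {
    console.log(`${t.key}=${t.value}`);
}
```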

### Bucket Status

Expand Up @@ -54,4 +54,5 @@ change in our repo) - stopped passing between the update of commit hash 6861c3d8
| test_get_bucket_encryption_s3 | Faulty Test | [613](https://github.com/ceph/s3-tests/issues/613) |
| test_get_bucket_encryption_kms | Faulty Test | [613](https://github.com/ceph/s3-tests/issues/613) |
| test_delete_bucket_encryption_s3 | Faulty Test | [613](https://github.com/ceph/s3-tests/issues/613) |
| test_delete_bucket_encryption_kms | Faulty Test | [613](https://github.com/ceph/s3-tests/issues/613) |
| test_lifecycle_expiration_tags1 | Faulty Test | [638](https://github.com/ceph/s3-tests/issues/638) | There can be more such tests having the same issue (`Filter` is not aligned with aws structure in bucket lifecycle configuration) |
110 changes: 87 additions & 23 deletions src/cmd/manage_nsfs.js
@@ -28,7 +28,7 @@ const { account_id_cache } = require('../sdk/accountspace_fs');
const ManageCLIError = require('../manage_nsfs/manage_nsfs_cli_errors').ManageCLIError;
const ManageCLIResponse = require('../manage_nsfs/manage_nsfs_cli_responses').ManageCLIResponse;
const manage_nsfs_glacier = require('../manage_nsfs/manage_nsfs_glacier');
const noobaa_cli_lifecycle = require('../manage_nsfs/nc_lifecycle');
const { NCLifecycle } = require('../manage_nsfs/nc_lifecycle');
const manage_nsfs_logging = require('../manage_nsfs/manage_nsfs_logging');
const noobaa_cli_diagnose = require('../manage_nsfs/diagnose');
const noobaa_cli_upgrade = require('../manage_nsfs/upgrade');
@@ -40,6 +40,8 @@ const { throw_cli_error, get_bucket_owner_account_by_name,
const manage_nsfs_validations = require('../manage_nsfs/manage_nsfs_validations');
const nc_mkm = require('../manage_nsfs/nc_master_key_manager').get_instance();
const notifications_util = require('../util/notifications_util');
const BucketSpaceFS = require('../sdk/bucketspace_fs');
const NoobaaEvent = require('../manage_nsfs/manage_nsfs_events_utils').NoobaaEvent;

let config_fs;

@@ -123,7 +125,6 @@ async function fetch_bucket_data(action, user_input) {
force_md5_etag: user_input.force_md5_etag === undefined || user_input.force_md5_etag === '' ? user_input.force_md5_etag : get_boolean_or_string_value(user_input.force_md5_etag),
notifications: user_input.notifications
};

if (user_input.bucket_policy !== undefined) {
if (typeof user_input.bucket_policy === 'string') {
// bucket_policy deletion specified with empty string ''
@@ -142,6 +143,27 @@
data = await merge_new_and_existing_config_data(data);
}

if ((action === ACTIONS.UPDATE && user_input.tag) || (action === ACTIONS.ADD)) {
const tags = JSON.parse(user_input.tag || '[]');
data.tag = BucketSpaceFS._merge_reserved_tags(
data.tag || BucketSpaceFS._default_bucket_tags(),
tags,
action === ACTIONS.ADD ? true : await _is_bucket_empty(data),
);
}

if ((action === ACTIONS.UPDATE && user_input.merge_tag) || (action === ACTIONS.ADD)) {
const merge_tags = JSON.parse(user_input.merge_tag || '[]');
data.tag = _.merge(
data.tag,
BucketSpaceFS._merge_reserved_tags(
data.tag || BucketSpaceFS._default_bucket_tags(),
merge_tags,
action === ACTIONS.ADD ? true : await _is_bucket_empty(data),
)
);
}

//if we're updating the owner, needs to override owner in file with the owner from user input.
//if we're adding a bucket, need to set its owner id field
if ((action === ACTIONS.UPDATE && user_input.owner) || (action === ACTIONS.ADD)) {
@@ -189,7 +211,14 @@ async function add_bucket(data) {
data._id = mongo_utils.mongoObjectId();
const parsed_bucket_data = await config_fs.create_bucket_config_file(data);
await set_bucker_owner(parsed_bucket_data);
return { code: ManageCLIResponse.BucketCreated, detail: parsed_bucket_data, event_arg: { bucket: data.name }};

const [reserved_tag_event_args] = BucketSpaceFS._generate_reserved_tag_event_args({}, data.tag);

return {
code: ManageCLIResponse.BucketCreated,
detail: parsed_bucket_data,
event_arg: { ...(reserved_tag_event_args || {}), bucket: data.name, account: parsed_bucket_data.bucket_owner },
};
}

/**
@@ -245,25 +274,14 @@
*/
async function delete_bucket(data, force) {
try {
const temp_dir_name = native_fs_utils.get_bucket_tmpdir_name(data._id);
const bucket_empty = await _is_bucket_empty(data);
if (!bucket_empty && !force) {
throw_cli_error(ManageCLIError.BucketDeleteForbiddenHasObjects, data.name);
}

const bucket_temp_dir_path = native_fs_utils.get_bucket_tmpdir_full_path(data.path, data._id);
// fs_contexts for bucket temp dir (storage path)
const fs_context_fs_backend = native_fs_utils.get_process_fs_context(data.fs_backend);
let entries;
try {
entries = await nb_native().fs.readdir(fs_context_fs_backend, data.path);
} catch (err) {
dbg.warn(`delete_bucket: bucket name ${data.name},` +
`got an error on readdir with path: ${data.path}`, err);
// if the bucket's path was deleted first (encounter ENOENT error) - continue deletion
if (err.code !== 'ENOENT') throw err;
}
if (entries) {
const object_entries = entries.filter(element => !element.name.endsWith(temp_dir_name));
if (object_entries.length > 0 && !force) {
throw_cli_error(ManageCLIError.BucketDeleteForbiddenHasObjects, data.name);
}
}

await native_fs_utils.folder_delete(bucket_temp_dir_path, fs_context_fs_backend, true);
await config_fs.delete_bucket_config_file(data.name);
return { code: ManageCLIResponse.BucketDeleted, detail: { name: data.name }, event_arg: { bucket: data.name } };
@@ -273,6 +291,33 @@
}
}

/**
* _is_bucket_empty returns true if the given bucket is empty
*
* @param {*} data
* @returns {Promise<boolean>}
*/
async function _is_bucket_empty(data) {
const temp_dir_name = native_fs_utils.get_bucket_tmpdir_name(data._id);
// fs_contexts for bucket temp dir (storage path)
const fs_context_fs_backend = native_fs_utils.get_process_fs_context(data.fs_backend);
let entries;
try {
entries = await nb_native().fs.readdir(fs_context_fs_backend, data.path);
} catch (err) {
dbg.warn(`_is_bucket_empty: bucket name ${data.name},` +
`got an error on readdir with path: ${data.path}`, err);
// if the bucket's path was deleted first (encounter ENOENT error) - continue deletion
if (err.code !== 'ENOENT') throw err;
}
if (entries) {
const object_entries = entries.filter(element => !element.name.endsWith(temp_dir_name));
return object_entries.length === 0;
}

return true;
}

/**
* bucket_management does the following -
* 1. fetches the bucket data if this is not a list operation
@@ -294,7 +339,24 @@
} else if (action === ACTIONS.STATUS) {
response = await get_bucket_status(data);
} else if (action === ACTIONS.UPDATE) {
response = await update_bucket(data);
const bucket_path = config_fs.get_bucket_path_by_name(user_input.name);
const bucket_lock_file = `${bucket_path}.lock`;
await native_fs_utils.lock_and_run(config_fs.fs_context, bucket_lock_file, async () => {
const prev_bucket_info = await fetch_bucket_data(action, _.omit(user_input, ['tag', 'merge_tag']));
const bucket_info = await fetch_bucket_data(action, user_input);

const tagging_object = BucketSpaceFS._objectify_tagging_arr(prev_bucket_info.tag);
const [
reserved_tag_event_args,
reserved_tag_modified,
] = BucketSpaceFS._generate_reserved_tag_event_args(tagging_object, bucket_info.tag);

response = await update_bucket(bucket_info);
if (reserved_tag_modified) {
new NoobaaEvent(NoobaaEvent.BUCKET_RESERVED_TAG_MODIFIED)
.create_event(undefined, { ...reserved_tag_event_args, bucket_name: user_input.name });
}
});
} else if (action === ACTIONS.DELETE) {
const force = get_boolean_or_string_value(user_input.force);
response = await delete_bucket(data, force);
@@ -814,9 +876,11 @@ async function lifecycle_management(args) {
const disable_service_validation = get_boolean_or_string_value(args.disable_service_validation);
const disable_runtime_validation = get_boolean_or_string_value(args.disable_runtime_validation);
const short_status = get_boolean_or_string_value(args.short_status);
const should_continue_last_run = get_boolean_or_string_value(args.continue);
try {
const options = { disable_service_validation, disable_runtime_validation, short_status };
const { should_run, lifecycle_run_status } = await noobaa_cli_lifecycle.run_lifecycle_under_lock(config_fs, options);
const options = { disable_service_validation, disable_runtime_validation, short_status, should_continue_last_run };
const nc_lifecycle = new NCLifecycle(config_fs, options);
const { should_run, lifecycle_run_status } = await nc_lifecycle.run_lifecycle_under_lock();
if (should_run) {
write_stdout_response(ManageCLIResponse.LifecycleSuccessful, lifecycle_run_status);
} else {
72 changes: 61 additions & 11 deletions src/endpoint/s3/ops/s3_put_bucket_lifecycle.js
@@ -9,6 +9,63 @@ const S3Error = require('../s3_errors').S3Error;

const true_regex = /true/i;

/**
* validate_lifecycle_rule validates lifecycle rule structure and logical constraints
*
* validations:
* - ID must be ≤ MAX_RULE_ID_LENGTH
* - Status must be "Enabled" or "Disabled"
* - multiple Filters must be under "And"
* - only one Expiration field is allowed
* - Expiration.Date must be midnight UTC format
* - AbortIncompleteMultipartUpload cannot be combined with Tags or ObjectSize filters
*
* @param {Object} rule - lifecycle rule to validate
* @throws {S3Error} - on validation failure
*/
function validate_lifecycle_rule(rule) {

if (rule.ID?.length === 1 && rule.ID[0].length > s3_const.MAX_RULE_ID_LENGTH) {
dbg.error('Rule should not have ID length exceed allowed limit of ', s3_const.MAX_RULE_ID_LENGTH, ' characters', rule);
throw new S3Error({ ...S3Error.InvalidArgument, message: `ID length should not exceed allowed limit of ${s3_const.MAX_RULE_ID_LENGTH}` });
}

if (!rule.Status || rule.Status.length !== 1 ||
(rule.Status[0] !== s3_const.LIFECYCLE_STATUS.STAT_ENABLED && rule.Status[0] !== s3_const.LIFECYCLE_STATUS.STAT_DISABLED)) {
dbg.error(`Rule should have a status value of "${s3_const.LIFECYCLE_STATUS.STAT_ENABLED}" or "${s3_const.LIFECYCLE_STATUS.STAT_DISABLED}".`, rule);
throw new S3Error(S3Error.MalformedXML);
}

if (rule.Filter?.[0] && Object.keys(rule.Filter[0]).length > 1 && !rule.Filter[0]?.And) {
dbg.error('Rule should combine multiple filters using "And"', rule);
throw new S3Error(S3Error.MalformedXML);
}

if (rule.Expiration?.[0] && Object.keys(rule.Expiration[0]).length > 1) {
dbg.error('Rule should specify only one expiration field: Days, Date, or ExpiredObjectDeleteMarker', rule);
throw new S3Error(S3Error.MalformedXML);
}

if (rule.Expiration?.length === 1 && rule.Expiration[0]?.Date) {
const date = new Date(rule.Expiration[0].Date[0]);
if (isNaN(date.getTime()) || date.getTime() !== Date.UTC(date.getUTCFullYear(), date.getUTCMonth(), date.getUTCDate())) {
dbg.error('Date value must conform to the ISO 8601 format and at midnight UTC (00:00:00). Provided:', rule.Expiration[0].Date[0]);
throw new S3Error({ ...S3Error.InvalidArgument, message: "'Date' must be at midnight GMT" });
}
}

if (rule.AbortIncompleteMultipartUpload?.length === 1 && rule.Filter?.length === 1) {
if (rule.Filter[0]?.Tag) {
dbg.error('Rule should not include AbortIncompleteMultipartUpload with Tags', rule);
throw new S3Error({ ...S3Error.InvalidArgument, message: 'AbortIncompleteMultipartUpload cannot be specified with Tags' });
}
if (rule.Filter[0]?.ObjectSizeGreaterThan || rule.Filter[0]?.ObjectSizeLessThan) {
dbg.error('Rule should not include AbortIncompleteMultipartUpload with Object Size', rule);
throw new S3Error({ ...S3Error.InvalidArgument, message: 'AbortIncompleteMultipartUpload cannot be specified with Object Size' });
}
}
}
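The `Expiration.Date` rule above reduces to a single predicate; extracted here as a standalone sketch:

```javascript
// True only when the string parses and the resulting instant falls exactly
// on midnight UTC (00:00:00.000), as required by the validation above.
function is_midnight_utc(date_str) {
    const date = new Date(date_str);
    if (isNaN(date.getTime())) return false;
    return date.getTime() === Date.UTC(date.getUTCFullYear(), date.getUTCMonth(), date.getUTCDate());
}

console.log(is_midnight_utc('2025-05-21T00:00:00Z')); // true
console.log(is_midnight_utc('2025-05-21T15:30:00Z')); // false
```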

// parse lifecycle rule filter
function parse_filter(filter) {
const current_rule_filter = {};
@@ -89,13 +146,11 @@ async function put_bucket_lifecycle(req) {
filter: {},
};

// validate rule
validate_lifecycle_rule(rule);

if (rule.ID?.length === 1) {
if (rule.ID[0].length > s3_const.MAX_RULE_ID_LENGTH) {
dbg.error('Rule should not have ID length exceed allowed limit of ', s3_const.MAX_RULE_ID_LENGTH, ' characters', rule);
throw new S3Error({ ...S3Error.InvalidArgument, message: `ID length should not exceed allowed limit of ${s3_const.MAX_RULE_ID_LENGTH}` });
} else {
current_rule.id = rule.ID[0];
}
current_rule.id = rule.ID[0];
} else {
// Generate a random ID if missing
current_rule.id = crypto.randomUUID();
@@ -108,11 +163,6 @@
}
id_set.add(current_rule.id);

if (!rule.Status || rule.Status.length !== 1 ||
(rule.Status[0] !== s3_const.LIFECYCLE_STATUS.STAT_ENABLED && rule.Status[0] !== s3_const.LIFECYCLE_STATUS.STAT_DISABLED)) {
dbg.error(`Rule should have a status value of "${s3_const.LIFECYCLE_STATUS.STAT_ENABLED}" or "${s3_const.LIFECYCLE_STATUS.STAT_DISABLED}".`, rule);
throw new S3Error(S3Error.MalformedXML);
}
current_rule.status = rule.Status[0];

if (rule.Prefix) {