
Fixed bug with fiat value displayed as 0 for some tokens in receive field on Swap Page #21402

Merged: vkjr merged 1 commit into develop from swap-fiat-value-bug on Oct 10, 2024

Conversation

@vkjr (Contributor) commented Oct 8, 2024

fixes #21387

Summary

The fiat value is now calculated for any asset selected as a swap target.

Screen.Recording.2024-10-08.at.17.35.23.mov

Review notes

The underlying cause of the problem is a discrepancy in the token data structure between scenarios.
When selecting an asset, the component asset-list/view is displayed. That component takes tokens from different sources: it builds a list of "my tokens" from the :wallet/accounts key of the re-frame db, and a list of "popular tokens" from the :wallet/tokens key via the :wallet/tokens-filtered subscription.
The fiat price calculation assumes that the key :market-values-per-currency exists in every token data structure. It exists for "my tokens" but not for "popular tokens".

As a quick pre-release fix, I added the missing key to the token maps returned for "popular tokens".

In the long term, such an issue raises a question: should we unify these data structures and validate their contents, for example with Malli? Discussion is very welcome.
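
For reference, a minimal sketch of the shape of the fix. The subscription and db keys are the ones named above, while the namespace, filter logic, and empty-map fallback are illustrative rather than the exact code from the commit:

  (ns status-im.sketch.tokens-fix
    (:require [clojure.string :as string]
              [re-frame.core :as rf]))

  ;; Sketch of the quick fix: when building the "popular tokens" list, make
  ;; sure every token map carries the :market-values-per-currency key that
  ;; the fiat-price calculation expects.
  (rf/reg-sub
   :wallet/tokens-filtered
   :<- [:wallet/tokens]
   (fn [tokens [_ query]]
     (->> tokens
          (filter #(string/includes? (:symbol %) query))
          (map (fn [token]
                 (update token :market-values-per-currency #(or % {})))))))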

Platforms

  • Android
  • iOS

Areas that may be impacted

Functional
  • wallet / transactions

Steps to test

  • Recover a user with available assets
  • Go to swap
  • Enter the available amount of assets in the "Assets to Pay" field
  • Select some asset from the "popular" section of the assets list
  • Make sure the fiat value is calculated correctly

status: ready

@status-im-auto (Member) commented Oct 8, 2024

Jenkins Builds

| Commit | #️⃣ | Finished (UTC) | Duration | Platform | Result |
| --- | --- | --- | --- | --- | --- |
| ✔️ 96cf388 | #2 | 2024-10-08 16:58:57 | ~4 min | tests | 📄 log |
| ✔️ 96cf388 | #2 | 2024-10-08 17:01:58 | ~7 min | android-e2e | 🤖 apk 📲 |
| ✔️ 96cf388 | #2 | 2024-10-08 17:02:25 | ~8 min | android | 🤖 apk 📲 |
| ✔️ 96cf388 | #2 | 2024-10-08 17:03:13 | ~8 min | ios | 📱 ipa 📲 |
| ✔️ 6726721 | #3 | 2024-10-09 10:31:26 | ~4 min | tests | 📄 log |
| ✔️ 6726721 | #3 | 2024-10-09 10:34:25 | ~7 min | android | 🤖 apk 📲 |
| ✔️ 6726721 | #3 | 2024-10-09 10:34:58 | ~8 min | android-e2e | 🤖 apk 📲 |
| ✔️ 6726721 | #3 | 2024-10-09 10:35:32 | ~8 min | ios | 📱 ipa 📲 |
| ✔️ 0dca690 | #4 | 2024-10-10 10:05:35 | ~4 min | tests | 📄 log |
| ✔️ 0dca690 | #4 | 2024-10-10 10:06:58 | ~6 min | android-e2e | 🤖 apk 📲 |
| ✔️ 0dca690 | #4 | 2024-10-10 10:09:09 | ~8 min | android | 🤖 apk 📲 |
| ✔️ 0dca690 | #4 | 2024-10-10 10:09:50 | ~9 min | ios | 📱 ipa 📲 |

@ulisesmac (Contributor) left a comment

Thank you for the PR @vkjr

Looks good 👍

@clauxx (Member) left a comment

Thanks, looks good!

In the long term, such an issue raises a question: should we unify these data structures and validate their contents, for example with Malli? Discussion is very welcome.

This is a very good point IMO. It's difficult to know what a token is, because it comes in different shapes depending on where you look. In one example I noticed, we're assoc-ing keys onto a token and immediately passing it to a util function which uses one of the new keys. This means it will only work on a subset of tokens, and it's not always clear whether the token you have will work with it. It would be great to have some primitive data structures for things like token, network, transaction, etc. Could also be a topic for the next DX call.
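
To illustrate the pattern, a contrived sketch (all names hypothetical):

  ;; The util function silently depends on a key that only exists because
  ;; the caller assoc'ed it in beforehand.
  (defn fiat-value ;; works only for tokens that carry :market-values
    [token]
    (get-in token [:market-values :usd]))

  (defn with-market-values
    [token prices]
    (assoc token :market-values (get prices (:symbol token))))

  ;; (fiat-value (with-market-values token prices)) ;; => a number
  ;; (fiat-value token)                             ;; => nil for a "plain" token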

@Horupa-Olena Horupa-Olena self-assigned this Oct 9, 2024
@status-im-auto (Member) commented

100% of end-to-end tests have passed

Total executed tests: 7
Failed tests: 0
Expected to fail tests: 0
Passed tests: 7

Passed tests (7)


Class TestWalletOneDevice:

1. test_wallet_add_remove_regular_account, id: 727231

Class TestCommunityMultipleDeviceMerged:

1. test_community_message_edit, id: 702843

Class TestWalletMultipleDevice:

1. test_wallet_send_asset_from_drawer, id: 727230
2. test_wallet_send_eth, id: 727229

Class TestCommunityOneDeviceMerged:

1. test_community_copy_and_paste_message_in_chat_input, id: 702742
2. test_restore_multiaccount_with_waku_backup_remove_switch, id: 703133

Class TestOneToOneChatMultipleSharedDevicesNewUi:

1. test_1_1_chat_non_latin_messages_stack_update_profile_photo, id: 702745

@vkjr (Contributor, Author) commented Oct 9, 2024

@clauxx, my thoughts exactly! I already added it to the list of topics for discussion.

Review thread on the diff (partial context; the comment below is anchored on the let line):

    (assoc :market-values
           (get market-values-per-token (:symbol token)))
    (assoc :details (get details-per-token (:symbol token))))))
    (let [token-symbol (keyword (:symbol token))
@ilmotta (Contributor) commented

I agree with everybody mentioning that malli can be a powerful ally. But more important than that, to me, is that we are not even following the basic practice and guideline of testing non-trivial subs.

The whole graph of non-trivial subs should be tested with deftest-sub, so that changes in the db layer can be caught by different tests. The tests also provide an automatic runtime verification layer to make sure schemas are correct.

The tests would also provide live examples of data to play with in the REPL. Overall, my comment is that malli schemas can help, but it would be good to first sort out the basics.
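
As a sketch of what such a test could look like (assuming the repo's deftest-sub helper binds sub to a function returning a derefable subscription; the fixture data and subscription arguments are hypothetical):

  (ns status-im.subs.wallet.tokens-test
    (:require [cljs.test :refer [is testing]]
              [re-frame.db :as rf-db]
              [test-helpers.unit :as h]))

  ;; Assumption: deftest-sub resets app-db around the test body and `sub`
  ;; returns a derefable subscription for the given subscription vector.
  (h/deftest-sub :wallet/tokens-filtered
    [sub]
    (testing "popular tokens carry :market-values-per-currency"
      (swap! rf-db/app-db assoc :wallet/tokens [{:symbol "SNT"}])
      (is (every? :market-values-per-currency
                  @(sub [:wallet/tokens-filtered ""])))))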

@vkjr (Contributor, Author) commented

@ilmotta, I fully agree with the point on testing.

One thing I can think of is that the artificial data we use as an rf db placeholder in tests can become outdated when the db structure is modified. If that happens, the subs tests will stay green even though the subs fail on a real rf db with the modified structure.

I wonder if we could consider having Malli schemas for the rf db content. If we had them, we could validate in tests that the data hasn't degraded. The data could also be generated automatically from the schemas when you first write a test, then modified for the test's purpose.
Wdyt?
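
A minimal sketch of the idea with malli (the token schema here is a made-up example, not an agreed shape):

  (ns schema.wallet-sketch
    (:require [malli.core :as m]))

  ;; Hypothetical schema for a token as stored in the rf db.
  (def token-schema
    [:map
     [:symbol :string]
     [:market-values-per-currency
      [:map-of :keyword [:map [:price number?]]]]])

  ;; In a sub test, assert that placeholder data still matches the schema,
  ;; so fixtures can't silently go stale when the db structure changes.
  (assert (m/validate token-schema
                      {:symbol "ETH"
                       :market-values-per-currency
                       {:usd {:price 2400.0}}}))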

@ilmotta (Contributor) commented

@vkjr @clauxx @seanstrom @ulisesmac I think it's about time we explore schemas for certain parts of the app-db. We've had malli in the repo for a long time already, and Malli is a proven solution in many Clojure projects. We don't need to use it for the entire db; I wouldn't, tbh.

For example, we might want to spec only the parts that change very often, to give us some protection in the middle of the battle. Or we might do the opposite and play it safe for now, adding schemas only for the parts of the app-db that are stable and proven. Adding schemas later, when the implementation is done, is like adding tests later: their value is greatly diminished. So the time is ripe!

I think the only way to protect ourselves against stale schemas and stale data is to have quick integration-level tests that call the RPC endpoints and validate the responses using the same schemas we use for unit tests and for development. If we connect all these dots, we get a powerful system with runtime validation.

On a tangent, I shared with @seanstrom these days a dirty proof of concept to generate malli schemas from any Go struct. This is 100% possible. There's a chance we can use go generate to create EDN files with the schemas and help us in this endeavor to add schemas at the boundaries, which is where the Clojure masters recommend we focus.

@ulisesmac (Contributor) commented

Validating certain inputs to the app-db sounds good, but it's probably not enough. Shouldn't we instead validate the subscriptions' output? IMO validating some basic subs would be better.

Again, I agree on not validating everything 👍

@ilmotta (Contributor) commented

Validating certain inputs to the app-db sounds good, but it's probably not enough.

@ulisesmac I see schemas for the app-db causing a positive, compounding effect on the rest of the system.

The first level is data arriving from status-go. That data usually ends up being written to the app-db and becomes the result of root subscriptions. If we nail this part, we have a solid foundation to build on. Layer-2 subs benefit immediately from app-db schemas. Layer-3 subs we need to categorize into at least two groups. First-degree layer-3 subs (the ones using extractors) would also benefit from app-db schemas because their inputs would already be spec'ed. Many of these subs won't need new schemas for their output because they do simple transformations, like filtering.

Second-degree layer-3 subs (is there a better name for this?) tend to be the most complicated in the app; they are the ones whose inputs are other layer-3 subs. These subs would also benefit from app-db schemas. It's hard to explain, but here's my take: all the data in these subs is invariably derived from root subscriptions. If we have schemas for the app-db, we will be able to heavily reuse parts of them when describing input signals and outputs. For example, a layer-3 sub might add two keys to a map, and we could reuse a schema from the app-db by using malli's :merge; or sometimes layer-3 subs just group and filter the data, and the results are still the same "things" we spec'ed in the app-db. I see reusability everywhere.
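
For example, reuse with malli could look roughly like this (token-schema stands in for a real app-db schema):

  (ns schema.subs-sketch
    (:require [malli.util :as mu]))

  (def token-schema
    [:map [:symbol :string]])

  ;; A layer-3 sub that adds two keys to each token can describe its output
  ;; by merging the app-db schema instead of redefining it from scratch.
  (def swap-token-schema
    (mu/merge token-schema
              [:map
               [:fiat-value number?]
               [:selected? :boolean]]))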

The other benefit of spec'ing the app-db is that it would protect the event layer, which is where a lot of critical stuff happens and where our first mistakes/misconceptions are made.

Okay, but my prediction is that it doesn't matter where we start: if we start defining schemas for subscriptions, we will eventually define schemas for the app-db as well. I imagine this: let's say I spec a layer-3 sub, but then I want to spec its inputs. If I describe the inputs, I've already described the extractors' output. And if I've described the extractors' output, I've already described parts of the app-db. So in the end we will bubble up and start spec'ing the app-db anyway, so why not start there and reap more benefits?

My main concern is that if we don't use schemas to validate RPC endpoint responses, we will eventually have a lot of red warnings saying the data doesn't conform to the schemas, and devs will start to say schemas are not worth maintaining because they only give guarantees when running the entire application and manually interacting with it, which is true.

With quo components this problem is not that big of a deal, because the data arriving at most components is not directly tied to domain entities.

@ilmotta (Contributor) commented

The other benefit of schemas for the app-db is generating samples of valid data: for unit tests, for REPL experimentation, and for quickly seeing how the UI looks with some randomness while checking performance.

I had a lot of fun generating random notifications, which are quite large maps.
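
For instance, with malli's generator (the schema is a toy stand-in for the real notification shape):

  (ns repl.playground
    (:require [malli.generator :as mg]))

  (def notification-schema
    [:map
     [:id :uuid]
     [:read? :boolean]
     [:message [:string {:min 1 :max 140}]]])

  ;; Five random but schema-valid notifications, reproducible via the seed;
  ;; handy for unit tests, REPL sessions, or stressing the UI with data.
  (mg/sample notification-schema {:size 5 :seed 42})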

@ulisesmac (Contributor) commented

@clauxx wrote:

It's difficult to know what a token is, because it comes in different shapes depending on where you look. In one example I noticed, we're assoc-ing keys onto a token and immediately passing it to a util function which uses one of the new keys. This means it will only work on a subset of tokens, and it's not always clear whether the token you have will work with it. It would be great to have some primitive data structures for things like token, network, transaction, etc.

I'm afraid this leads to creating "objects" masked as maps.

This means it will only work on a subset of tokens, and it's not always clear whether the token you have will work with it.

To me, this problem comes from how the function is defined: instead of assoc-ing keys onto a token map, the function should have received them as separate parameters.

E.g.:

(defn process-token
  [token extra-key-1 extra-key-2 ...]
  ...)

It's important to keep naming consistent in our codebase. Since we are using a dynamically typed language, we should give proper names to our vars; we should only call things "token", "collectible", or "user" when we know that's what they actually are.

IMO, it's a matter of discipline in any dynamically typed language.

Just sharing my opinion.

@vkjr (Contributor, Author) commented Oct 9, 2024

@ulisesmac, thanks for sharing your opinion!
You wrote:

I'm afraid this leads to creating "objects" masked as maps.

Could you please elaborate a bit on what exactly concerns you? As I see it, @clauxx's point about having "some primitive data structures for things like token, network, transaction" is basically the same thought you express when saying we should call things "token", "collectible" or "user" only when we know that's what they actually are. So it is about maintaining a predictable shape of data as it is carried around.

And we definitely won't have encapsulation, inheritance or polymorphism here.

@ulisesmac
Copy link
Contributor

@vkjr Sure!

So it is about maintaining a predictable shape of data as it is carried around.

Yeah, @clauxx and I are on the same page about it.

And we definitely won't have encapsulation, inheritance or polymorphism here.

You're right. I know we'll also lack methods and state; I didn't mean to say that we'll literally do OOP in Clojure. I probably picked the wrong words.

To make this clear:

I'm afraid that if we strictly validate the shape of the data across our chain of functions, we may end up adding more abstractions beyond malli schemas.

Note: I'm not against validating the data, but I prefer to do it at the borders of the processing chain, e.g.

✅ We validate what we receive and what we send:

 (-> input
     (validation) ;; Validating the external input received
     (processing-1)
     (processing-2)
     (processing-3)
     ,,,
     (validation) ;; Optional, if we need to send this info to somewhere
     (sending))

❌ We internally validate each step

 (-> input
     (validation) ;; Validating the external input received
     (processing-1)
     (validation) ;; Validating that our internal functions return valid entities
     (processing-2)
     (validation)
     (processing-3)
     (validation)
     ,,,)
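
In practice, the border validation in the ✅ variant could use malli directly (all names illustrative):

  (ns wallet.ingest-sketch
    (:require [clojure.string :as string]
              [malli.core :as m]))

  (def tokens-response-schema
    [:vector [:map [:symbol :string]]])

  (defn handle-tokens-response
    [raw-tokens]
    ;; validate once, at the border where external data enters
    (when-not (m/validate tokens-response-schema raw-tokens)
      (throw (ex-info "Unexpected tokens payload"
                      {:explanation (m/explain tokens-response-schema
                                               raw-tokens)})))
    ;; internal processing steps run unvalidated, as argued above
    (->> raw-tokens
         (map #(update % :symbol string/upper-case))))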

And when I say "adding more abstractions", I imagine something like:

  1. We start creating schemas to validate a token, a collectible, a user, etc.
  2. We realize we don't want to write all the entity maps by hand, so it's easier to add functions such as create-token, create-user, etc. (they'd look like constructors).
  3. We start validating inputs and outputs of the functions processing these entities (not OOP, but it'd look like typing the functions).
  4. We restrict functions to receive and return certain entities, grouped into namespaces like token.utils, user.utils, etc. (it'd look like having static methods of a token or user class).

... And so on.

My main concern is: is this really needed? Do we really need/want to validate entities across our processing functions?
I've worked in Clojure/Script codebases before and I haven't had the need to validate data beyond the borders.

@clauxx said:

It's difficult to know what a token is

I know it's frustrating when naming discipline gets lost; I've dealt with it before. What I do is connect a REPL, inspect, and improve the code. It's not the best, I know, but I personally prefer that to "typing" Clojure.

@vkjr (Contributor, Author) commented Oct 10, 2024

@ulisesmac, I see your point, fair enough, and I'm totally on your side with this.
When I talk about validating some data structure, I mean making sure it contains all the data it is required to have, not ruling out the possibility that other data is added or modified along the way.
For example, in our codebase I'd be happy to see the data in the rf db validated, because if the data has the correct shape in the rf db, then it will most probably also be correct down the chain. And I agree that there is no need to validate at every step. You can add more validation to individual steps from the REPL during debugging and never let it into production.

As for having namespaces with functions that deal with specific structures: I think that's just convenient :) We have the controlled-input namespace that works that way, and I think it works well.

@Horupa-Olena commented

@vkjr Thanks for your PR. All looks great, you can merge it!

@vkjr (Contributor, Author) commented Oct 10, 2024

@Horupa-Olena, thanks for checking it! 🙏

@vkjr vkjr force-pushed the swap-fiat-value-bug branch from 6726721 to 0dca690 on October 10, 2024 10:00
@vkjr vkjr merged commit ad7d587 into develop Oct 10, 2024
6 checks passed
@vkjr vkjr deleted the swap-fiat-value-bug branch October 10, 2024 10:55