Fixed bug with fiat value displayed as 0 for some tokens in receive field on Swap Page #21402
Conversation
Force-pushed 43e9f67 to 96cf388
Jenkins Builds (4 older builds)
Thank you for the PR @vkjr
Looks good 👍
Thanks, looks good!

> But in the long term, such an issue raises a question: should we unify data structures and check their content, for example with Malli? Discussion is very welcome.
This is a very good point IMO. It's difficult to know what a `token` is, because it comes in different shapes depending on where you look. In one example I noticed, we're assoc-ing keys onto a `token` and immediately passing it to a util function, which uses one of the new keys. This means that it will only work on a subset of `tokens`, and it's not always clear whether the token you have will work with it. It would be great to have some primitive data structures for things like `token`, `network`, `transaction`, etc. Could also be a topic we can discuss on the next DX call.
Force-pushed 96cf388 to 6726721
100% of end-to-end tests have passed
Passed tests (7):
Class TestWalletOneDevice
Class TestCommunityMultipleDeviceMerged
Class TestWalletMultipleDevice
Class TestCommunityOneDeviceMerged
Class TestOneToOneChatMultipleSharedDevicesNewUi
@clauxx, my thoughts exactly! I already added it to the list of topics for discussion.
```clojure
(assoc :market-values
       (get market-values-per-token (:symbol token)))
(assoc :details (get details-per-token (:symbol token))))))
(let [token-symbol (keyword (:symbol token))
```
I agree with everybody mentioning that malli can be a powerful ally. But more important than that, to me, is that we are not following even the basic practice and guideline of testing non-trivial subs.
The whole graph of non-trivial subs should be tested with `deftest-sub`, so that changes in the db layer can be caught by different tests. The tests also provide an automatic runtime verification layer to make sure schemas are correct.
The tests would also provide live examples of data to play with in the REPL. Overall, my comment is that malli schemas can help, but it would be good to first sort out the basics.
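For reference, a minimal sketch of what such a test could look like, assuming the repo's `deftest-sub` helper from `test-helpers.unit`; the sub name, db path, and expected data here are illustrative, not the exact ones from this PR:

```clojure
(ns status-im.subs.wallet-test
  (:require [cljs.test :refer [is testing]]
            [re-frame.db :as rf-db]
            [test-helpers.unit :as h]
            [utils.re-frame :as rf]))

(h/deftest-sub :wallet/tokens-filtered
  [sub-name]
  (testing "returns only tokens whose symbol matches the query"
    ;; Seed the app-db with placeholder data, then exercise the sub.
    (swap! rf-db/app-db assoc-in [:wallet :tokens]
           [{:symbol "ETH"} {:symbol "SNT"}])
    (is (= [{:symbol "SNT"}]
           (rf/sub [sub-name "SNT"])))))
```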
@ilmotta, I fully agree with the point on testing.
One thing I can think of is that the artificial data we use as an rf db placeholder in tests can become outdated when the db structure is modified. If that happens, the subs tests will remain green even though the subs fail on a real rf db with the modified structure.
Wondering if we could consider having Malli schemas for the rf db content. If we had them, we could validate that the test data hasn't degraded. The data could also be generated automatically from the schemas when first writing a test, then modified for the test's purposes.
Wdyt?
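To illustrate the idea, a hypothetical sketch; the `Token` schema and its shape are made up for the example:

```clojure
(ns tests.db-fixtures
  (:require [malli.core :as m]
            [malli.generator :as mg]))

;; Hypothetical schema for one token entry in the app-db.
(def Token
  [:map
   [:symbol :string]
   [:market-values-per-currency
    [:map-of :keyword [:map [:price :double]]]]])

;; A hand-written fixture is validated, so tests fail loudly when the
;; db structure changes and the placeholder data goes stale.
(assert (m/validate Token
                    {:symbol "SNT"
                     :market-values-per-currency {:usd {:price 0.03}}}))

;; Or generate a starting fixture from the schema and tweak it by hand.
(def generated-token (mg/generate Token {:seed 1}))
```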
@vkjr @clauxx @seanstrom @ulisesmac I think it's about time we explore schemas for certain parts of the app-db. We've had malli in the repo for a long time already, and Malli is a proven solution in many Clojure projects. We don't need to use it for the entire db; I wouldn't, tbh.
For example, we might want to spec only the parts that change very often, to give us some protection in the middle of the battle. Or we might do the opposite to play it safe for now and only have schemas for the parts of the app-db that are stable and proven. Adding schemas later, when the implementation is done, is like adding tests later: their value is greatly diminished. So the time is ripe!
I think the only way to protect us against stale schemas and stale data is to have quick integration-level tests that call the RPC endpoints and validate the responses using the same schemas we use for unit tests and for development. If we connect all these dots, we get a powerful system with runtime validation.
On a tangent, I shared with @seanstrom these days a dirty proof of concept for generating malli schemas from any Go struct. This is 100% possible. There's a chance we can use go generate to create EDN files with the schemas, helping us in this endeavor to add schemas at the boundaries, which is where the Clojure masters recommend we focus.
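To make that last point concrete, a hypothetical go generate step could turn a Go struct into an EDN malli schema along these lines; everything here is invented for illustration:

```clojure
;; For a Go struct such as:
;;   type Token struct {
;;       Symbol   string `json:"symbol"`
;;       Decimals uint   `json:"decimals"`
;;   }
;; the generator could emit an EDN schema file like:
[:map
 [:symbol :string]
 [:decimals [:int {:min 0}]]]
```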
Validating certain inputs of the app-db sounds good, but it's probably not enough. Shouldn't we instead validate the subscriptions' output? IMO validating some basic subs would be better.
Again, I agree on not validating everything 👍
> Validating certain inputs of the app-db sounds good, but it's probably not enough.
@ulisesmac I see schemas for the app-db as having a positive, compounding effect on the rest of the system.
The first level is data arriving from status-go. That data usually ends up being written to the app-db and becomes the results of root subscriptions. If we nail this part correctly, we have a solid foundation to build the rest on. Layer-2 subs benefit immediately from schemas for the app-db. Layer-3 subs we need to categorize into at least two groups. The first-degree layer-3 subs (the ones using extractors) would also benefit from app-db schemas because their inputs would already be spec'ed. Many of these subs won't need new schemas for their output because many do simple transformations, like filtering.
The second-degree layer-3 subs (is there a better name for this?) tend to be the most complicated in the app; they are the ones whose inputs are other layer-3 subs. These subs would also benefit from schemas in the app-db. It's hard to explain, but here's my take: all the data from these subs is invariably derived from root subscriptions. If we have schemas for the app-db, we will be able to heavily reuse parts of those schemas when describing input signals and outputs. For example, a layer-3 sub might add 2 keys to a map, and we would be able to reuse a schema from the app-db by using malli's `:merge`; or sometimes layer-3 subs just group and filter the data, but the results are still the same "things" we spec'ed in the app-db. I see reusability everywhere.
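For example, a hypothetical sketch: `Token` stands for an assumed app-db schema, and `:merge` needs `malli.util`'s schemas in the registry:

```clojure
(ns wallet.schemas
  (:require [malli.core :as m]
            [malli.util :as mu]))

;; `:merge` lives in malli.util's schema registry.
(def registry (merge (m/default-schemas) (mu/schemas)))

(def Token
  [:map
   [:symbol :string]
   [:market-values-per-currency [:map-of :keyword [:map [:price :double]]]]])

;; A layer-3 sub that adds two derived keys can reuse the app-db schema
;; instead of re-describing the whole token.
(def TokenWithFiat
  (m/schema
   [:merge Token [:map
                  [:fiat-value :double]
                  [:price-change-pct :double]]]
   {:registry registry}))
```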
The other benefit of spec'ing the app-db is that it would protect the event layer, which is where a lot of critical stuff happens and where our first mistakes/misconceptions are made.
Okay, but my prediction is that it doesn't matter where we start: if we start to define schemas for subscriptions, we will invariably, eventually, define schemas for the app-db. I imagine this: let's say I spec a layer-3 sub, but then I want to spec its inputs. But if I describe the inputs, then I've already described the extractors' output. If I've described the extractors' output, then I've already described parts of the app-db. So in the end we will bubble up and start spec'ing the app-db anyway, so why not start from there and reap more benefits?
My main concern is that if we don't use schemas to validate RPC endpoint responses, we will eventually have a lot of red warnings saying the data doesn't conform to the schemas, and devs will start to say schemas are not worth maintaining because they only give guarantees when running the entire application and manually interacting with the app, which is true.
With quo components this problem is not that big of a deal, because the data arriving at most components is not directly tied to domain entities.
The other benefit of schemas for the app-db is generating samples of valid data: for unit tests, for REPL experimentation, and for quickly seeing how the UI looks with some randomness while checking performance.
I had a lot of fun generating random notifications, which are quite large maps.
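For example, a hypothetical snippet, assuming a `Notification` schema is defined somewhere:

```clojure
(require '[malli.generator :as mg])

;; Twenty random-but-valid notifications to feed the UI from the REPL.
(mg/sample Notification {:size 20 :seed 42})
```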
I'm afraid this leads to creating "objects" masked as maps.
To me, this problem comes from how the function is defined: instead of assoc-ing keys onto a token map, the function should have received them as separate parameters, e.g. the sketch below.
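Something along these lines; the names and shapes are made up for illustration:

```clojure
;; Before: the helper relies on keys that were assoc'ed onto the token
;; right before the call, so it only works on "enriched" tokens.
(defn fiat-value [token]
  (* (:balance token)
     (get-in token [:market-values :price])))

;; After: the extra data is an explicit parameter, so the helper works
;; for any token shape.
(defn fiat-value [token market-values]
  (* (:balance token) (:price market-values)))
```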
It's important to keep the consistency of names in our codebase. Since we are using a dynamically typed language, we should give proper names to our vars: we should only call things "token", "collectible", or "user" when we know that's what they actually are. IMO it's a matter of discipline in any dynamically typed language. Just sharing my opinion.
@ulisesmac, thanks for sharing your opinion!
Could you please elaborate a bit on what exactly concerns you? Because as I see it, @clauxx's point of view of having "some primitive data structures for things like token, network, transaction" is basically the same thought you express when saying we should call things "token", "collectible", or "user" only when we know that's what they actually are. So it is about maintaining a predictable shape for data as it is carried around. And we definitely won't have encapsulation, inheritance, or polymorphism here.
@vkjr Sure!
Yeah, @clauxx and I are on the same page about it.
You're right, I know we'll also lack methods and state; I didn't mean that we'll literally do OOP in Clojure. I probably picked the wrong words. To make this clear: I'm afraid that if we strictly validate the shape of the data across our chain of functions, we may end up adding more abstractions besides malli schemas. Note: I'm not against validating the data, but I prefer to do it at the borders of the processing chain, e.g.

✅ We validate what we receive and what we send:

```clojure
(-> input
    (validation) ;; Validating the external input received
    (processing-1)
    (processing-2)
    (processing-3)
    ,,,
    (validation) ;; Optional, if we need to send this info somewhere
    (sending))
```

❌ We internally validate each step:

```clojure
(-> input
    (validation) ;; Validating the external input received
    (processing-1)
    (validation) ;; Validating that our internal functions return valid entities
    (processing-2)
    (validation)
    (processing-3)
    (validation)
    ,,,)
```

And when I say "adding more abstractions" I imagine something like:

...

And so on. My main concern is: is this really needed? Do we really need/want to validate entities across our processing functions?

@clauxx said:
> It's difficult to know what a `token` is, because it comes in different shapes depending on where you look.

I know it's frustrating that the naming discipline sometimes gets lost; I've dealt with it before. What I do is connect a REPL, inspect, and improve the code. It's not the best, I know, but I personally prefer that over "typing" Clojure.
@ulisesmac, I see your point, fair enough. And I'm totally on your side with this. As for having namespaces with functions that deal with specific structures, I think it's just convenient :) We have the `controlled-input` namespace that works that way, and I think it works well. The sketch below gives the flavor.
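A minimal sketch of that pattern; the namespace and function names here are illustrative, not the real `controlled-input` API:

```clojure
;; One namespace owns one data structure; callers go through its
;; functions instead of reaching into the map directly.
(ns utils.amount-input-sketch)

(defn init-state [] {:value "" :error nil})

(defn set-value [state v]
  (assoc state
         :value v
         :error (when-not (re-matches #"\d*\.?\d*" v) :invalid-number)))

(defn valid? [state] (nil? (:error state)))
```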
@vkjr Thanks for your PR. All looks great, you can merge it!
@Horupa-Olena, thanks for checking it! 🙏
Force-pushed 6726721 to 0dca690
fixes #21387
Summary
Fiat value is now calculated for any asset selected as a swap target.
Screen.Recording.2024-10-08.at.17.35.23.mov
Review notes
The underlying reason for the problem is a discrepancy in the `token` data structure between different scenarios. When we select from the list of assets, the `asset-list/view` component is displayed. That component takes tokens from different sources: it creates a list of "my tokens" taken from the `:wallet/accounts` key of the re-frame db, and a list of "popular tokens" received from the `:wallet/tokens` key via the `:wallet/tokens-filtered` subscription.
The fiat price calculation assumes that the key `:market-values-per-currency` exists in every token data structure. It exists for "my tokens" but not for "popular tokens".
As a quick and dirty pre-release fix, I added the missing key to the token maps returned for "popular tokens", as sketched below.
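A sketch of the shape of the fix, mirroring the diff excerpt earlier in the review; the helper name is invented and the exact keys may differ from the merged code:

```clojure
;; Enrich each "popular token" with the market data the fiat
;; calculation expects, looked up by the token's symbol.
(defn enrich-token
  [market-values-per-token details-per-token token]
  (let [token-symbol (keyword (:symbol token))]
    (-> token
        (assoc :market-values (get market-values-per-token token-symbol))
        (assoc :details (get details-per-token token-symbol)))))
```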
But in the long term, such an issue raises a question: should we unify data structures and check their content, for example with Malli? Discussion is very welcome.
Platforms
Areas that may be impacted
Functional
Steps to test
status: ready